00:00:00.000 Started by upstream project "autotest-per-patch" build number 132689
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.016 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:02.706 The recommended git tool is: git
00:00:02.706 using credential 00000000-0000-0000-0000-000000000002
00:00:02.708 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:02.721 Fetching changes from the remote Git repository
00:00:02.725 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:02.737 Using shallow fetch with depth 1
00:00:02.737 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:02.737 > git --version # timeout=10
00:00:02.747 > git --version # 'git version 2.39.2'
00:00:02.747 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:02.761 Setting http proxy: proxy-dmz.intel.com:911
00:00:02.761 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:08.807 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:08.818 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:08.832 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:08.832 > git config core.sparsecheckout # timeout=10
00:00:08.844 > git read-tree -mu HEAD # timeout=10
00:00:08.861 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:08.886 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:08.886 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:08.969 [Pipeline] Start of Pipeline
00:00:08.980 [Pipeline] library
00:00:08.981 Loading library shm_lib@master
00:00:08.981 Library shm_lib@master is cached. Copying from home.
00:00:08.996 [Pipeline] node
00:00:09.002 Running on WFP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:09.003 [Pipeline] {
00:00:09.010 [Pipeline] catchError
00:00:09.011 [Pipeline] {
00:00:09.020 [Pipeline] wrap
00:00:09.027 [Pipeline] {
00:00:09.034 [Pipeline] stage
00:00:09.035 [Pipeline] { (Prologue)
00:00:09.263 [Pipeline] sh
00:00:09.555 + logger -p user.info -t JENKINS-CI
00:00:09.571 [Pipeline] echo
00:00:09.572 Node: WFP6
00:00:09.579 [Pipeline] sh
00:00:09.875 [Pipeline] setCustomBuildProperty
00:00:09.885 [Pipeline] echo
00:00:09.886 Cleanup processes
00:00:09.891 [Pipeline] sh
00:00:10.173 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:10.173 4003367 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:10.194 [Pipeline] sh
00:00:10.561 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:10.561 ++ grep -v 'sudo pgrep'
00:00:10.561 ++ awk '{print $1}'
00:00:10.561 + sudo kill -9
00:00:10.561 + true
00:00:10.575 [Pipeline] cleanWs
00:00:10.584 [WS-CLEANUP] Deleting project workspace...
00:00:10.584 [WS-CLEANUP] Deferred wipeout is used...
00:00:10.590 [WS-CLEANUP] done
00:00:10.594 [Pipeline] setCustomBuildProperty
00:00:10.606 [Pipeline] sh
00:00:10.882 + sudo git config --global --replace-all safe.directory '*'
00:00:11.107 [Pipeline] httpRequest
00:00:11.416 [Pipeline] echo
00:00:11.418 Sorcerer 10.211.164.20 is alive
00:00:11.423 [Pipeline] retry
00:00:11.425 [Pipeline] {
00:00:11.435 [Pipeline] httpRequest
00:00:11.438 HttpMethod: GET
00:00:11.439 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:11.439 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:11.446 Response Code: HTTP/1.1 200 OK
00:00:11.446 Success: Status code 200 is in the accepted range: 200,404
00:00:11.446 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:40.538 [Pipeline] }
00:00:40.556 [Pipeline] // retry
00:00:40.564 [Pipeline] sh
00:00:40.849 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:40.866 [Pipeline] httpRequest
00:00:41.172 [Pipeline] echo
00:00:41.174 Sorcerer 10.211.164.20 is alive
00:00:41.182 [Pipeline] retry
00:00:41.184 [Pipeline] {
00:00:41.196 [Pipeline] httpRequest
00:00:41.200 HttpMethod: GET
00:00:41.201 URL: http://10.211.164.20/packages/spdk_b7fa4c06bd1843cc6c333f8a2318c3417104a3ac.tar.gz
00:00:41.201 Sending request to url: http://10.211.164.20/packages/spdk_b7fa4c06bd1843cc6c333f8a2318c3417104a3ac.tar.gz
00:00:41.208 Response Code: HTTP/1.1 200 OK
00:00:41.208 Success: Status code 200 is in the accepted range: 200,404
00:00:41.209 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_b7fa4c06bd1843cc6c333f8a2318c3417104a3ac.tar.gz
00:05:40.997 [Pipeline] }
00:05:41.018 [Pipeline] // retry
00:05:41.027 [Pipeline] sh
00:05:41.316 + tar --no-same-owner -xf spdk_b7fa4c06bd1843cc6c333f8a2318c3417104a3ac.tar.gz
00:05:43.864 [Pipeline] sh
00:05:44.147 + git -C spdk log --oneline -n5
00:05:44.147 b7fa4c06b test/nvmf: Solve ambiguity around $NVMF_SECOND_TARGET_IP
00:05:44.147 0f9fa3b10 test/nvmf: Don't pin nvmf_bdevperf and nvmf_target_disconnect to phy
00:05:44.147 e3d2a3a6f test/nvmf: Remove all transport conditions from the test suites
00:05:44.147 3a4e432ea test/nvmf: Drop $RDMA_IP_LIST
00:05:44.147 688351e0e test/nvmf: Drop $NVMF_INITIATOR_IP in favor of $NVMF_FIRST_INITIATOR_IP
00:05:44.158 [Pipeline] }
00:05:44.173 [Pipeline] // stage
00:05:44.182 [Pipeline] stage
00:05:44.184 [Pipeline] { (Prepare)
00:05:44.200 [Pipeline] writeFile
00:05:44.217 [Pipeline] sh
00:05:44.502 + logger -p user.info -t JENKINS-CI
00:05:44.515 [Pipeline] sh
00:05:44.799 + logger -p user.info -t JENKINS-CI
00:05:44.811 [Pipeline] sh
00:05:45.090 + cat autorun-spdk.conf
00:05:45.091 SPDK_RUN_FUNCTIONAL_TEST=1
00:05:45.091 SPDK_TEST_NVMF=1
00:05:45.091 SPDK_TEST_NVME_CLI=1
00:05:45.091 SPDK_TEST_NVMF_TRANSPORT=tcp
00:05:45.091 SPDK_TEST_NVMF_NICS=e810
00:05:45.091 SPDK_TEST_VFIOUSER=1
00:05:45.091 SPDK_RUN_UBSAN=1
00:05:45.091 NET_TYPE=phy
00:05:45.097 RUN_NIGHTLY=0
00:05:45.105 [Pipeline] readFile
00:05:45.164 [Pipeline] withEnv
00:05:45.167 [Pipeline] {
00:05:45.180 [Pipeline] sh
00:05:45.466 + set -ex
00:05:45.466 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:05:45.466 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:05:45.466 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:05:45.466 ++ SPDK_TEST_NVMF=1
00:05:45.466 ++ SPDK_TEST_NVME_CLI=1
00:05:45.466 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:05:45.466 ++ SPDK_TEST_NVMF_NICS=e810
00:05:45.466 ++ SPDK_TEST_VFIOUSER=1
00:05:45.466 ++ SPDK_RUN_UBSAN=1
00:05:45.466 ++ NET_TYPE=phy
00:05:45.466 ++ RUN_NIGHTLY=0
00:05:45.466 + case $SPDK_TEST_NVMF_NICS in
00:05:45.466 + DRIVERS=ice
00:05:45.466 + [[ tcp == \r\d\m\a ]]
00:05:45.466 + [[ -n ice ]]
00:05:45.466 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:05:45.466 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:05:52.036 rmmod: ERROR: Module irdma is not currently loaded
00:05:52.036 rmmod: ERROR: Module i40iw is not currently loaded
00:05:52.036 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:05:52.036 + true
00:05:52.036 + for D in $DRIVERS
00:05:52.036 + sudo modprobe ice
00:05:52.036 + exit 0
00:05:52.045 [Pipeline] }
00:05:52.059 [Pipeline] // withEnv
00:05:52.064 [Pipeline] }
00:05:52.079 [Pipeline] // stage
00:05:52.088 [Pipeline] catchError
00:05:52.089 [Pipeline] {
00:05:52.102 [Pipeline] timeout
00:05:52.103 Timeout set to expire in 1 hr 0 min
00:05:52.104 [Pipeline] {
00:05:52.118 [Pipeline] stage
00:05:52.120 [Pipeline] { (Tests)
00:05:52.133 [Pipeline] sh
00:05:52.419 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:05:52.419 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:05:52.419 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:05:52.419 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:05:52.419 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:52.419 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:05:52.419 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:05:52.419 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:05:52.419 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:05:52.419 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:05:52.419 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:05:52.419 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:05:52.419 + source /etc/os-release
00:05:52.419 ++ NAME='Fedora Linux'
00:05:52.419 ++ VERSION='39 (Cloud Edition)'
00:05:52.419 ++ ID=fedora
00:05:52.419 ++ VERSION_ID=39
00:05:52.419 ++ VERSION_CODENAME=
00:05:52.419 ++ PLATFORM_ID=platform:f39
00:05:52.419 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:05:52.419 ++ ANSI_COLOR='0;38;2;60;110;180'
00:05:52.419 ++ LOGO=fedora-logo-icon
00:05:52.419 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:05:52.419 ++ HOME_URL=https://fedoraproject.org/
00:05:52.419 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:05:52.419 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:05:52.419 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:05:52.419 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:05:52.419 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:05:52.419 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:05:52.419 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:05:52.419 ++ SUPPORT_END=2024-11-12
00:05:52.419 ++ VARIANT='Cloud Edition'
00:05:52.419 ++ VARIANT_ID=cloud
00:05:52.419 + uname -a
00:05:52.419 Linux spdk-wfp-06 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:05:52.419 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:05:54.959 Hugepages
00:05:54.959 node hugesize free / total
00:05:54.959 node0 1048576kB 0 / 0
00:05:54.959 node0 2048kB 0 / 0
00:05:54.959 node1 1048576kB 0 / 0
00:05:54.959 node1 2048kB 0 / 0
00:05:54.959
00:05:54.959 Type BDF Vendor Device NUMA Driver Device Block devices
00:05:54.959 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:05:54.959 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:05:54.959 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:05:54.959 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:05:54.959 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:05:54.959 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:05:54.959 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:05:54.959 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:05:54.959 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:05:54.959 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:05:54.959 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:05:54.959 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:05:54.959 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:05:54.959 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:05:54.959 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:05:54.959 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:05:54.959 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:05:54.959 + rm -f /tmp/spdk-ld-path
00:05:54.959 + source autorun-spdk.conf
00:05:54.959 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:05:54.959 ++ SPDK_TEST_NVMF=1
00:05:54.959 ++ SPDK_TEST_NVME_CLI=1
00:05:54.959 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:05:54.959 ++ SPDK_TEST_NVMF_NICS=e810
00:05:54.959 ++ SPDK_TEST_VFIOUSER=1
00:05:54.959 ++ SPDK_RUN_UBSAN=1
00:05:54.959 ++ NET_TYPE=phy
00:05:54.959 ++ RUN_NIGHTLY=0
00:05:54.959 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:05:54.959 + [[ -n '' ]]
00:05:54.959 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:54.959 + for M in /var/spdk/build-*-manifest.txt
00:05:54.959 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:05:54.959 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:05:54.959 + for M in /var/spdk/build-*-manifest.txt
00:05:54.959 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:05:54.959 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:05:54.959 + for M in /var/spdk/build-*-manifest.txt
00:05:54.959 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:05:54.959 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:05:54.959 ++ uname
00:05:54.959 + [[ Linux == \L\i\n\u\x ]]
00:05:54.959 + sudo dmesg -T
00:05:55.218 + sudo dmesg --clear
00:05:55.218 + dmesg_pid=4005376
00:05:55.218 + [[ Fedora Linux == FreeBSD ]]
00:05:55.218 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:05:55.218 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:05:55.218 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:05:55.218 + [[ -x /usr/src/fio-static/fio ]]
00:05:55.218 + export FIO_BIN=/usr/src/fio-static/fio
00:05:55.218 + FIO_BIN=/usr/src/fio-static/fio
00:05:55.218 + sudo dmesg -Tw
00:05:55.218 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:05:55.218 + [[ ! -v VFIO_QEMU_BIN ]]
00:05:55.218 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:05:55.218 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:05:55.218 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:05:55.218 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:05:55.218 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:05:55.218 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:05:55.218 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:05:55.218 11:49:29 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:05:55.218 11:49:29 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:05:55.218 11:49:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:05:55.218 11:49:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:05:55.218 11:49:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:05:55.218 11:49:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:05:55.219 11:49:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:05:55.219 11:49:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:05:55.219 11:49:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:05:55.219 11:49:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:05:55.219 11:49:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:05:55.219 11:49:29 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:05:55.219 11:49:29 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:05:55.219 11:49:29 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:05:55.219 11:49:29 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:05:55.219 11:49:29 -- scripts/common.sh@15 -- $ shopt -s extglob
00:05:55.219 11:49:29 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:05:55.219 11:49:29 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:55.219 11:49:29 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:55.219 11:49:29 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:55.219 11:49:29 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:55.219 11:49:29 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:55.219 11:49:29 -- paths/export.sh@5 -- $ export PATH
00:05:55.219 11:49:29 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:55.219 11:49:29 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:05:55.219 11:49:29 -- common/autobuild_common.sh@493 -- $ date +%s
00:05:55.219 11:49:29 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733395769.XXXXXX
00:05:55.219 11:49:29 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733395769.NaLYWt
00:05:55.219 11:49:29 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:05:55.219 11:49:29 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:05:55.219 11:49:29 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:05:55.219 11:49:29 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:05:55.219 11:49:29 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:05:55.219 11:49:29 -- common/autobuild_common.sh@509 -- $ get_config_params
00:05:55.219 11:49:29 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:05:55.219 11:49:29 -- common/autotest_common.sh@10 -- $ set +x
00:05:55.219 11:49:29 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:05:55.219 11:49:29 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:05:55.219 11:49:29 -- pm/common@17 -- $ local monitor
00:05:55.219 11:49:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:55.219 11:49:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:55.219 11:49:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:55.219 11:49:29 -- pm/common@21 -- $ date +%s
00:05:55.219 11:49:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:55.219 11:49:29 -- pm/common@21 -- $ date +%s
00:05:55.219 11:49:29 -- pm/common@25 -- $ sleep 1
00:05:55.219 11:49:29 -- pm/common@21 -- $ date +%s
00:05:55.219 11:49:29 -- pm/common@21 -- $ date +%s
00:05:55.219 11:49:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733395769
00:05:55.219 11:49:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733395769
00:05:55.219 11:49:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733395769
00:05:55.219 11:49:29 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733395769
00:05:55.479 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733395769_collect-vmstat.pm.log
00:05:55.479 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733395769_collect-cpu-load.pm.log
00:05:55.479 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733395769_collect-cpu-temp.pm.log
00:05:55.479 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733395769_collect-bmc-pm.bmc.pm.log
00:05:56.418 11:49:30 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:05:56.418 11:49:30 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:05:56.418 11:49:30 -- spdk/autobuild.sh@12 -- $ umask 022
00:05:56.418 11:49:30 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:56.418 11:49:30 -- spdk/autobuild.sh@16 -- $ date -u
00:05:56.418 Thu Dec 5 10:49:30 AM UTC 2024
00:05:56.418 11:49:30 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:05:56.419 v25.01-pre-303-gb7fa4c06b
00:05:56.419 11:49:30 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:05:56.419 11:49:30 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:05:56.419 11:49:30 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:05:56.419 11:49:30 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:05:56.419 11:49:30 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:05:56.419 11:49:30 -- common/autotest_common.sh@10 -- $ set +x
00:05:56.419 ************************************
00:05:56.419 START TEST ubsan
00:05:56.419 ************************************
00:05:56.419 11:49:30 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:05:56.419 using ubsan
00:05:56.419
00:05:56.419 real 0m0.000s
00:05:56.419 user 0m0.000s
00:05:56.419 sys 0m0.000s
00:05:56.419 11:49:30 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:05:56.419 11:49:30 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:05:56.419 ************************************
00:05:56.419 END TEST ubsan
00:05:56.419 ************************************
00:05:56.419 11:49:30 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:05:56.419 11:49:30 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:05:56.419 11:49:30 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:05:56.419 11:49:30 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:05:56.419 11:49:30 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:05:56.419 11:49:30 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:05:56.419 11:49:30 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:05:56.419 11:49:30 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:05:56.419 11:49:30 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:05:56.678 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:05:56.678 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:05:56.937 Using 'verbs' RDMA provider
00:06:10.210 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:06:22.416 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:06:22.416 Creating mk/config.mk...done.
00:06:22.416 Creating mk/cc.flags.mk...done.
00:06:22.416 Type 'make' to build.
00:06:22.416 11:49:55 -- spdk/autobuild.sh@70 -- $ run_test make make -j96
00:06:22.416 11:49:55 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:06:22.416 11:49:55 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:06:22.416 11:49:55 -- common/autotest_common.sh@10 -- $ set +x
00:06:22.416 ************************************
00:06:22.416 START TEST make
00:06:22.416 ************************************
00:06:22.416 11:49:55 make -- common/autotest_common.sh@1129 -- $ make -j96
00:06:22.416 make[1]: Nothing to be done for 'all'.
00:06:23.795 The Meson build system
00:06:23.795 Version: 1.5.0
00:06:23.795 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:06:23.795 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:06:23.795 Build type: native build
00:06:23.795 Project name: libvfio-user
00:06:23.795 Project version: 0.0.1
00:06:23.795 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:06:23.795 C linker for the host machine: cc ld.bfd 2.40-14
00:06:23.795 Host machine cpu family: x86_64
00:06:23.795 Host machine cpu: x86_64
00:06:23.795 Run-time dependency threads found: YES
00:06:23.795 Library dl found: YES
00:06:23.795 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:06:23.795 Run-time dependency json-c found: YES 0.17
00:06:23.795 Run-time dependency cmocka found: YES 1.1.7
00:06:23.795 Program pytest-3 found: NO
00:06:23.795 Program flake8 found: NO
00:06:23.795 Program misspell-fixer found: NO
00:06:23.795 Program restructuredtext-lint found: NO
00:06:23.795 Program valgrind found: YES (/usr/bin/valgrind)
00:06:23.795 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:06:23.795 Compiler for C supports arguments -Wmissing-declarations: YES
00:06:23.795 Compiler for C supports arguments -Wwrite-strings: YES
00:06:23.795 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:06:23.795 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:06:23.795 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:06:23.795 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:06:23.795 Build targets in project: 8
00:06:23.795 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:06:23.795 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:06:23.795
00:06:23.795 libvfio-user 0.0.1
00:06:23.795
00:06:23.795 User defined options
00:06:23.795 buildtype : debug
00:06:23.795 default_library: shared
00:06:23.795 libdir : /usr/local/lib
00:06:23.795
00:06:23.795 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:06:24.053 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:06:24.053 [1/37] Compiling C object samples/lspci.p/lspci.c.o
00:06:24.311 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:06:24.311 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:06:24.311 [4/37] Compiling C object samples/null.p/null.c.o
00:06:24.311 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:06:24.311 [6/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:06:24.311 [7/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:06:24.311 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:06:24.311 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:06:24.311 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:06:24.311 [11/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:06:24.311 [12/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:06:24.311 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:06:24.311 [14/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:06:24.311 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:06:24.311 [16/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:06:24.311 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:06:24.311 [18/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:06:24.311 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:06:24.311 [20/37] Compiling C object test/unit_tests.p/mocks.c.o
00:06:24.311 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:06:24.311 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:06:24.311 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:06:24.311 [24/37] Compiling C object samples/server.p/server.c.o
00:06:24.311 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:06:24.311 [26/37] Compiling C object samples/client.p/client.c.o
00:06:24.311 [27/37] Linking target samples/client
00:06:24.311 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:06:24.311 [29/37] Linking target test/unit_tests
00:06:24.311 [30/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:06:24.570 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:06:24.570 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:06:24.570 [33/37] Linking target samples/gpio-pci-idio-16
00:06:24.570 [34/37] Linking target samples/null
00:06:24.570 [35/37] Linking target samples/server
00:06:24.570 [36/37] Linking target samples/shadow_ioeventfd_server
00:06:24.570 [37/37] Linking target samples/lspci
00:06:24.570 INFO: autodetecting backend as ninja
00:06:24.570 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:06:24.828 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:06:25.086 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:06:25.086 ninja: no work to do.
00:06:30.356 The Meson build system
00:06:30.356 Version: 1.5.0
00:06:30.356 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:06:30.356 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:06:30.356 Build type: native build
00:06:30.356 Program cat found: YES (/usr/bin/cat)
00:06:30.356 Project name: DPDK
00:06:30.356 Project version: 24.03.0
00:06:30.356 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:06:30.356 C linker for the host machine: cc ld.bfd 2.40-14
00:06:30.356 Host machine cpu family: x86_64
00:06:30.356 Host machine cpu: x86_64
00:06:30.356 Message: ## Building in Developer Mode ##
00:06:30.356 Program pkg-config found: YES (/usr/bin/pkg-config)
00:06:30.356 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:06:30.356 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:06:30.356 Program python3 found: YES (/usr/bin/python3)
00:06:30.356 Program cat found: YES (/usr/bin/cat)
00:06:30.356 Compiler for C supports arguments -march=native: YES
00:06:30.356 Checking for size of "void *" : 8
00:06:30.356 Checking for size of "void *" : 8 (cached)
00:06:30.356 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:06:30.356 Library m found: YES
00:06:30.356 Library numa found: YES
00:06:30.357 Has header "numaif.h" : YES
00:06:30.357 Library fdt found: NO
00:06:30.357 Library execinfo found: NO
00:06:30.357 Has header "execinfo.h" : YES
00:06:30.357 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:06:30.357 Run-time dependency libarchive found: NO (tried pkgconfig)
00:06:30.357 Run-time dependency libbsd found: NO (tried pkgconfig)
00:06:30.357 Run-time dependency jansson found: NO (tried pkgconfig)
00:06:30.357 Run-time dependency openssl found: YES 3.1.1
00:06:30.357 Run-time dependency libpcap found: YES 1.10.4
00:06:30.357 Has header "pcap.h" with dependency libpcap: YES
00:06:30.357 Compiler for C supports arguments -Wcast-qual: YES
00:06:30.357 Compiler for C supports arguments -Wdeprecated: YES
00:06:30.357 Compiler for C supports arguments -Wformat: YES
00:06:30.357 Compiler for C supports arguments -Wformat-nonliteral: NO
00:06:30.357 Compiler for C supports arguments -Wformat-security: NO
00:06:30.357 Compiler for C supports arguments -Wmissing-declarations: YES
00:06:30.357 Compiler for C supports arguments -Wmissing-prototypes: YES
00:06:30.357 Compiler for C supports arguments -Wnested-externs: YES
00:06:30.357 Compiler for C supports arguments -Wold-style-definition: YES
00:06:30.357 Compiler for C supports arguments -Wpointer-arith: YES
00:06:30.357 Compiler for C supports arguments -Wsign-compare: YES
00:06:30.357 Compiler for C supports arguments -Wstrict-prototypes: YES
00:06:30.357 Compiler for C supports arguments -Wundef: YES
00:06:30.357 Compiler for C supports arguments -Wwrite-strings: YES
00:06:30.357 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:06:30.357 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:06:30.357 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:06:30.357 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:06:30.357 Program objdump found: YES (/usr/bin/objdump)
00:06:30.357 Compiler for C supports arguments -mavx512f: YES
00:06:30.357 Checking if "AVX512 checking" compiles: YES
00:06:30.357 Fetching value of define "__SSE4_2__" : 1
00:06:30.357 Fetching value of define "__AES__" : 1
00:06:30.357 Fetching value of define "__AVX__" : 1
00:06:30.357 Fetching value of define "__AVX2__" : 1
00:06:30.357 Fetching value of define "__AVX512BW__" : 1
00:06:30.357 Fetching value of define "__AVX512CD__" : 1
00:06:30.357 Fetching value of define "__AVX512DQ__" : 1
00:06:30.357 Fetching value of define "__AVX512F__" : 1
00:06:30.357 Fetching value of define "__AVX512VL__" : 1
00:06:30.357 Fetching value of define "__PCLMUL__" : 1
00:06:30.357 Fetching value of define "__RDRND__" : 1
00:06:30.357 Fetching value of define "__RDSEED__" : 1
00:06:30.357 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:06:30.357 Fetching value of define "__znver1__" : (undefined)
00:06:30.357 Fetching value of define "__znver2__" : (undefined)
00:06:30.357 Fetching value of define "__znver3__" : (undefined)
00:06:30.357 Fetching value of define "__znver4__" : (undefined)
00:06:30.357 Compiler for C supports arguments -Wno-format-truncation: YES
00:06:30.357 Message: lib/log: Defining dependency "log"
00:06:30.357 Message: lib/kvargs: Defining dependency "kvargs"
00:06:30.357 Message: lib/telemetry: Defining dependency "telemetry"
00:06:30.357 Checking for function "getentropy" : NO
00:06:30.357 Message: lib/eal: Defining dependency "eal"
00:06:30.357 Message: lib/ring: Defining dependency "ring"
00:06:30.357 Message: lib/rcu: Defining dependency "rcu"
00:06:30.357 Message: lib/mempool: Defining dependency "mempool"
00:06:30.357 Message: lib/mbuf: Defining dependency "mbuf"
00:06:30.357 Fetching value of define "__PCLMUL__" : 1 (cached)
00:06:30.357 Fetching value of define "__AVX512F__" : 1 (cached)
00:06:30.357 Fetching value of define "__AVX512BW__" : 1 (cached)
00:06:30.357 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:06:30.357 Fetching value of define "__AVX512VL__" : 1 (cached)
00:06:30.357 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:06:30.357 Compiler for C supports arguments -mpclmul: YES
00:06:30.357 Compiler for C supports arguments -maes: YES
00:06:30.357 Compiler for C supports arguments -mavx512f: YES (cached)
00:06:30.357 Compiler for C supports arguments -mavx512bw: YES
00:06:30.357 Compiler for C supports arguments -mavx512dq: YES
00:06:30.357 Compiler for C supports arguments -mavx512vl: YES
00:06:30.357 Compiler for C supports arguments
-mvpclmulqdq: YES 00:06:30.357 Compiler for C supports arguments -mavx2: YES 00:06:30.357 Compiler for C supports arguments -mavx: YES 00:06:30.357 Message: lib/net: Defining dependency "net" 00:06:30.357 Message: lib/meter: Defining dependency "meter" 00:06:30.357 Message: lib/ethdev: Defining dependency "ethdev" 00:06:30.357 Message: lib/pci: Defining dependency "pci" 00:06:30.357 Message: lib/cmdline: Defining dependency "cmdline" 00:06:30.357 Message: lib/hash: Defining dependency "hash" 00:06:30.357 Message: lib/timer: Defining dependency "timer" 00:06:30.357 Message: lib/compressdev: Defining dependency "compressdev" 00:06:30.357 Message: lib/cryptodev: Defining dependency "cryptodev" 00:06:30.357 Message: lib/dmadev: Defining dependency "dmadev" 00:06:30.357 Compiler for C supports arguments -Wno-cast-qual: YES 00:06:30.357 Message: lib/power: Defining dependency "power" 00:06:30.357 Message: lib/reorder: Defining dependency "reorder" 00:06:30.357 Message: lib/security: Defining dependency "security" 00:06:30.357 Has header "linux/userfaultfd.h" : YES 00:06:30.357 Has header "linux/vduse.h" : YES 00:06:30.357 Message: lib/vhost: Defining dependency "vhost" 00:06:30.357 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:06:30.357 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:06:30.357 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:06:30.357 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:06:30.357 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:06:30.357 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:06:30.357 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:06:30.357 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:06:30.357 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:06:30.357 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 
00:06:30.357 Program doxygen found: YES (/usr/local/bin/doxygen) 00:06:30.357 Configuring doxy-api-html.conf using configuration 00:06:30.357 Configuring doxy-api-man.conf using configuration 00:06:30.357 Program mandb found: YES (/usr/bin/mandb) 00:06:30.357 Program sphinx-build found: NO 00:06:30.357 Configuring rte_build_config.h using configuration 00:06:30.357 Message: 00:06:30.357 ================= 00:06:30.357 Applications Enabled 00:06:30.357 ================= 00:06:30.357 00:06:30.357 apps: 00:06:30.357 00:06:30.357 00:06:30.357 Message: 00:06:30.357 ================= 00:06:30.357 Libraries Enabled 00:06:30.357 ================= 00:06:30.357 00:06:30.357 libs: 00:06:30.357 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:06:30.357 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:06:30.357 cryptodev, dmadev, power, reorder, security, vhost, 00:06:30.357 00:06:30.357 Message: 00:06:30.357 =============== 00:06:30.357 Drivers Enabled 00:06:30.357 =============== 00:06:30.357 00:06:30.357 common: 00:06:30.357 00:06:30.357 bus: 00:06:30.357 pci, vdev, 00:06:30.357 mempool: 00:06:30.357 ring, 00:06:30.357 dma: 00:06:30.357 00:06:30.357 net: 00:06:30.357 00:06:30.357 crypto: 00:06:30.357 00:06:30.357 compress: 00:06:30.357 00:06:30.357 vdpa: 00:06:30.357 00:06:30.357 00:06:30.357 Message: 00:06:30.357 ================= 00:06:30.357 Content Skipped 00:06:30.357 ================= 00:06:30.357 00:06:30.357 apps: 00:06:30.357 dumpcap: explicitly disabled via build config 00:06:30.357 graph: explicitly disabled via build config 00:06:30.357 pdump: explicitly disabled via build config 00:06:30.357 proc-info: explicitly disabled via build config 00:06:30.357 test-acl: explicitly disabled via build config 00:06:30.357 test-bbdev: explicitly disabled via build config 00:06:30.357 test-cmdline: explicitly disabled via build config 00:06:30.357 test-compress-perf: explicitly disabled via build config 00:06:30.357 test-crypto-perf: explicitly disabled 
via build config 00:06:30.357 test-dma-perf: explicitly disabled via build config 00:06:30.357 test-eventdev: explicitly disabled via build config 00:06:30.357 test-fib: explicitly disabled via build config 00:06:30.357 test-flow-perf: explicitly disabled via build config 00:06:30.357 test-gpudev: explicitly disabled via build config 00:06:30.357 test-mldev: explicitly disabled via build config 00:06:30.357 test-pipeline: explicitly disabled via build config 00:06:30.357 test-pmd: explicitly disabled via build config 00:06:30.357 test-regex: explicitly disabled via build config 00:06:30.357 test-sad: explicitly disabled via build config 00:06:30.357 test-security-perf: explicitly disabled via build config 00:06:30.357 00:06:30.357 libs: 00:06:30.357 argparse: explicitly disabled via build config 00:06:30.357 metrics: explicitly disabled via build config 00:06:30.357 acl: explicitly disabled via build config 00:06:30.357 bbdev: explicitly disabled via build config 00:06:30.357 bitratestats: explicitly disabled via build config 00:06:30.357 bpf: explicitly disabled via build config 00:06:30.357 cfgfile: explicitly disabled via build config 00:06:30.357 distributor: explicitly disabled via build config 00:06:30.357 efd: explicitly disabled via build config 00:06:30.357 eventdev: explicitly disabled via build config 00:06:30.357 dispatcher: explicitly disabled via build config 00:06:30.357 gpudev: explicitly disabled via build config 00:06:30.357 gro: explicitly disabled via build config 00:06:30.357 gso: explicitly disabled via build config 00:06:30.357 ip_frag: explicitly disabled via build config 00:06:30.357 jobstats: explicitly disabled via build config 00:06:30.357 latencystats: explicitly disabled via build config 00:06:30.358 lpm: explicitly disabled via build config 00:06:30.358 member: explicitly disabled via build config 00:06:30.358 pcapng: explicitly disabled via build config 00:06:30.358 rawdev: explicitly disabled via build config 00:06:30.358 regexdev: 
explicitly disabled via build config 00:06:30.358 mldev: explicitly disabled via build config 00:06:30.358 rib: explicitly disabled via build config 00:06:30.358 sched: explicitly disabled via build config 00:06:30.358 stack: explicitly disabled via build config 00:06:30.358 ipsec: explicitly disabled via build config 00:06:30.358 pdcp: explicitly disabled via build config 00:06:30.358 fib: explicitly disabled via build config 00:06:30.358 port: explicitly disabled via build config 00:06:30.358 pdump: explicitly disabled via build config 00:06:30.358 table: explicitly disabled via build config 00:06:30.358 pipeline: explicitly disabled via build config 00:06:30.358 graph: explicitly disabled via build config 00:06:30.358 node: explicitly disabled via build config 00:06:30.358 00:06:30.358 drivers: 00:06:30.358 common/cpt: not in enabled drivers build config 00:06:30.358 common/dpaax: not in enabled drivers build config 00:06:30.358 common/iavf: not in enabled drivers build config 00:06:30.358 common/idpf: not in enabled drivers build config 00:06:30.358 common/ionic: not in enabled drivers build config 00:06:30.358 common/mvep: not in enabled drivers build config 00:06:30.358 common/octeontx: not in enabled drivers build config 00:06:30.358 bus/auxiliary: not in enabled drivers build config 00:06:30.358 bus/cdx: not in enabled drivers build config 00:06:30.358 bus/dpaa: not in enabled drivers build config 00:06:30.358 bus/fslmc: not in enabled drivers build config 00:06:30.358 bus/ifpga: not in enabled drivers build config 00:06:30.358 bus/platform: not in enabled drivers build config 00:06:30.358 bus/uacce: not in enabled drivers build config 00:06:30.358 bus/vmbus: not in enabled drivers build config 00:06:30.358 common/cnxk: not in enabled drivers build config 00:06:30.358 common/mlx5: not in enabled drivers build config 00:06:30.358 common/nfp: not in enabled drivers build config 00:06:30.358 common/nitrox: not in enabled drivers build config 00:06:30.358 
common/qat: not in enabled drivers build config 00:06:30.358 common/sfc_efx: not in enabled drivers build config 00:06:30.358 mempool/bucket: not in enabled drivers build config 00:06:30.358 mempool/cnxk: not in enabled drivers build config 00:06:30.358 mempool/dpaa: not in enabled drivers build config 00:06:30.358 mempool/dpaa2: not in enabled drivers build config 00:06:30.358 mempool/octeontx: not in enabled drivers build config 00:06:30.358 mempool/stack: not in enabled drivers build config 00:06:30.358 dma/cnxk: not in enabled drivers build config 00:06:30.358 dma/dpaa: not in enabled drivers build config 00:06:30.358 dma/dpaa2: not in enabled drivers build config 00:06:30.358 dma/hisilicon: not in enabled drivers build config 00:06:30.358 dma/idxd: not in enabled drivers build config 00:06:30.358 dma/ioat: not in enabled drivers build config 00:06:30.358 dma/skeleton: not in enabled drivers build config 00:06:30.358 net/af_packet: not in enabled drivers build config 00:06:30.358 net/af_xdp: not in enabled drivers build config 00:06:30.358 net/ark: not in enabled drivers build config 00:06:30.358 net/atlantic: not in enabled drivers build config 00:06:30.358 net/avp: not in enabled drivers build config 00:06:30.358 net/axgbe: not in enabled drivers build config 00:06:30.358 net/bnx2x: not in enabled drivers build config 00:06:30.358 net/bnxt: not in enabled drivers build config 00:06:30.358 net/bonding: not in enabled drivers build config 00:06:30.358 net/cnxk: not in enabled drivers build config 00:06:30.358 net/cpfl: not in enabled drivers build config 00:06:30.358 net/cxgbe: not in enabled drivers build config 00:06:30.358 net/dpaa: not in enabled drivers build config 00:06:30.358 net/dpaa2: not in enabled drivers build config 00:06:30.358 net/e1000: not in enabled drivers build config 00:06:30.358 net/ena: not in enabled drivers build config 00:06:30.358 net/enetc: not in enabled drivers build config 00:06:30.358 net/enetfec: not in enabled drivers build 
config 00:06:30.358 net/enic: not in enabled drivers build config 00:06:30.358 net/failsafe: not in enabled drivers build config 00:06:30.358 net/fm10k: not in enabled drivers build config 00:06:30.358 net/gve: not in enabled drivers build config 00:06:30.358 net/hinic: not in enabled drivers build config 00:06:30.358 net/hns3: not in enabled drivers build config 00:06:30.358 net/i40e: not in enabled drivers build config 00:06:30.358 net/iavf: not in enabled drivers build config 00:06:30.358 net/ice: not in enabled drivers build config 00:06:30.358 net/idpf: not in enabled drivers build config 00:06:30.358 net/igc: not in enabled drivers build config 00:06:30.358 net/ionic: not in enabled drivers build config 00:06:30.358 net/ipn3ke: not in enabled drivers build config 00:06:30.358 net/ixgbe: not in enabled drivers build config 00:06:30.358 net/mana: not in enabled drivers build config 00:06:30.358 net/memif: not in enabled drivers build config 00:06:30.358 net/mlx4: not in enabled drivers build config 00:06:30.358 net/mlx5: not in enabled drivers build config 00:06:30.358 net/mvneta: not in enabled drivers build config 00:06:30.358 net/mvpp2: not in enabled drivers build config 00:06:30.358 net/netvsc: not in enabled drivers build config 00:06:30.358 net/nfb: not in enabled drivers build config 00:06:30.358 net/nfp: not in enabled drivers build config 00:06:30.358 net/ngbe: not in enabled drivers build config 00:06:30.358 net/null: not in enabled drivers build config 00:06:30.358 net/octeontx: not in enabled drivers build config 00:06:30.358 net/octeon_ep: not in enabled drivers build config 00:06:30.358 net/pcap: not in enabled drivers build config 00:06:30.358 net/pfe: not in enabled drivers build config 00:06:30.358 net/qede: not in enabled drivers build config 00:06:30.358 net/ring: not in enabled drivers build config 00:06:30.358 net/sfc: not in enabled drivers build config 00:06:30.358 net/softnic: not in enabled drivers build config 00:06:30.358 net/tap: 
not in enabled drivers build config 00:06:30.358 net/thunderx: not in enabled drivers build config 00:06:30.358 net/txgbe: not in enabled drivers build config 00:06:30.358 net/vdev_netvsc: not in enabled drivers build config 00:06:30.358 net/vhost: not in enabled drivers build config 00:06:30.358 net/virtio: not in enabled drivers build config 00:06:30.358 net/vmxnet3: not in enabled drivers build config 00:06:30.358 raw/*: missing internal dependency, "rawdev" 00:06:30.358 crypto/armv8: not in enabled drivers build config 00:06:30.358 crypto/bcmfs: not in enabled drivers build config 00:06:30.358 crypto/caam_jr: not in enabled drivers build config 00:06:30.358 crypto/ccp: not in enabled drivers build config 00:06:30.358 crypto/cnxk: not in enabled drivers build config 00:06:30.358 crypto/dpaa_sec: not in enabled drivers build config 00:06:30.358 crypto/dpaa2_sec: not in enabled drivers build config 00:06:30.358 crypto/ipsec_mb: not in enabled drivers build config 00:06:30.358 crypto/mlx5: not in enabled drivers build config 00:06:30.358 crypto/mvsam: not in enabled drivers build config 00:06:30.358 crypto/nitrox: not in enabled drivers build config 00:06:30.358 crypto/null: not in enabled drivers build config 00:06:30.358 crypto/octeontx: not in enabled drivers build config 00:06:30.358 crypto/openssl: not in enabled drivers build config 00:06:30.358 crypto/scheduler: not in enabled drivers build config 00:06:30.358 crypto/uadk: not in enabled drivers build config 00:06:30.358 crypto/virtio: not in enabled drivers build config 00:06:30.358 compress/isal: not in enabled drivers build config 00:06:30.358 compress/mlx5: not in enabled drivers build config 00:06:30.358 compress/nitrox: not in enabled drivers build config 00:06:30.358 compress/octeontx: not in enabled drivers build config 00:06:30.358 compress/zlib: not in enabled drivers build config 00:06:30.358 regex/*: missing internal dependency, "regexdev" 00:06:30.358 ml/*: missing internal dependency, "mldev" 
00:06:30.358 vdpa/ifc: not in enabled drivers build config 00:06:30.358 vdpa/mlx5: not in enabled drivers build config 00:06:30.358 vdpa/nfp: not in enabled drivers build config 00:06:30.358 vdpa/sfc: not in enabled drivers build config 00:06:30.358 event/*: missing internal dependency, "eventdev" 00:06:30.358 baseband/*: missing internal dependency, "bbdev" 00:06:30.358 gpu/*: missing internal dependency, "gpudev" 00:06:30.358 00:06:30.358 00:06:30.617 Build targets in project: 85 00:06:30.618 00:06:30.618 DPDK 24.03.0 00:06:30.618 00:06:30.618 User defined options 00:06:30.618 buildtype : debug 00:06:30.618 default_library : shared 00:06:30.618 libdir : lib 00:06:30.618 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:30.618 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:06:30.618 c_link_args : 00:06:30.618 cpu_instruction_set: native 00:06:30.618 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:06:30.618 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:06:30.618 enable_docs : false 00:06:30.618 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:06:30.618 enable_kmods : false 00:06:30.618 max_lcores : 128 00:06:30.618 tests : false 00:06:30.618 00:06:30.618 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:06:30.876 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:06:31.140 [1/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:06:31.140 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:06:31.140 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:06:31.140 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:06:31.140 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:06:31.140 [6/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:06:31.140 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:06:31.140 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:06:31.140 [9/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:06:31.140 [10/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:06:31.140 [11/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:06:31.140 [12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:06:31.140 [13/268] Linking static target lib/librte_kvargs.a 00:06:31.140 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:06:31.140 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:06:31.140 [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:06:31.140 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:06:31.402 [18/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:06:31.402 [19/268] Linking static target lib/librte_log.a 00:06:31.402 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:06:31.402 [21/268] Linking static target lib/librte_pci.a 00:06:31.402 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:06:31.402 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:06:31.402 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:06:31.402 [25/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:06:31.664 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:06:31.664 [27/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:06:31.664 [28/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:06:31.664 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:06:31.664 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:06:31.664 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:06:31.664 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:06:31.664 [33/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:06:31.664 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:06:31.664 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:06:31.664 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:06:31.664 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:06:31.664 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:06:31.664 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:06:31.664 [40/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:06:31.664 [41/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:06:31.664 [42/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:06:31.664 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:06:31.664 [44/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:06:31.664 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:06:31.664 [46/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:06:31.664 [47/268] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:06:31.664 [48/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:06:31.664 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:06:31.664 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:06:31.664 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:06:31.664 [52/268] Linking static target lib/librte_meter.a 00:06:31.664 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:06:31.664 [54/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:06:31.664 [55/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:06:31.664 [56/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:06:31.664 [57/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:06:31.664 [58/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:06:31.664 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:06:31.664 [60/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:06:31.664 [61/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:06:31.664 [62/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:06:31.664 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:06:31.664 [64/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:06:31.665 [65/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:06:31.665 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:06:31.665 [67/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:06:31.665 [68/268] Linking static target lib/librte_ring.a 00:06:31.665 [69/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:06:31.665 [70/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:06:31.665 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:06:31.665 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:06:31.665 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:06:31.665 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:06:31.665 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:06:31.665 [76/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:06:31.665 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:06:31.665 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:06:31.665 [79/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:06:31.665 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:06:31.665 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:06:31.665 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:06:31.665 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:06:31.665 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:06:31.665 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:06:31.665 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:06:31.665 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:06:31.665 [88/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:06:31.665 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:06:31.665 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:06:31.665 [91/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:06:31.665 [92/268] Compiling C 
object lib/librte_power.a.p/power_guest_channel.c.o 00:06:31.923 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:06:31.923 [94/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:06:31.923 [95/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:06:31.923 [96/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:06:31.923 [97/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:06:31.923 [98/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:06:31.923 [99/268] Linking static target lib/librte_telemetry.a 00:06:31.923 [100/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:06:31.923 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:06:31.923 [102/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:06:31.923 [103/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:06:31.923 [104/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:06:31.923 [105/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:06:31.923 [106/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:06:31.923 [107/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:06:31.923 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:06:31.923 [109/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:06:31.923 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:06:31.923 [111/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:06:31.923 [112/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:06:31.923 [113/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:06:31.923 [114/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 
00:06:31.923 [115/268] Linking static target lib/librte_mempool.a 00:06:31.923 [116/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:06:31.923 [117/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:06:31.923 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:06:31.923 [119/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:06:31.923 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:06:31.923 [121/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:06:31.923 [122/268] Linking static target lib/librte_net.a 00:06:31.923 [123/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:06:31.923 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:06:31.923 [125/268] Linking static target lib/librte_rcu.a 00:06:31.923 [126/268] Linking static target lib/librte_eal.a 00:06:31.923 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:06:31.923 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:06:31.923 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:06:31.923 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:06:31.923 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:06:31.923 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:06:31.923 [133/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:06:31.923 [134/268] Linking static target lib/librte_cmdline.a 00:06:31.923 [135/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:06:31.923 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:06:31.924 [137/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:06:32.182 
[138/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:06:32.182 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:06:32.182 [140/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:06:32.182 [141/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:06:32.182 [142/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:06:32.182 [143/268] Linking target lib/librte_log.so.24.1 00:06:32.182 [144/268] Linking static target lib/librte_mbuf.a 00:06:32.182 [145/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:06:32.182 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:06:32.182 [147/268] Linking static target lib/librte_timer.a 00:06:32.182 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:06:32.182 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:06:32.183 [150/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:06:32.183 [151/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:06:32.183 [152/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:06:32.183 [153/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:06:32.183 [154/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:06:32.183 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:06:32.183 [156/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:06:32.183 [157/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:06:32.183 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:06:32.183 [159/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:06:32.183 [160/268] Linking static target lib/librte_dmadev.a 
00:06:32.183 [161/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:06:32.183 [162/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:06:32.183 [163/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:06:32.183 [164/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:06:32.183 [165/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:06:32.183 [166/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:06:32.183 [167/268] Linking static target lib/librte_reorder.a 00:06:32.183 [168/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:06:32.183 [169/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:06:32.183 [170/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:06:32.183 [171/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:06:32.183 [172/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:06:32.183 [173/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:06:32.183 [174/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:06:32.183 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:06:32.183 [176/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:06:32.183 [177/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:06:32.183 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:06:32.183 [179/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:06:32.183 [180/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:06:32.183 [181/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:06:32.183 [182/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 
00:06:32.183 [183/268] Linking target lib/librte_kvargs.so.24.1 00:06:32.183 [184/268] Linking target lib/librte_telemetry.so.24.1 00:06:32.183 [185/268] Linking static target lib/librte_compressdev.a 00:06:32.183 [186/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:06:32.467 [187/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:06:32.467 [188/268] Linking static target lib/librte_power.a 00:06:32.467 [189/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:06:32.467 [190/268] Linking static target lib/librte_hash.a 00:06:32.467 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:06:32.467 [192/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:06:32.467 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:06:32.467 [194/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:06:32.467 [195/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:06:32.467 [196/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:06:32.467 [197/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:06:32.468 [198/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:32.468 [199/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:32.468 [200/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:32.468 [201/268] Linking static target lib/librte_security.a 00:06:32.468 [202/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:32.468 [203/268] Linking static target drivers/librte_bus_vdev.a 00:06:32.468 [204/268] Linking static target drivers/librte_bus_pci.a 00:06:32.468 [205/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:06:32.468 [206/268] Compiling 
C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:32.468 [207/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:32.468 [208/268] Linking static target drivers/librte_mempool_ring.a 00:06:32.468 [209/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:06:32.727 [210/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:06:32.727 [211/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:06:32.727 [212/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:06:32.727 [213/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:06:32.727 [214/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:06:32.727 [215/268] Linking static target lib/librte_cryptodev.a 00:06:32.727 [216/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:06:32.986 [217/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:32.986 [218/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:06:32.986 [219/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:32.986 [220/268] Linking static target lib/librte_ethdev.a 00:06:32.986 [221/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:32.986 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:06:33.245 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:06:33.245 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:06:33.245 [225/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:06:33.245 
[226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:06:33.245 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:06:34.180 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:06:34.180 [229/268] Linking static target lib/librte_vhost.a 00:06:34.748 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:36.125 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:06:41.387 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:41.952 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:06:41.952 [234/268] Linking target lib/librte_eal.so.24.1 00:06:42.210 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:06:42.210 [236/268] Linking target lib/librte_pci.so.24.1 00:06:42.210 [237/268] Linking target lib/librte_ring.so.24.1 00:06:42.210 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:06:42.210 [239/268] Linking target lib/librte_dmadev.so.24.1 00:06:42.210 [240/268] Linking target lib/librte_meter.so.24.1 00:06:42.210 [241/268] Linking target lib/librte_timer.so.24.1 00:06:42.210 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:06:42.210 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:06:42.210 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:06:42.210 [245/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:06:42.210 [246/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:06:42.210 [247/268] Linking target lib/librte_rcu.so.24.1 00:06:42.210 [248/268] Linking target lib/librte_mempool.so.24.1 00:06:42.210 
[249/268] Linking target drivers/librte_bus_pci.so.24.1 00:06:42.469 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:06:42.469 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:06:42.469 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:06:42.469 [253/268] Linking target lib/librte_mbuf.so.24.1 00:06:42.727 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:06:42.727 [255/268] Linking target lib/librte_net.so.24.1 00:06:42.727 [256/268] Linking target lib/librte_compressdev.so.24.1 00:06:42.727 [257/268] Linking target lib/librte_reorder.so.24.1 00:06:42.727 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:06:42.727 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:06:42.727 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:06:42.727 [261/268] Linking target lib/librte_hash.so.24.1 00:06:42.727 [262/268] Linking target lib/librte_cmdline.so.24.1 00:06:42.987 [263/268] Linking target lib/librte_security.so.24.1 00:06:42.987 [264/268] Linking target lib/librte_ethdev.so.24.1 00:06:42.987 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:06:42.987 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:06:42.987 [267/268] Linking target lib/librte_power.so.24.1 00:06:42.987 [268/268] Linking target lib/librte_vhost.so.24.1 00:06:42.987 INFO: autodetecting backend as ninja 00:06:42.987 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:06:55.197 CC lib/log/log.o 00:06:55.197 CC lib/ut/ut.o 00:06:55.197 CC lib/log/log_flags.o 00:06:55.197 CC lib/log/log_deprecated.o 00:06:55.197 CC lib/ut_mock/mock.o 00:06:55.197 LIB libspdk_ut_mock.a 00:06:55.197 LIB libspdk_ut.a 
00:06:55.197 LIB libspdk_log.a 00:06:55.197 SO libspdk_ut_mock.so.6.0 00:06:55.197 SO libspdk_ut.so.2.0 00:06:55.197 SO libspdk_log.so.7.1 00:06:55.197 SYMLINK libspdk_ut_mock.so 00:06:55.197 SYMLINK libspdk_ut.so 00:06:55.197 SYMLINK libspdk_log.so 00:06:55.197 CC lib/ioat/ioat.o 00:06:55.197 CC lib/util/base64.o 00:06:55.197 CC lib/util/bit_array.o 00:06:55.197 CC lib/dma/dma.o 00:06:55.197 CC lib/util/cpuset.o 00:06:55.197 CXX lib/trace_parser/trace.o 00:06:55.197 CC lib/util/crc16.o 00:06:55.197 CC lib/util/crc32.o 00:06:55.197 CC lib/util/crc32c.o 00:06:55.197 CC lib/util/crc32_ieee.o 00:06:55.197 CC lib/util/crc64.o 00:06:55.197 CC lib/util/dif.o 00:06:55.197 CC lib/util/fd.o 00:06:55.197 CC lib/util/fd_group.o 00:06:55.197 CC lib/util/file.o 00:06:55.197 CC lib/util/hexlify.o 00:06:55.197 CC lib/util/iov.o 00:06:55.197 CC lib/util/math.o 00:06:55.197 CC lib/util/net.o 00:06:55.197 CC lib/util/pipe.o 00:06:55.197 CC lib/util/strerror_tls.o 00:06:55.197 CC lib/util/string.o 00:06:55.197 CC lib/util/uuid.o 00:06:55.197 CC lib/util/xor.o 00:06:55.197 CC lib/util/zipf.o 00:06:55.197 CC lib/util/md5.o 00:06:55.197 CC lib/vfio_user/host/vfio_user_pci.o 00:06:55.197 CC lib/vfio_user/host/vfio_user.o 00:06:55.197 LIB libspdk_dma.a 00:06:55.197 SO libspdk_dma.so.5.0 00:06:55.197 LIB libspdk_ioat.a 00:06:55.197 SO libspdk_ioat.so.7.0 00:06:55.197 SYMLINK libspdk_dma.so 00:06:55.197 SYMLINK libspdk_ioat.so 00:06:55.197 LIB libspdk_vfio_user.a 00:06:55.197 SO libspdk_vfio_user.so.5.0 00:06:55.197 SYMLINK libspdk_vfio_user.so 00:06:55.197 LIB libspdk_util.a 00:06:55.197 SO libspdk_util.so.10.1 00:06:55.197 SYMLINK libspdk_util.so 00:06:55.197 LIB libspdk_trace_parser.a 00:06:55.197 SO libspdk_trace_parser.so.6.0 00:06:55.197 SYMLINK libspdk_trace_parser.so 00:06:55.197 CC lib/json/json_parse.o 00:06:55.197 CC lib/json/json_util.o 00:06:55.197 CC lib/json/json_write.o 00:06:55.197 CC lib/conf/conf.o 00:06:55.197 CC lib/idxd/idxd.o 00:06:55.197 CC lib/idxd/idxd_user.o 
00:06:55.197 CC lib/idxd/idxd_kernel.o 00:06:55.197 CC lib/env_dpdk/env.o 00:06:55.197 CC lib/env_dpdk/memory.o 00:06:55.197 CC lib/env_dpdk/pci.o 00:06:55.197 CC lib/vmd/vmd.o 00:06:55.197 CC lib/rdma_utils/rdma_utils.o 00:06:55.197 CC lib/vmd/led.o 00:06:55.197 CC lib/env_dpdk/init.o 00:06:55.197 CC lib/env_dpdk/threads.o 00:06:55.197 CC lib/env_dpdk/pci_ioat.o 00:06:55.197 CC lib/env_dpdk/pci_virtio.o 00:06:55.197 CC lib/env_dpdk/pci_vmd.o 00:06:55.197 CC lib/env_dpdk/pci_idxd.o 00:06:55.197 CC lib/env_dpdk/pci_event.o 00:06:55.197 CC lib/env_dpdk/sigbus_handler.o 00:06:55.197 CC lib/env_dpdk/pci_dpdk.o 00:06:55.197 CC lib/env_dpdk/pci_dpdk_2207.o 00:06:55.197 CC lib/env_dpdk/pci_dpdk_2211.o 00:06:55.197 LIB libspdk_conf.a 00:06:55.197 SO libspdk_conf.so.6.0 00:06:55.197 LIB libspdk_json.a 00:06:55.197 LIB libspdk_rdma_utils.a 00:06:55.197 SO libspdk_json.so.6.0 00:06:55.197 SYMLINK libspdk_conf.so 00:06:55.197 SO libspdk_rdma_utils.so.1.0 00:06:55.197 SYMLINK libspdk_json.so 00:06:55.197 SYMLINK libspdk_rdma_utils.so 00:06:55.455 LIB libspdk_idxd.a 00:06:55.455 SO libspdk_idxd.so.12.1 00:06:55.455 LIB libspdk_vmd.a 00:06:55.455 SO libspdk_vmd.so.6.0 00:06:55.455 SYMLINK libspdk_idxd.so 00:06:55.455 SYMLINK libspdk_vmd.so 00:06:55.455 CC lib/jsonrpc/jsonrpc_server.o 00:06:55.455 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:06:55.455 CC lib/jsonrpc/jsonrpc_client.o 00:06:55.455 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:06:55.455 CC lib/rdma_provider/common.o 00:06:55.455 CC lib/rdma_provider/rdma_provider_verbs.o 00:06:55.713 LIB libspdk_rdma_provider.a 00:06:55.713 LIB libspdk_jsonrpc.a 00:06:55.713 SO libspdk_rdma_provider.so.7.0 00:06:55.713 SO libspdk_jsonrpc.so.6.0 00:06:55.971 SYMLINK libspdk_rdma_provider.so 00:06:55.971 SYMLINK libspdk_jsonrpc.so 00:06:55.971 LIB libspdk_env_dpdk.a 00:06:55.971 SO libspdk_env_dpdk.so.15.1 00:06:56.230 SYMLINK libspdk_env_dpdk.so 00:06:56.230 CC lib/rpc/rpc.o 00:06:56.539 LIB libspdk_rpc.a 00:06:56.539 SO libspdk_rpc.so.6.0 
00:06:56.539 SYMLINK libspdk_rpc.so 00:06:56.798 CC lib/keyring/keyring.o 00:06:56.798 CC lib/trace/trace.o 00:06:56.798 CC lib/trace/trace_flags.o 00:06:56.798 CC lib/keyring/keyring_rpc.o 00:06:56.798 CC lib/trace/trace_rpc.o 00:06:56.798 CC lib/notify/notify.o 00:06:56.798 CC lib/notify/notify_rpc.o 00:06:57.056 LIB libspdk_notify.a 00:06:57.056 LIB libspdk_keyring.a 00:06:57.056 SO libspdk_notify.so.6.0 00:06:57.056 LIB libspdk_trace.a 00:06:57.056 SO libspdk_keyring.so.2.0 00:06:57.056 SO libspdk_trace.so.11.0 00:06:57.056 SYMLINK libspdk_notify.so 00:06:57.056 SYMLINK libspdk_keyring.so 00:06:57.056 SYMLINK libspdk_trace.so 00:06:57.315 CC lib/sock/sock.o 00:06:57.315 CC lib/sock/sock_rpc.o 00:06:57.315 CC lib/thread/thread.o 00:06:57.315 CC lib/thread/iobuf.o 00:06:57.881 LIB libspdk_sock.a 00:06:57.881 SO libspdk_sock.so.10.0 00:06:57.881 SYMLINK libspdk_sock.so 00:06:58.140 CC lib/nvme/nvme_ctrlr_cmd.o 00:06:58.140 CC lib/nvme/nvme_ctrlr.o 00:06:58.140 CC lib/nvme/nvme_fabric.o 00:06:58.140 CC lib/nvme/nvme_ns_cmd.o 00:06:58.140 CC lib/nvme/nvme_ns.o 00:06:58.140 CC lib/nvme/nvme_pcie_common.o 00:06:58.140 CC lib/nvme/nvme_pcie.o 00:06:58.140 CC lib/nvme/nvme_qpair.o 00:06:58.140 CC lib/nvme/nvme.o 00:06:58.140 CC lib/nvme/nvme_quirks.o 00:06:58.140 CC lib/nvme/nvme_transport.o 00:06:58.140 CC lib/nvme/nvme_discovery.o 00:06:58.140 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:06:58.140 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:06:58.140 CC lib/nvme/nvme_tcp.o 00:06:58.140 CC lib/nvme/nvme_opal.o 00:06:58.140 CC lib/nvme/nvme_io_msg.o 00:06:58.140 CC lib/nvme/nvme_poll_group.o 00:06:58.140 CC lib/nvme/nvme_zns.o 00:06:58.140 CC lib/nvme/nvme_stubs.o 00:06:58.140 CC lib/nvme/nvme_auth.o 00:06:58.140 CC lib/nvme/nvme_cuse.o 00:06:58.140 CC lib/nvme/nvme_vfio_user.o 00:06:58.140 CC lib/nvme/nvme_rdma.o 00:06:58.399 LIB libspdk_thread.a 00:06:58.657 SO libspdk_thread.so.11.0 00:06:58.657 SYMLINK libspdk_thread.so 00:06:58.916 CC lib/vfu_tgt/tgt_endpoint.o 00:06:58.916 CC 
lib/vfu_tgt/tgt_rpc.o 00:06:58.916 CC lib/init/json_config.o 00:06:58.916 CC lib/init/subsystem.o 00:06:58.916 CC lib/init/subsystem_rpc.o 00:06:58.916 CC lib/init/rpc.o 00:06:58.916 CC lib/fsdev/fsdev.o 00:06:58.916 CC lib/accel/accel.o 00:06:58.916 CC lib/fsdev/fsdev_io.o 00:06:58.916 CC lib/accel/accel_rpc.o 00:06:58.916 CC lib/fsdev/fsdev_rpc.o 00:06:58.916 CC lib/accel/accel_sw.o 00:06:58.916 CC lib/virtio/virtio_vhost_user.o 00:06:58.916 CC lib/virtio/virtio.o 00:06:58.916 CC lib/virtio/virtio_vfio_user.o 00:06:58.916 CC lib/virtio/virtio_pci.o 00:06:58.916 CC lib/blob/blobstore.o 00:06:58.916 CC lib/blob/request.o 00:06:58.916 CC lib/blob/zeroes.o 00:06:58.916 CC lib/blob/blob_bs_dev.o 00:06:59.175 LIB libspdk_init.a 00:06:59.175 SO libspdk_init.so.6.0 00:06:59.175 LIB libspdk_vfu_tgt.a 00:06:59.175 SO libspdk_vfu_tgt.so.3.0 00:06:59.175 LIB libspdk_virtio.a 00:06:59.175 SYMLINK libspdk_init.so 00:06:59.175 SO libspdk_virtio.so.7.0 00:06:59.175 SYMLINK libspdk_vfu_tgt.so 00:06:59.434 SYMLINK libspdk_virtio.so 00:06:59.434 LIB libspdk_fsdev.a 00:06:59.434 SO libspdk_fsdev.so.2.0 00:06:59.434 CC lib/event/app.o 00:06:59.434 CC lib/event/reactor.o 00:06:59.434 CC lib/event/log_rpc.o 00:06:59.434 CC lib/event/app_rpc.o 00:06:59.434 CC lib/event/scheduler_static.o 00:06:59.434 SYMLINK libspdk_fsdev.so 00:06:59.693 LIB libspdk_accel.a 00:06:59.693 SO libspdk_accel.so.16.0 00:06:59.693 LIB libspdk_nvme.a 00:06:59.953 SYMLINK libspdk_accel.so 00:06:59.953 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:06:59.953 LIB libspdk_event.a 00:06:59.953 SO libspdk_nvme.so.15.0 00:06:59.953 SO libspdk_event.so.14.0 00:06:59.953 SYMLINK libspdk_event.so 00:07:00.212 SYMLINK libspdk_nvme.so 00:07:00.212 CC lib/bdev/bdev.o 00:07:00.212 CC lib/bdev/bdev_rpc.o 00:07:00.212 CC lib/bdev/bdev_zone.o 00:07:00.212 CC lib/bdev/part.o 00:07:00.212 CC lib/bdev/scsi_nvme.o 00:07:00.212 LIB libspdk_fuse_dispatcher.a 00:07:00.471 SO libspdk_fuse_dispatcher.so.1.0 00:07:00.471 SYMLINK 
libspdk_fuse_dispatcher.so 00:07:01.042 LIB libspdk_blob.a 00:07:01.305 SO libspdk_blob.so.12.0 00:07:01.305 SYMLINK libspdk_blob.so 00:07:01.564 CC lib/blobfs/blobfs.o 00:07:01.564 CC lib/blobfs/tree.o 00:07:01.564 CC lib/lvol/lvol.o 00:07:01.851 LIB libspdk_bdev.a 00:07:02.110 SO libspdk_bdev.so.17.0 00:07:02.110 SYMLINK libspdk_bdev.so 00:07:02.110 LIB libspdk_blobfs.a 00:07:02.110 SO libspdk_blobfs.so.11.0 00:07:02.110 LIB libspdk_lvol.a 00:07:02.110 SYMLINK libspdk_blobfs.so 00:07:02.110 SO libspdk_lvol.so.11.0 00:07:02.368 SYMLINK libspdk_lvol.so 00:07:02.368 CC lib/ublk/ublk.o 00:07:02.368 CC lib/ublk/ublk_rpc.o 00:07:02.368 CC lib/scsi/dev.o 00:07:02.368 CC lib/scsi/lun.o 00:07:02.368 CC lib/nbd/nbd.o 00:07:02.368 CC lib/nbd/nbd_rpc.o 00:07:02.368 CC lib/scsi/port.o 00:07:02.368 CC lib/scsi/scsi.o 00:07:02.368 CC lib/scsi/scsi_bdev.o 00:07:02.368 CC lib/scsi/scsi_pr.o 00:07:02.368 CC lib/scsi/scsi_rpc.o 00:07:02.368 CC lib/nvmf/ctrlr.o 00:07:02.368 CC lib/scsi/task.o 00:07:02.368 CC lib/nvmf/ctrlr_discovery.o 00:07:02.368 CC lib/ftl/ftl_core.o 00:07:02.368 CC lib/nvmf/ctrlr_bdev.o 00:07:02.368 CC lib/nvmf/subsystem.o 00:07:02.368 CC lib/ftl/ftl_init.o 00:07:02.368 CC lib/ftl/ftl_layout.o 00:07:02.368 CC lib/nvmf/nvmf.o 00:07:02.368 CC lib/nvmf/nvmf_rpc.o 00:07:02.368 CC lib/ftl/ftl_debug.o 00:07:02.368 CC lib/nvmf/transport.o 00:07:02.368 CC lib/nvmf/tcp.o 00:07:02.368 CC lib/ftl/ftl_io.o 00:07:02.368 CC lib/nvmf/stubs.o 00:07:02.368 CC lib/ftl/ftl_sb.o 00:07:02.368 CC lib/ftl/ftl_l2p.o 00:07:02.368 CC lib/nvmf/mdns_server.o 00:07:02.368 CC lib/ftl/ftl_l2p_flat.o 00:07:02.369 CC lib/nvmf/vfio_user.o 00:07:02.369 CC lib/ftl/ftl_nv_cache.o 00:07:02.369 CC lib/nvmf/rdma.o 00:07:02.369 CC lib/ftl/ftl_band.o 00:07:02.369 CC lib/ftl/ftl_band_ops.o 00:07:02.369 CC lib/nvmf/auth.o 00:07:02.369 CC lib/ftl/ftl_writer.o 00:07:02.369 CC lib/ftl/ftl_rq.o 00:07:02.369 CC lib/ftl/ftl_reloc.o 00:07:02.369 CC lib/ftl/ftl_l2p_cache.o 00:07:02.369 CC lib/ftl/mngt/ftl_mngt.o 
00:07:02.369 CC lib/ftl/ftl_p2l_log.o 00:07:02.369 CC lib/ftl/ftl_p2l.o 00:07:02.369 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:07:02.369 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:07:02.369 CC lib/ftl/mngt/ftl_mngt_md.o 00:07:02.369 CC lib/ftl/mngt/ftl_mngt_startup.o 00:07:02.369 CC lib/ftl/mngt/ftl_mngt_misc.o 00:07:02.369 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:07:02.369 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:07:02.369 CC lib/ftl/mngt/ftl_mngt_band.o 00:07:02.369 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:07:02.369 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:07:02.369 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:07:02.369 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:07:02.369 CC lib/ftl/utils/ftl_conf.o 00:07:02.369 CC lib/ftl/utils/ftl_md.o 00:07:02.369 CC lib/ftl/utils/ftl_mempool.o 00:07:02.369 CC lib/ftl/utils/ftl_bitmap.o 00:07:02.369 CC lib/ftl/utils/ftl_property.o 00:07:02.369 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:07:02.369 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:07:02.369 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:07:02.369 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:07:02.369 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:07:02.369 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:07:02.369 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:07:02.369 CC lib/ftl/upgrade/ftl_sb_v3.o 00:07:02.369 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:07:02.369 CC lib/ftl/upgrade/ftl_sb_v5.o 00:07:02.369 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:07:02.369 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:07:02.369 CC lib/ftl/nvc/ftl_nvc_dev.o 00:07:02.369 CC lib/ftl/base/ftl_base_dev.o 00:07:02.369 CC lib/ftl/base/ftl_base_bdev.o 00:07:02.369 CC lib/ftl/ftl_trace.o 00:07:02.936 LIB libspdk_nbd.a 00:07:02.936 SO libspdk_nbd.so.7.0 00:07:02.936 SYMLINK libspdk_nbd.so 00:07:03.194 LIB libspdk_ublk.a 00:07:03.195 LIB libspdk_scsi.a 00:07:03.195 SO libspdk_ublk.so.3.0 00:07:03.195 SO libspdk_scsi.so.9.0 00:07:03.195 SYMLINK libspdk_ublk.so 00:07:03.195 SYMLINK libspdk_scsi.so 00:07:03.453 LIB libspdk_ftl.a 00:07:03.453 SO libspdk_ftl.so.9.0 00:07:03.453 CC 
lib/iscsi/conn.o 00:07:03.453 CC lib/iscsi/init_grp.o 00:07:03.453 CC lib/iscsi/iscsi.o 00:07:03.453 CC lib/iscsi/param.o 00:07:03.453 CC lib/iscsi/portal_grp.o 00:07:03.453 CC lib/iscsi/tgt_node.o 00:07:03.453 CC lib/iscsi/iscsi_subsystem.o 00:07:03.453 CC lib/iscsi/iscsi_rpc.o 00:07:03.453 CC lib/iscsi/task.o 00:07:03.453 CC lib/vhost/vhost.o 00:07:03.453 CC lib/vhost/vhost_rpc.o 00:07:03.453 CC lib/vhost/vhost_scsi.o 00:07:03.453 CC lib/vhost/vhost_blk.o 00:07:03.453 CC lib/vhost/rte_vhost_user.o 00:07:03.712 SYMLINK libspdk_ftl.so 00:07:04.279 LIB libspdk_nvmf.a 00:07:04.279 SO libspdk_nvmf.so.20.0 00:07:04.279 LIB libspdk_vhost.a 00:07:04.279 SO libspdk_vhost.so.8.0 00:07:04.538 SYMLINK libspdk_nvmf.so 00:07:04.538 SYMLINK libspdk_vhost.so 00:07:04.538 LIB libspdk_iscsi.a 00:07:04.538 SO libspdk_iscsi.so.8.0 00:07:04.797 SYMLINK libspdk_iscsi.so 00:07:05.473 CC module/vfu_device/vfu_virtio_blk.o 00:07:05.473 CC module/vfu_device/vfu_virtio.o 00:07:05.473 CC module/env_dpdk/env_dpdk_rpc.o 00:07:05.473 CC module/vfu_device/vfu_virtio_scsi.o 00:07:05.473 CC module/vfu_device/vfu_virtio_fs.o 00:07:05.473 CC module/vfu_device/vfu_virtio_rpc.o 00:07:05.473 LIB libspdk_env_dpdk_rpc.a 00:07:05.473 CC module/scheduler/dynamic/scheduler_dynamic.o 00:07:05.473 CC module/keyring/linux/keyring.o 00:07:05.473 CC module/keyring/linux/keyring_rpc.o 00:07:05.473 CC module/sock/posix/posix.o 00:07:05.473 CC module/keyring/file/keyring.o 00:07:05.473 CC module/keyring/file/keyring_rpc.o 00:07:05.473 CC module/accel/dsa/accel_dsa.o 00:07:05.473 CC module/blob/bdev/blob_bdev.o 00:07:05.473 CC module/accel/dsa/accel_dsa_rpc.o 00:07:05.473 CC module/fsdev/aio/fsdev_aio.o 00:07:05.473 CC module/scheduler/gscheduler/gscheduler.o 00:07:05.473 CC module/fsdev/aio/linux_aio_mgr.o 00:07:05.473 CC module/fsdev/aio/fsdev_aio_rpc.o 00:07:05.473 CC module/accel/iaa/accel_iaa.o 00:07:05.473 CC module/accel/ioat/accel_ioat_rpc.o 00:07:05.473 CC module/accel/ioat/accel_ioat.o 00:07:05.473 CC 
module/accel/iaa/accel_iaa_rpc.o 00:07:05.473 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:07:05.473 CC module/accel/error/accel_error.o 00:07:05.473 CC module/accel/error/accel_error_rpc.o 00:07:05.473 SO libspdk_env_dpdk_rpc.so.6.0 00:07:05.473 SYMLINK libspdk_env_dpdk_rpc.so 00:07:05.473 LIB libspdk_keyring_linux.a 00:07:05.473 LIB libspdk_keyring_file.a 00:07:05.473 LIB libspdk_scheduler_dynamic.a 00:07:05.473 LIB libspdk_scheduler_gscheduler.a 00:07:05.473 SO libspdk_keyring_linux.so.1.0 00:07:05.819 SO libspdk_scheduler_dynamic.so.4.0 00:07:05.819 LIB libspdk_scheduler_dpdk_governor.a 00:07:05.819 SO libspdk_keyring_file.so.2.0 00:07:05.819 LIB libspdk_accel_ioat.a 00:07:05.819 SO libspdk_scheduler_gscheduler.so.4.0 00:07:05.819 LIB libspdk_accel_error.a 00:07:05.819 LIB libspdk_accel_iaa.a 00:07:05.819 SO libspdk_scheduler_dpdk_governor.so.4.0 00:07:05.819 SO libspdk_accel_error.so.2.0 00:07:05.819 SYMLINK libspdk_keyring_linux.so 00:07:05.819 SYMLINK libspdk_scheduler_dynamic.so 00:07:05.819 SO libspdk_accel_ioat.so.6.0 00:07:05.819 SYMLINK libspdk_keyring_file.so 00:07:05.819 SO libspdk_accel_iaa.so.3.0 00:07:05.819 SYMLINK libspdk_scheduler_gscheduler.so 00:07:05.819 LIB libspdk_blob_bdev.a 00:07:05.819 LIB libspdk_accel_dsa.a 00:07:05.819 SYMLINK libspdk_scheduler_dpdk_governor.so 00:07:05.819 SYMLINK libspdk_accel_error.so 00:07:05.819 SYMLINK libspdk_accel_iaa.so 00:07:05.819 SO libspdk_blob_bdev.so.12.0 00:07:05.819 SYMLINK libspdk_accel_ioat.so 00:07:05.819 SO libspdk_accel_dsa.so.5.0 00:07:05.819 LIB libspdk_vfu_device.a 00:07:05.819 SYMLINK libspdk_blob_bdev.so 00:07:05.819 SYMLINK libspdk_accel_dsa.so 00:07:05.819 SO libspdk_vfu_device.so.3.0 00:07:05.819 SYMLINK libspdk_vfu_device.so 00:07:05.819 LIB libspdk_fsdev_aio.a 00:07:06.106 LIB libspdk_sock_posix.a 00:07:06.106 SO libspdk_fsdev_aio.so.1.0 00:07:06.106 SO libspdk_sock_posix.so.6.0 00:07:06.106 SYMLINK libspdk_fsdev_aio.so 00:07:06.106 SYMLINK libspdk_sock_posix.so 00:07:06.106 CC 
module/bdev/error/vbdev_error_rpc.o 00:07:06.106 CC module/bdev/error/vbdev_error.o 00:07:06.106 CC module/bdev/nvme/bdev_nvme.o 00:07:06.106 CC module/bdev/gpt/gpt.o 00:07:06.106 CC module/bdev/gpt/vbdev_gpt.o 00:07:06.106 CC module/bdev/nvme/bdev_nvme_rpc.o 00:07:06.106 CC module/bdev/virtio/bdev_virtio_scsi.o 00:07:06.106 CC module/bdev/virtio/bdev_virtio_blk.o 00:07:06.106 CC module/bdev/virtio/bdev_virtio_rpc.o 00:07:06.106 CC module/bdev/nvme/vbdev_opal.o 00:07:06.106 CC module/bdev/nvme/nvme_rpc.o 00:07:06.106 CC module/bdev/nvme/vbdev_opal_rpc.o 00:07:06.106 CC module/bdev/nvme/bdev_mdns_client.o 00:07:06.106 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:07:06.106 CC module/bdev/raid/bdev_raid.o 00:07:06.106 CC module/bdev/raid/bdev_raid_rpc.o 00:07:06.106 CC module/bdev/raid/bdev_raid_sb.o 00:07:06.106 CC module/bdev/delay/vbdev_delay.o 00:07:06.106 CC module/bdev/raid/raid0.o 00:07:06.106 CC module/bdev/raid/raid1.o 00:07:06.106 CC module/bdev/delay/vbdev_delay_rpc.o 00:07:06.106 CC module/bdev/lvol/vbdev_lvol.o 00:07:06.106 CC module/bdev/raid/concat.o 00:07:06.106 CC module/bdev/aio/bdev_aio.o 00:07:06.106 CC module/bdev/passthru/vbdev_passthru.o 00:07:06.106 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:07:06.106 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:07:06.106 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:07:06.106 CC module/bdev/aio/bdev_aio_rpc.o 00:07:06.106 CC module/bdev/zone_block/vbdev_zone_block.o 00:07:06.106 CC module/blobfs/bdev/blobfs_bdev.o 00:07:06.106 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:07:06.106 CC module/bdev/split/vbdev_split_rpc.o 00:07:06.106 CC module/bdev/split/vbdev_split.o 00:07:06.106 CC module/bdev/malloc/bdev_malloc.o 00:07:06.106 CC module/bdev/malloc/bdev_malloc_rpc.o 00:07:06.106 CC module/bdev/ftl/bdev_ftl.o 00:07:06.106 CC module/bdev/ftl/bdev_ftl_rpc.o 00:07:06.106 CC module/bdev/null/bdev_null.o 00:07:06.106 CC module/bdev/null/bdev_null_rpc.o 00:07:06.106 CC module/bdev/iscsi/bdev_iscsi.o 00:07:06.106 
CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:07:06.363 LIB libspdk_blobfs_bdev.a 00:07:06.363 SO libspdk_blobfs_bdev.so.6.0 00:07:06.363 LIB libspdk_bdev_error.a 00:07:06.621 LIB libspdk_bdev_gpt.a 00:07:06.621 SO libspdk_bdev_error.so.6.0 00:07:06.621 LIB libspdk_bdev_split.a 00:07:06.621 SO libspdk_bdev_gpt.so.6.0 00:07:06.621 SYMLINK libspdk_blobfs_bdev.so 00:07:06.621 SO libspdk_bdev_split.so.6.0 00:07:06.621 LIB libspdk_bdev_null.a 00:07:06.621 LIB libspdk_bdev_passthru.a 00:07:06.621 SYMLINK libspdk_bdev_error.so 00:07:06.621 LIB libspdk_bdev_zone_block.a 00:07:06.621 LIB libspdk_bdev_ftl.a 00:07:06.621 SYMLINK libspdk_bdev_gpt.so 00:07:06.621 SO libspdk_bdev_null.so.6.0 00:07:06.621 SO libspdk_bdev_passthru.so.6.0 00:07:06.621 LIB libspdk_bdev_aio.a 00:07:06.621 SO libspdk_bdev_zone_block.so.6.0 00:07:06.621 SYMLINK libspdk_bdev_split.so 00:07:06.621 SO libspdk_bdev_ftl.so.6.0 00:07:06.621 LIB libspdk_bdev_delay.a 00:07:06.621 LIB libspdk_bdev_malloc.a 00:07:06.621 LIB libspdk_bdev_iscsi.a 00:07:06.621 SO libspdk_bdev_aio.so.6.0 00:07:06.621 SYMLINK libspdk_bdev_null.so 00:07:06.621 SYMLINK libspdk_bdev_passthru.so 00:07:06.621 SO libspdk_bdev_delay.so.6.0 00:07:06.621 SYMLINK libspdk_bdev_zone_block.so 00:07:06.621 SO libspdk_bdev_malloc.so.6.0 00:07:06.621 SO libspdk_bdev_iscsi.so.6.0 00:07:06.621 SYMLINK libspdk_bdev_ftl.so 00:07:06.621 SYMLINK libspdk_bdev_aio.so 00:07:06.621 LIB libspdk_bdev_lvol.a 00:07:06.621 SYMLINK libspdk_bdev_delay.so 00:07:06.621 SYMLINK libspdk_bdev_iscsi.so 00:07:06.621 SYMLINK libspdk_bdev_malloc.so 00:07:06.621 LIB libspdk_bdev_virtio.a 00:07:06.880 SO libspdk_bdev_lvol.so.6.0 00:07:06.880 SO libspdk_bdev_virtio.so.6.0 00:07:06.880 SYMLINK libspdk_bdev_lvol.so 00:07:06.880 SYMLINK libspdk_bdev_virtio.so 00:07:07.138 LIB libspdk_bdev_raid.a 00:07:07.138 SO libspdk_bdev_raid.so.6.0 00:07:07.138 SYMLINK libspdk_bdev_raid.so 00:07:08.086 LIB libspdk_bdev_nvme.a 00:07:08.086 SO libspdk_bdev_nvme.so.7.1 00:07:08.345 SYMLINK 
libspdk_bdev_nvme.so 00:07:08.912 CC module/event/subsystems/iobuf/iobuf.o 00:07:08.912 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:07:08.912 CC module/event/subsystems/sock/sock.o 00:07:08.912 CC module/event/subsystems/scheduler/scheduler.o 00:07:08.912 CC module/event/subsystems/vmd/vmd.o 00:07:08.912 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:07:08.912 CC module/event/subsystems/vmd/vmd_rpc.o 00:07:08.912 CC module/event/subsystems/keyring/keyring.o 00:07:08.912 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:07:08.912 CC module/event/subsystems/fsdev/fsdev.o 00:07:09.171 LIB libspdk_event_iobuf.a 00:07:09.171 LIB libspdk_event_keyring.a 00:07:09.171 LIB libspdk_event_scheduler.a 00:07:09.171 LIB libspdk_event_sock.a 00:07:09.171 LIB libspdk_event_vhost_blk.a 00:07:09.171 LIB libspdk_event_fsdev.a 00:07:09.171 LIB libspdk_event_vfu_tgt.a 00:07:09.171 LIB libspdk_event_vmd.a 00:07:09.171 SO libspdk_event_keyring.so.1.0 00:07:09.171 SO libspdk_event_iobuf.so.3.0 00:07:09.171 SO libspdk_event_scheduler.so.4.0 00:07:09.171 SO libspdk_event_sock.so.5.0 00:07:09.171 SO libspdk_event_vhost_blk.so.3.0 00:07:09.171 SO libspdk_event_vfu_tgt.so.3.0 00:07:09.171 SO libspdk_event_fsdev.so.1.0 00:07:09.171 SO libspdk_event_vmd.so.6.0 00:07:09.171 SYMLINK libspdk_event_scheduler.so 00:07:09.171 SYMLINK libspdk_event_keyring.so 00:07:09.171 SYMLINK libspdk_event_iobuf.so 00:07:09.171 SYMLINK libspdk_event_sock.so 00:07:09.171 SYMLINK libspdk_event_vhost_blk.so 00:07:09.171 SYMLINK libspdk_event_vfu_tgt.so 00:07:09.171 SYMLINK libspdk_event_fsdev.so 00:07:09.171 SYMLINK libspdk_event_vmd.so 00:07:09.430 CC module/event/subsystems/accel/accel.o 00:07:09.689 LIB libspdk_event_accel.a 00:07:09.689 SO libspdk_event_accel.so.6.0 00:07:09.689 SYMLINK libspdk_event_accel.so 00:07:09.948 CC module/event/subsystems/bdev/bdev.o 00:07:10.208 LIB libspdk_event_bdev.a 00:07:10.208 SO libspdk_event_bdev.so.6.0 00:07:10.208 SYMLINK libspdk_event_bdev.so 00:07:10.468 CC 
module/event/subsystems/scsi/scsi.o 00:07:10.468 CC module/event/subsystems/ublk/ublk.o 00:07:10.468 CC module/event/subsystems/nbd/nbd.o 00:07:10.468 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:07:10.468 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:07:10.727 LIB libspdk_event_nbd.a 00:07:10.727 LIB libspdk_event_ublk.a 00:07:10.727 LIB libspdk_event_scsi.a 00:07:10.727 SO libspdk_event_nbd.so.6.0 00:07:10.727 SO libspdk_event_ublk.so.3.0 00:07:10.727 SO libspdk_event_scsi.so.6.0 00:07:10.727 LIB libspdk_event_nvmf.a 00:07:10.727 SYMLINK libspdk_event_nbd.so 00:07:10.727 SYMLINK libspdk_event_ublk.so 00:07:10.727 SYMLINK libspdk_event_scsi.so 00:07:10.727 SO libspdk_event_nvmf.so.6.0 00:07:10.986 SYMLINK libspdk_event_nvmf.so 00:07:11.263 CC module/event/subsystems/iscsi/iscsi.o 00:07:11.263 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:07:11.263 LIB libspdk_event_vhost_scsi.a 00:07:11.263 LIB libspdk_event_iscsi.a 00:07:11.263 SO libspdk_event_vhost_scsi.so.3.0 00:07:11.263 SO libspdk_event_iscsi.so.6.0 00:07:11.263 SYMLINK libspdk_event_vhost_scsi.so 00:07:11.263 SYMLINK libspdk_event_iscsi.so 00:07:11.522 SO libspdk.so.6.0 00:07:11.522 SYMLINK libspdk.so 00:07:11.781 CC app/spdk_lspci/spdk_lspci.o 00:07:11.781 CC app/trace_record/trace_record.o 00:07:11.781 CXX app/trace/trace.o 00:07:11.781 CC test/rpc_client/rpc_client_test.o 00:07:11.781 CC app/spdk_nvme_perf/perf.o 00:07:11.781 CC app/spdk_nvme_identify/identify.o 00:07:11.781 CC app/spdk_top/spdk_top.o 00:07:11.781 TEST_HEADER include/spdk/accel.h 00:07:11.781 TEST_HEADER include/spdk/accel_module.h 00:07:11.781 CC app/spdk_nvme_discover/discovery_aer.o 00:07:11.781 TEST_HEADER include/spdk/barrier.h 00:07:12.043 TEST_HEADER include/spdk/assert.h 00:07:12.043 TEST_HEADER include/spdk/bdev_module.h 00:07:12.043 TEST_HEADER include/spdk/base64.h 00:07:12.043 TEST_HEADER include/spdk/bdev.h 00:07:12.043 TEST_HEADER include/spdk/blob_bdev.h 00:07:12.043 TEST_HEADER include/spdk/bit_pool.h 
00:07:12.043 TEST_HEADER include/spdk/blobfs_bdev.h 00:07:12.043 TEST_HEADER include/spdk/bdev_zone.h 00:07:12.043 TEST_HEADER include/spdk/bit_array.h 00:07:12.043 TEST_HEADER include/spdk/blobfs.h 00:07:12.043 TEST_HEADER include/spdk/blob.h 00:07:12.043 TEST_HEADER include/spdk/conf.h 00:07:12.043 TEST_HEADER include/spdk/cpuset.h 00:07:12.043 TEST_HEADER include/spdk/config.h 00:07:12.043 TEST_HEADER include/spdk/crc16.h 00:07:12.043 TEST_HEADER include/spdk/crc64.h 00:07:12.043 TEST_HEADER include/spdk/crc32.h 00:07:12.043 TEST_HEADER include/spdk/dma.h 00:07:12.043 TEST_HEADER include/spdk/dif.h 00:07:12.043 TEST_HEADER include/spdk/env_dpdk.h 00:07:12.043 TEST_HEADER include/spdk/env.h 00:07:12.043 TEST_HEADER include/spdk/endian.h 00:07:12.043 TEST_HEADER include/spdk/fd.h 00:07:12.043 TEST_HEADER include/spdk/event.h 00:07:12.043 TEST_HEADER include/spdk/file.h 00:07:12.043 TEST_HEADER include/spdk/fsdev.h 00:07:12.043 TEST_HEADER include/spdk/fsdev_module.h 00:07:12.043 TEST_HEADER include/spdk/ftl.h 00:07:12.043 TEST_HEADER include/spdk/fd_group.h 00:07:12.043 TEST_HEADER include/spdk/gpt_spec.h 00:07:12.043 TEST_HEADER include/spdk/hexlify.h 00:07:12.043 TEST_HEADER include/spdk/histogram_data.h 00:07:12.043 TEST_HEADER include/spdk/idxd.h 00:07:12.043 TEST_HEADER include/spdk/fuse_dispatcher.h 00:07:12.043 TEST_HEADER include/spdk/idxd_spec.h 00:07:12.043 TEST_HEADER include/spdk/init.h 00:07:12.043 TEST_HEADER include/spdk/ioat.h 00:07:12.043 TEST_HEADER include/spdk/iscsi_spec.h 00:07:12.043 TEST_HEADER include/spdk/json.h 00:07:12.043 TEST_HEADER include/spdk/ioat_spec.h 00:07:12.043 CC examples/interrupt_tgt/interrupt_tgt.o 00:07:12.043 TEST_HEADER include/spdk/keyring.h 00:07:12.043 TEST_HEADER include/spdk/keyring_module.h 00:07:12.043 TEST_HEADER include/spdk/jsonrpc.h 00:07:12.043 TEST_HEADER include/spdk/log.h 00:07:12.043 TEST_HEADER include/spdk/lvol.h 00:07:12.043 TEST_HEADER include/spdk/likely.h 00:07:12.043 TEST_HEADER include/spdk/md5.h 
00:07:12.043 TEST_HEADER include/spdk/mmio.h 00:07:12.043 TEST_HEADER include/spdk/memory.h 00:07:12.043 TEST_HEADER include/spdk/nbd.h 00:07:12.043 CC app/spdk_dd/spdk_dd.o 00:07:12.043 TEST_HEADER include/spdk/notify.h 00:07:12.043 TEST_HEADER include/spdk/net.h 00:07:12.043 TEST_HEADER include/spdk/nvme.h 00:07:12.043 CC app/nvmf_tgt/nvmf_main.o 00:07:12.043 TEST_HEADER include/spdk/nvme_intel.h 00:07:12.043 TEST_HEADER include/spdk/nvme_ocssd.h 00:07:12.043 TEST_HEADER include/spdk/nvme_spec.h 00:07:12.043 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:07:12.043 TEST_HEADER include/spdk/nvmf_cmd.h 00:07:12.043 TEST_HEADER include/spdk/nvme_zns.h 00:07:12.043 TEST_HEADER include/spdk/nvmf.h 00:07:12.043 TEST_HEADER include/spdk/nvmf_transport.h 00:07:12.043 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:07:12.043 TEST_HEADER include/spdk/nvmf_spec.h 00:07:12.043 CC app/iscsi_tgt/iscsi_tgt.o 00:07:12.043 TEST_HEADER include/spdk/opal.h 00:07:12.043 TEST_HEADER include/spdk/pci_ids.h 00:07:12.043 TEST_HEADER include/spdk/pipe.h 00:07:12.043 TEST_HEADER include/spdk/opal_spec.h 00:07:12.044 TEST_HEADER include/spdk/queue.h 00:07:12.044 TEST_HEADER include/spdk/reduce.h 00:07:12.044 CC app/spdk_tgt/spdk_tgt.o 00:07:12.044 TEST_HEADER include/spdk/rpc.h 00:07:12.044 TEST_HEADER include/spdk/scsi.h 00:07:12.044 TEST_HEADER include/spdk/scheduler.h 00:07:12.044 TEST_HEADER include/spdk/scsi_spec.h 00:07:12.044 TEST_HEADER include/spdk/sock.h 00:07:12.044 TEST_HEADER include/spdk/stdinc.h 00:07:12.044 TEST_HEADER include/spdk/thread.h 00:07:12.044 TEST_HEADER include/spdk/trace.h 00:07:12.044 TEST_HEADER include/spdk/string.h 00:07:12.044 TEST_HEADER include/spdk/trace_parser.h 00:07:12.044 TEST_HEADER include/spdk/tree.h 00:07:12.044 TEST_HEADER include/spdk/ublk.h 00:07:12.044 TEST_HEADER include/spdk/util.h 00:07:12.044 TEST_HEADER include/spdk/uuid.h 00:07:12.044 TEST_HEADER include/spdk/version.h 00:07:12.044 TEST_HEADER include/spdk/vfio_user_pci.h 00:07:12.044 
TEST_HEADER include/spdk/vhost.h 00:07:12.044 TEST_HEADER include/spdk/vmd.h 00:07:12.044 TEST_HEADER include/spdk/vfio_user_spec.h 00:07:12.044 TEST_HEADER include/spdk/xor.h 00:07:12.044 TEST_HEADER include/spdk/zipf.h 00:07:12.044 CXX test/cpp_headers/accel.o 00:07:12.044 CXX test/cpp_headers/accel_module.o 00:07:12.044 CXX test/cpp_headers/assert.o 00:07:12.044 CXX test/cpp_headers/base64.o 00:07:12.044 CXX test/cpp_headers/barrier.o 00:07:12.044 CXX test/cpp_headers/bdev.o 00:07:12.044 CXX test/cpp_headers/bdev_module.o 00:07:12.044 CXX test/cpp_headers/bdev_zone.o 00:07:12.044 CXX test/cpp_headers/bit_array.o 00:07:12.044 CXX test/cpp_headers/blobfs_bdev.o 00:07:12.044 CXX test/cpp_headers/bit_pool.o 00:07:12.044 CXX test/cpp_headers/blob_bdev.o 00:07:12.044 CXX test/cpp_headers/blobfs.o 00:07:12.044 CXX test/cpp_headers/conf.o 00:07:12.044 CXX test/cpp_headers/blob.o 00:07:12.044 CXX test/cpp_headers/config.o 00:07:12.044 CXX test/cpp_headers/cpuset.o 00:07:12.044 CXX test/cpp_headers/crc16.o 00:07:12.044 CXX test/cpp_headers/crc32.o 00:07:12.044 CXX test/cpp_headers/crc64.o 00:07:12.044 CXX test/cpp_headers/dma.o 00:07:12.044 CXX test/cpp_headers/dif.o 00:07:12.044 CXX test/cpp_headers/endian.o 00:07:12.044 CXX test/cpp_headers/env.o 00:07:12.044 CXX test/cpp_headers/env_dpdk.o 00:07:12.044 CXX test/cpp_headers/fd_group.o 00:07:12.044 CXX test/cpp_headers/event.o 00:07:12.044 CXX test/cpp_headers/fd.o 00:07:12.044 CXX test/cpp_headers/file.o 00:07:12.044 CXX test/cpp_headers/fsdev.o 00:07:12.044 CXX test/cpp_headers/fsdev_module.o 00:07:12.044 CXX test/cpp_headers/ftl.o 00:07:12.044 CXX test/cpp_headers/fuse_dispatcher.o 00:07:12.044 CXX test/cpp_headers/gpt_spec.o 00:07:12.044 CXX test/cpp_headers/hexlify.o 00:07:12.044 CXX test/cpp_headers/histogram_data.o 00:07:12.044 CXX test/cpp_headers/idxd.o 00:07:12.044 CXX test/cpp_headers/idxd_spec.o 00:07:12.044 CXX test/cpp_headers/ioat.o 00:07:12.044 CXX test/cpp_headers/ioat_spec.o 00:07:12.044 CXX 
test/cpp_headers/init.o 00:07:12.044 CXX test/cpp_headers/iscsi_spec.o 00:07:12.044 CXX test/cpp_headers/jsonrpc.o 00:07:12.044 CXX test/cpp_headers/keyring.o 00:07:12.044 CXX test/cpp_headers/json.o 00:07:12.044 CXX test/cpp_headers/likely.o 00:07:12.044 CXX test/cpp_headers/keyring_module.o 00:07:12.044 CXX test/cpp_headers/log.o 00:07:12.044 CXX test/cpp_headers/lvol.o 00:07:12.044 CXX test/cpp_headers/md5.o 00:07:12.044 CXX test/cpp_headers/mmio.o 00:07:12.044 CXX test/cpp_headers/memory.o 00:07:12.044 CXX test/cpp_headers/nbd.o 00:07:12.044 CXX test/cpp_headers/notify.o 00:07:12.044 CXX test/cpp_headers/net.o 00:07:12.044 CXX test/cpp_headers/nvme_intel.o 00:07:12.044 CXX test/cpp_headers/nvme.o 00:07:12.044 CXX test/cpp_headers/nvme_ocssd.o 00:07:12.044 CXX test/cpp_headers/nvme_ocssd_spec.o 00:07:12.044 CXX test/cpp_headers/nvme_spec.o 00:07:12.044 CXX test/cpp_headers/nvme_zns.o 00:07:12.044 CXX test/cpp_headers/nvmf_cmd.o 00:07:12.044 CXX test/cpp_headers/nvmf.o 00:07:12.044 CXX test/cpp_headers/nvmf_fc_spec.o 00:07:12.044 CXX test/cpp_headers/nvmf_spec.o 00:07:12.044 CXX test/cpp_headers/nvmf_transport.o 00:07:12.044 CXX test/cpp_headers/opal.o 00:07:12.044 CC examples/util/zipf/zipf.o 00:07:12.044 CC test/thread/poller_perf/poller_perf.o 00:07:12.044 CC examples/ioat/verify/verify.o 00:07:12.044 CC test/app/histogram_perf/histogram_perf.o 00:07:12.044 CC examples/ioat/perf/perf.o 00:07:12.044 CC test/app/stub/stub.o 00:07:12.044 CXX test/cpp_headers/opal_spec.o 00:07:12.044 CC test/env/pci/pci_ut.o 00:07:12.044 CC app/fio/nvme/fio_plugin.o 00:07:12.044 CC test/app/jsoncat/jsoncat.o 00:07:12.044 CC test/env/vtophys/vtophys.o 00:07:12.044 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:07:12.044 CC test/dma/test_dma/test_dma.o 00:07:12.316 CC app/fio/bdev/fio_plugin.o 00:07:12.316 CC test/app/bdev_svc/bdev_svc.o 00:07:12.316 CC test/env/memory/memory_ut.o 00:07:12.316 LINK spdk_lspci 00:07:12.316 LINK rpc_client_test 00:07:12.316 LINK 
spdk_nvme_discover 00:07:12.316 LINK interrupt_tgt 00:07:12.576 CC test/env/mem_callbacks/mem_callbacks.o 00:07:12.576 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:07:12.576 LINK spdk_tgt 00:07:12.576 LINK nvmf_tgt 00:07:12.576 LINK spdk_trace_record 00:07:12.576 LINK histogram_perf 00:07:12.576 LINK stub 00:07:12.576 CXX test/cpp_headers/pci_ids.o 00:07:12.576 CXX test/cpp_headers/pipe.o 00:07:12.576 CXX test/cpp_headers/queue.o 00:07:12.576 CXX test/cpp_headers/rpc.o 00:07:12.576 CXX test/cpp_headers/reduce.o 00:07:12.576 CXX test/cpp_headers/scheduler.o 00:07:12.576 CXX test/cpp_headers/scsi.o 00:07:12.576 CXX test/cpp_headers/scsi_spec.o 00:07:12.576 CXX test/cpp_headers/sock.o 00:07:12.576 CXX test/cpp_headers/stdinc.o 00:07:12.576 CXX test/cpp_headers/string.o 00:07:12.576 CXX test/cpp_headers/thread.o 00:07:12.576 CXX test/cpp_headers/trace.o 00:07:12.576 CXX test/cpp_headers/trace_parser.o 00:07:12.576 CXX test/cpp_headers/tree.o 00:07:12.576 CXX test/cpp_headers/ublk.o 00:07:12.576 CXX test/cpp_headers/util.o 00:07:12.576 LINK iscsi_tgt 00:07:12.576 CXX test/cpp_headers/uuid.o 00:07:12.576 CXX test/cpp_headers/version.o 00:07:12.576 CXX test/cpp_headers/vfio_user_pci.o 00:07:12.576 LINK poller_perf 00:07:12.576 CXX test/cpp_headers/vfio_user_spec.o 00:07:12.576 CXX test/cpp_headers/vhost.o 00:07:12.576 CXX test/cpp_headers/vmd.o 00:07:12.576 LINK jsoncat 00:07:12.576 CXX test/cpp_headers/xor.o 00:07:12.834 CXX test/cpp_headers/zipf.o 00:07:12.834 LINK zipf 00:07:12.834 LINK verify 00:07:12.834 LINK vtophys 00:07:12.834 LINK env_dpdk_post_init 00:07:12.834 LINK bdev_svc 00:07:12.834 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:07:12.834 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:07:12.834 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:07:12.834 LINK ioat_perf 00:07:12.834 LINK spdk_dd 00:07:12.834 LINK pci_ut 00:07:12.834 LINK spdk_trace 00:07:13.091 LINK spdk_nvme 00:07:13.091 LINK spdk_bdev 00:07:13.091 LINK test_dma 00:07:13.091 LINK spdk_nvme_identify 
00:07:13.091 LINK nvme_fuzz 00:07:13.091 CC test/event/reactor_perf/reactor_perf.o 00:07:13.091 CC test/event/reactor/reactor.o 00:07:13.091 LINK spdk_top 00:07:13.091 CC examples/vmd/lsvmd/lsvmd.o 00:07:13.091 CC test/event/app_repeat/app_repeat.o 00:07:13.350 CC examples/vmd/led/led.o 00:07:13.350 CC test/event/event_perf/event_perf.o 00:07:13.350 CC examples/idxd/perf/perf.o 00:07:13.350 CC examples/sock/hello_world/hello_sock.o 00:07:13.350 LINK spdk_nvme_perf 00:07:13.350 CC examples/thread/thread/thread_ex.o 00:07:13.350 LINK vhost_fuzz 00:07:13.350 CC test/event/scheduler/scheduler.o 00:07:13.350 CC app/vhost/vhost.o 00:07:13.350 LINK mem_callbacks 00:07:13.350 LINK lsvmd 00:07:13.350 LINK reactor_perf 00:07:13.350 LINK reactor 00:07:13.350 LINK event_perf 00:07:13.350 LINK app_repeat 00:07:13.350 LINK led 00:07:13.609 LINK hello_sock 00:07:13.609 LINK thread 00:07:13.609 LINK scheduler 00:07:13.609 LINK idxd_perf 00:07:13.609 LINK vhost 00:07:13.609 CC test/nvme/aer/aer.o 00:07:13.609 CC test/nvme/connect_stress/connect_stress.o 00:07:13.609 CC test/nvme/cuse/cuse.o 00:07:13.609 CC test/nvme/sgl/sgl.o 00:07:13.609 CC test/nvme/reserve/reserve.o 00:07:13.609 CC test/nvme/compliance/nvme_compliance.o 00:07:13.609 CC test/nvme/simple_copy/simple_copy.o 00:07:13.609 CC test/nvme/fdp/fdp.o 00:07:13.609 CC test/nvme/startup/startup.o 00:07:13.609 CC test/nvme/err_injection/err_injection.o 00:07:13.609 CC test/nvme/reset/reset.o 00:07:13.609 CC test/nvme/overhead/overhead.o 00:07:13.609 CC test/nvme/boot_partition/boot_partition.o 00:07:13.609 CC test/nvme/e2edp/nvme_dp.o 00:07:13.609 CC test/nvme/doorbell_aers/doorbell_aers.o 00:07:13.609 CC test/nvme/fused_ordering/fused_ordering.o 00:07:13.609 CC test/accel/dif/dif.o 00:07:13.609 CC test/blobfs/mkfs/mkfs.o 00:07:13.609 LINK memory_ut 00:07:13.866 CC test/lvol/esnap/esnap.o 00:07:13.866 LINK connect_stress 00:07:13.866 LINK startup 00:07:13.866 LINK boot_partition 00:07:13.866 LINK doorbell_aers 00:07:13.866 
LINK reserve 00:07:13.866 LINK err_injection 00:07:13.866 LINK fused_ordering 00:07:13.866 LINK simple_copy 00:07:13.866 LINK sgl 00:07:13.866 LINK aer 00:07:13.866 LINK mkfs 00:07:13.866 LINK nvme_dp 00:07:13.866 LINK reset 00:07:13.866 LINK overhead 00:07:13.866 CC examples/nvme/arbitration/arbitration.o 00:07:13.866 CC examples/nvme/cmb_copy/cmb_copy.o 00:07:13.866 CC examples/nvme/hotplug/hotplug.o 00:07:13.866 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:07:13.866 CC examples/nvme/abort/abort.o 00:07:13.866 CC examples/nvme/hello_world/hello_world.o 00:07:13.866 CC examples/nvme/reconnect/reconnect.o 00:07:13.866 CC examples/nvme/nvme_manage/nvme_manage.o 00:07:13.866 LINK nvme_compliance 00:07:13.866 LINK fdp 00:07:13.866 CC examples/accel/perf/accel_perf.o 00:07:14.125 CC examples/blob/hello_world/hello_blob.o 00:07:14.125 CC examples/blob/cli/blobcli.o 00:07:14.125 CC examples/fsdev/hello_world/hello_fsdev.o 00:07:14.125 LINK pmr_persistence 00:07:14.125 LINK cmb_copy 00:07:14.125 LINK hotplug 00:07:14.125 LINK hello_world 00:07:14.125 LINK arbitration 00:07:14.125 LINK reconnect 00:07:14.125 LINK abort 00:07:14.383 LINK dif 00:07:14.383 LINK iscsi_fuzz 00:07:14.383 LINK hello_blob 00:07:14.383 LINK hello_fsdev 00:07:14.383 LINK nvme_manage 00:07:14.383 LINK accel_perf 00:07:14.383 LINK blobcli 00:07:14.641 LINK cuse 00:07:14.899 CC test/bdev/bdevio/bdevio.o 00:07:14.899 CC examples/bdev/hello_world/hello_bdev.o 00:07:14.899 CC examples/bdev/bdevperf/bdevperf.o 00:07:15.158 LINK hello_bdev 00:07:15.158 LINK bdevio 00:07:15.417 LINK bdevperf 00:07:15.986 CC examples/nvmf/nvmf/nvmf.o 00:07:16.247 LINK nvmf 00:07:17.625 LINK esnap 00:07:17.625 00:07:17.625 real 0m55.754s 00:07:17.625 user 8m17.069s 00:07:17.625 sys 3m39.095s 00:07:17.625 11:50:51 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:07:17.625 11:50:51 make -- common/autotest_common.sh@10 -- $ set +x 00:07:17.625 ************************************ 00:07:17.625 END TEST make 
00:07:17.625 ************************************ 00:07:17.625 11:50:51 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:07:17.625 11:50:51 -- pm/common@29 -- $ signal_monitor_resources TERM 00:07:17.625 11:50:51 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:07:17.625 11:50:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:17.625 11:50:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:07:17.625 11:50:51 -- pm/common@44 -- $ pid=4005420 00:07:17.625 11:50:51 -- pm/common@50 -- $ kill -TERM 4005420 00:07:17.625 11:50:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:17.625 11:50:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:07:17.625 11:50:51 -- pm/common@44 -- $ pid=4005422 00:07:17.625 11:50:51 -- pm/common@50 -- $ kill -TERM 4005422 00:07:17.625 11:50:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:17.625 11:50:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:07:17.625 11:50:51 -- pm/common@44 -- $ pid=4005424 00:07:17.625 11:50:51 -- pm/common@50 -- $ kill -TERM 4005424 00:07:17.625 11:50:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:17.625 11:50:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:07:17.625 11:50:51 -- pm/common@44 -- $ pid=4005449 00:07:17.625 11:50:51 -- pm/common@50 -- $ sudo -E kill -TERM 4005449 00:07:17.626 11:50:51 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:07:17.626 11:50:51 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:07:17.626 11:50:51 -- common/autotest_common.sh@1710 -- # [[ y 
== y ]] 00:07:17.626 11:50:51 -- common/autotest_common.sh@1711 -- # lcov --version 00:07:17.626 11:50:51 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:17.885 11:50:51 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:17.885 11:50:51 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:17.885 11:50:51 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:17.885 11:50:51 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:17.885 11:50:51 -- scripts/common.sh@336 -- # IFS=.-: 00:07:17.885 11:50:51 -- scripts/common.sh@336 -- # read -ra ver1 00:07:17.885 11:50:51 -- scripts/common.sh@337 -- # IFS=.-: 00:07:17.885 11:50:51 -- scripts/common.sh@337 -- # read -ra ver2 00:07:17.885 11:50:51 -- scripts/common.sh@338 -- # local 'op=<' 00:07:17.885 11:50:51 -- scripts/common.sh@340 -- # ver1_l=2 00:07:17.885 11:50:51 -- scripts/common.sh@341 -- # ver2_l=1 00:07:17.885 11:50:51 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:17.886 11:50:51 -- scripts/common.sh@344 -- # case "$op" in 00:07:17.886 11:50:51 -- scripts/common.sh@345 -- # : 1 00:07:17.886 11:50:51 -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:17.886 11:50:51 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:17.886 11:50:51 -- scripts/common.sh@365 -- # decimal 1 00:07:17.886 11:50:51 -- scripts/common.sh@353 -- # local d=1 00:07:17.886 11:50:51 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:17.886 11:50:51 -- scripts/common.sh@355 -- # echo 1 00:07:17.886 11:50:51 -- scripts/common.sh@365 -- # ver1[v]=1 00:07:17.886 11:50:51 -- scripts/common.sh@366 -- # decimal 2 00:07:17.886 11:50:51 -- scripts/common.sh@353 -- # local d=2 00:07:17.886 11:50:51 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:17.886 11:50:51 -- scripts/common.sh@355 -- # echo 2 00:07:17.886 11:50:51 -- scripts/common.sh@366 -- # ver2[v]=2 00:07:17.886 11:50:51 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:17.886 11:50:51 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:17.886 11:50:51 -- scripts/common.sh@368 -- # return 0 00:07:17.886 11:50:51 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:17.886 11:50:51 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:17.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.886 --rc genhtml_branch_coverage=1 00:07:17.886 --rc genhtml_function_coverage=1 00:07:17.886 --rc genhtml_legend=1 00:07:17.886 --rc geninfo_all_blocks=1 00:07:17.886 --rc geninfo_unexecuted_blocks=1 00:07:17.886 00:07:17.886 ' 00:07:17.886 11:50:51 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:17.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.886 --rc genhtml_branch_coverage=1 00:07:17.886 --rc genhtml_function_coverage=1 00:07:17.886 --rc genhtml_legend=1 00:07:17.886 --rc geninfo_all_blocks=1 00:07:17.886 --rc geninfo_unexecuted_blocks=1 00:07:17.886 00:07:17.886 ' 00:07:17.886 11:50:51 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:17.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.886 --rc genhtml_branch_coverage=1 00:07:17.886 --rc 
genhtml_function_coverage=1 00:07:17.886 --rc genhtml_legend=1 00:07:17.886 --rc geninfo_all_blocks=1 00:07:17.886 --rc geninfo_unexecuted_blocks=1 00:07:17.886 00:07:17.886 ' 00:07:17.886 11:50:51 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:17.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.886 --rc genhtml_branch_coverage=1 00:07:17.886 --rc genhtml_function_coverage=1 00:07:17.886 --rc genhtml_legend=1 00:07:17.886 --rc geninfo_all_blocks=1 00:07:17.886 --rc geninfo_unexecuted_blocks=1 00:07:17.886 00:07:17.886 ' 00:07:17.886 11:50:51 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:17.886 11:50:51 -- nvmf/common.sh@7 -- # uname -s 00:07:17.886 11:50:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:17.886 11:50:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:17.886 11:50:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:17.886 11:50:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:17.886 11:50:51 -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:17.886 11:50:51 -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:07:17.886 11:50:51 -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:17.886 11:50:51 -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:07:17.886 11:50:51 -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:17.886 11:50:51 -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:17.886 11:50:51 -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:17.886 11:50:51 -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:07:17.886 11:50:51 -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:07:17.886 11:50:51 -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:17.886 11:50:51 -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:07:17.886 11:50:51 -- scripts/common.sh@15 -- # shopt -s extglob 00:07:17.886 11:50:51 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:17.886 11:50:51 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:17.886 11:50:51 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:17.886 11:50:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.886 11:50:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.886 11:50:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.886 11:50:51 -- paths/export.sh@5 -- # export PATH 00:07:17.886 11:50:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.886 11:50:51 -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:07:17.886 11:50:51 -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:07:17.886 11:50:51 -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:07:17.886 11:50:51 -- nvmf/setup.sh@8 -- # 
NVMF_TARGET_NS_CMD=() 00:07:17.886 11:50:51 -- nvmf/common.sh@50 -- # : 0 00:07:17.886 11:50:51 -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:07:17.886 11:50:51 -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:07:17.886 11:50:51 -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:07:17.886 11:50:51 -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:17.886 11:50:51 -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:17.886 11:50:51 -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:07:17.886 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:07:17.886 11:50:51 -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:07:17.886 11:50:51 -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:07:17.886 11:50:51 -- nvmf/common.sh@54 -- # have_pci_nics=0 00:07:17.886 11:50:51 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:07:17.886 11:50:51 -- spdk/autotest.sh@32 -- # uname -s 00:07:17.886 11:50:51 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:07:17.886 11:50:51 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:07:17.886 11:50:51 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:07:17.886 11:50:51 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:07:17.886 11:50:51 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:07:17.886 11:50:51 -- spdk/autotest.sh@44 -- # modprobe nbd 00:07:17.886 11:50:51 -- spdk/autotest.sh@46 -- # type -P udevadm 00:07:17.886 11:50:51 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:07:17.886 11:50:51 -- spdk/autotest.sh@48 -- # udevadm_pid=4067852 00:07:17.886 11:50:51 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:07:17.886 11:50:51 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:07:17.886 
11:50:51 -- pm/common@17 -- # local monitor 00:07:17.886 11:50:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:17.886 11:50:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:17.886 11:50:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:17.886 11:50:51 -- pm/common@21 -- # date +%s 00:07:17.886 11:50:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:17.886 11:50:51 -- pm/common@21 -- # date +%s 00:07:17.886 11:50:51 -- pm/common@25 -- # sleep 1 00:07:17.886 11:50:51 -- pm/common@21 -- # date +%s 00:07:17.886 11:50:51 -- pm/common@21 -- # date +%s 00:07:17.887 11:50:51 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733395851 00:07:17.887 11:50:51 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733395851 00:07:17.887 11:50:51 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733395851 00:07:17.887 11:50:51 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733395851 00:07:17.887 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733395851_collect-cpu-load.pm.log 00:07:17.887 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733395851_collect-vmstat.pm.log 00:07:17.887 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733395851_collect-cpu-temp.pm.log 00:07:17.887 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733395851_collect-bmc-pm.bmc.pm.log 00:07:18.825 11:50:52 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:07:18.825 11:50:52 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:07:18.825 11:50:52 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:18.825 11:50:52 -- common/autotest_common.sh@10 -- # set +x 00:07:18.825 11:50:52 -- spdk/autotest.sh@59 -- # create_test_list 00:07:18.825 11:50:52 -- common/autotest_common.sh@752 -- # xtrace_disable 00:07:18.825 11:50:52 -- common/autotest_common.sh@10 -- # set +x 00:07:18.825 11:50:53 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:07:18.825 11:50:53 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:18.825 11:50:53 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:18.825 11:50:53 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:07:18.825 11:50:53 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:18.825 11:50:53 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:07:18.825 11:50:53 -- common/autotest_common.sh@1457 -- # uname 00:07:19.085 11:50:53 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:07:19.085 11:50:53 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:07:19.085 11:50:53 -- common/autotest_common.sh@1477 -- # uname 00:07:19.085 11:50:53 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:07:19.085 11:50:53 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:07:19.085 11:50:53 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc 
genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:07:19.085 lcov: LCOV version 1.15 00:07:19.085 11:50:53 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:07:31.291 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:07:31.291 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:07:46.170 11:51:17 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:07:46.170 11:51:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:46.170 11:51:17 -- common/autotest_common.sh@10 -- # set +x 00:07:46.170 11:51:18 -- spdk/autotest.sh@78 -- # rm -f 00:07:46.170 11:51:18 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:07:46.737 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:07:46.737 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:07:46.737 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:07:46.737 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:07:46.737 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:07:46.737 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:07:46.737 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:07:46.737 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:07:46.996 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:07:46.996 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:07:46.996 0000:80:04.6 
(8086 2021): Already using the ioatdma driver 00:07:46.996 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:07:46.996 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:07:46.996 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:07:46.996 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:07:46.996 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:07:46.996 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:07:47.254 11:51:21 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:07:47.254 11:51:21 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:47.254 11:51:21 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:47.254 11:51:21 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:07:47.254 11:51:21 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:07:47.254 11:51:21 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:07:47.254 11:51:21 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:47.254 11:51:21 -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:07:47.254 11:51:21 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:47.254 11:51:21 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:07:47.254 11:51:21 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:47.254 11:51:21 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:47.254 11:51:21 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:47.254 11:51:21 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:07:47.254 11:51:21 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:47.254 11:51:21 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:47.254 11:51:21 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:07:47.254 11:51:21 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:07:47.254 11:51:21 -- scripts/common.sh@390 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:07:47.254 No valid GPT data, bailing 00:07:47.254 11:51:21 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:47.254 11:51:21 -- scripts/common.sh@394 -- # pt= 00:07:47.254 11:51:21 -- scripts/common.sh@395 -- # return 1 00:07:47.254 11:51:21 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:07:47.254 1+0 records in 00:07:47.254 1+0 records out 00:07:47.255 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00572758 s, 183 MB/s 00:07:47.255 11:51:21 -- spdk/autotest.sh@105 -- # sync 00:07:47.255 11:51:21 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:07:47.255 11:51:21 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:07:47.255 11:51:21 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:07:53.817 11:51:26 -- spdk/autotest.sh@111 -- # uname -s 00:07:53.817 11:51:26 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:07:53.817 11:51:26 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:07:53.817 11:51:26 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:07:55.719 Hugepages 00:07:55.719 node hugesize free / total 00:07:55.719 node0 1048576kB 0 / 0 00:07:55.719 node0 2048kB 0 / 0 00:07:55.719 node1 1048576kB 0 / 0 00:07:55.719 node1 2048kB 0 / 0 00:07:55.719 00:07:55.719 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:55.719 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:07:55.719 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:07:55.719 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:07:55.719 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:07:55.719 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:07:55.719 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:07:55.719 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:07:55.719 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:07:55.719 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 
00:07:55.719 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:07:55.719 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:07:55.719 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:07:55.719 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:07:55.719 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:07:55.719 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:07:55.719 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:07:55.719 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:07:55.719 11:51:29 -- spdk/autotest.sh@117 -- # uname -s 00:07:55.719 11:51:29 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:07:55.719 11:51:29 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:07:55.719 11:51:29 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:59.091 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:07:59.091 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:07:59.091 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:07:59.091 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:07:59.091 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:07:59.091 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:07:59.091 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:07:59.091 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:07:59.091 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:07:59.091 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:07:59.091 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:07:59.091 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:07:59.091 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:07:59.091 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:07:59.091 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:07:59.091 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:08:00.031 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:08:00.031 11:51:34 -- common/autotest_common.sh@1517 -- # sleep 1 00:08:01.408 11:51:35 -- common/autotest_common.sh@1518 -- # bdfs=() 00:08:01.408 11:51:35 -- common/autotest_common.sh@1518 -- # local bdfs 
00:08:01.408 11:51:35 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:08:01.408 11:51:35 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:08:01.408 11:51:35 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:01.408 11:51:35 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:01.408 11:51:35 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:01.408 11:51:35 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:08:01.408 11:51:35 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:01.408 11:51:35 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:08:01.408 11:51:35 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:08:01.408 11:51:35 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:08:03.939 Waiting for block devices as requested 00:08:03.939 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:08:04.198 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:04.198 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:04.198 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:04.198 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:04.456 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:04.456 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:04.456 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:04.714 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:04.714 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:04.714 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:04.972 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:04.972 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:04.972 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:04.972 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:05.229 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:05.229 0000:80:04.0 (8086 2021): vfio-pci 
-> ioatdma 00:08:05.229 11:51:39 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:08:05.229 11:51:39 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:08:05.230 11:51:39 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:08:05.230 11:51:39 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:08:05.230 11:51:39 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:08:05.230 11:51:39 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:08:05.230 11:51:39 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:08:05.230 11:51:39 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:08:05.230 11:51:39 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:08:05.230 11:51:39 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:08:05.230 11:51:39 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:08:05.230 11:51:39 -- common/autotest_common.sh@1531 -- # grep oacs 00:08:05.230 11:51:39 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:08:05.487 11:51:39 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:08:05.487 11:51:39 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:08:05.487 11:51:39 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:08:05.487 11:51:39 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:08:05.487 11:51:39 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:08:05.487 11:51:39 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:05.487 11:51:39 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:08:05.487 11:51:39 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:08:05.487 11:51:39 -- common/autotest_common.sh@1543 -- # continue 00:08:05.487 11:51:39 -- spdk/autotest.sh@122 -- # timing_exit 
pre_cleanup 00:08:05.487 11:51:39 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:05.487 11:51:39 -- common/autotest_common.sh@10 -- # set +x 00:08:05.487 11:51:39 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:08:05.487 11:51:39 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:05.487 11:51:39 -- common/autotest_common.sh@10 -- # set +x 00:08:05.487 11:51:39 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:08:08.098 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:08:08.098 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:08:08.358 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:08:08.358 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:08:08.358 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:08:08.358 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:08:08.358 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:08:08.358 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:08:08.358 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:08:08.358 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:08:08.358 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:08:08.358 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:08:08.358 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:08:08.358 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:08:08.358 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:08:08.358 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:08:09.734 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:08:09.734 11:51:43 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:08:09.734 11:51:43 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:09.734 11:51:43 -- common/autotest_common.sh@10 -- # set +x 00:08:09.993 11:51:43 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:08:09.993 11:51:43 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:08:09.993 11:51:43 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:08:09.993 11:51:43 -- common/autotest_common.sh@1563 -- # 
bdfs=() 00:08:09.993 11:51:43 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:08:09.993 11:51:43 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:08:09.993 11:51:43 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:08:09.993 11:51:43 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:08:09.993 11:51:43 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:09.993 11:51:43 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:09.993 11:51:43 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:09.993 11:51:43 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:08:09.993 11:51:43 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:09.993 11:51:44 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:08:09.993 11:51:44 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:08:09.993 11:51:44 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:08:09.993 11:51:44 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:08:09.993 11:51:44 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:08:09.993 11:51:44 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:08:09.993 11:51:44 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:08:09.993 11:51:44 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:08:09.993 11:51:44 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:08:09.993 11:51:44 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:08:09.993 11:51:44 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=4082601 00:08:09.993 11:51:44 -- common/autotest_common.sh@1585 -- # waitforlisten 4082601 00:08:09.993 11:51:44 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:09.993 11:51:44 -- 
common/autotest_common.sh@835 -- # '[' -z 4082601 ']' 00:08:09.993 11:51:44 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.993 11:51:44 -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:09.993 11:51:44 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.993 11:51:44 -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:09.993 11:51:44 -- common/autotest_common.sh@10 -- # set +x 00:08:09.993 [2024-12-05 11:51:44.080480] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:08:09.993 [2024-12-05 11:51:44.080528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4082601 ] 00:08:09.993 [2024-12-05 11:51:44.156485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.252 [2024-12-05 11:51:44.199090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.252 11:51:44 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:10.252 11:51:44 -- common/autotest_common.sh@868 -- # return 0 00:08:10.252 11:51:44 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:08:10.252 11:51:44 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:08:10.252 11:51:44 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:08:13.552 nvme0n1 00:08:13.552 11:51:47 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:08:13.552 [2024-12-05 11:51:47.587947] vbdev_opal_rpc.c: 
125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:08:13.552 request: 00:08:13.552 { 00:08:13.552 "nvme_ctrlr_name": "nvme0", 00:08:13.552 "password": "test", 00:08:13.553 "method": "bdev_nvme_opal_revert", 00:08:13.553 "req_id": 1 00:08:13.553 } 00:08:13.553 Got JSON-RPC error response 00:08:13.553 response: 00:08:13.553 { 00:08:13.553 "code": -32602, 00:08:13.553 "message": "Invalid parameters" 00:08:13.553 } 00:08:13.553 11:51:47 -- common/autotest_common.sh@1591 -- # true 00:08:13.553 11:51:47 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:08:13.553 11:51:47 -- common/autotest_common.sh@1595 -- # killprocess 4082601 00:08:13.553 11:51:47 -- common/autotest_common.sh@954 -- # '[' -z 4082601 ']' 00:08:13.553 11:51:47 -- common/autotest_common.sh@958 -- # kill -0 4082601 00:08:13.553 11:51:47 -- common/autotest_common.sh@959 -- # uname 00:08:13.553 11:51:47 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:13.553 11:51:47 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4082601 00:08:13.553 11:51:47 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:13.553 11:51:47 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:13.553 11:51:47 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4082601' 00:08:13.553 killing process with pid 4082601 00:08:13.553 11:51:47 -- common/autotest_common.sh@973 -- # kill 4082601 00:08:13.553 11:51:47 -- common/autotest_common.sh@978 -- # wait 4082601 00:08:16.090 11:51:49 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:08:16.090 11:51:49 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:08:16.090 11:51:49 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:16.090 11:51:49 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:16.090 11:51:49 -- spdk/autotest.sh@149 -- # timing_enter lib 00:08:16.090 11:51:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:16.090 11:51:49 -- common/autotest_common.sh@10 -- # set +x 00:08:16.090 
11:51:49 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:08:16.090 11:51:49 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:08:16.090 11:51:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:16.090 11:51:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.090 11:51:49 -- common/autotest_common.sh@10 -- # set +x 00:08:16.090 ************************************ 00:08:16.090 START TEST env 00:08:16.090 ************************************ 00:08:16.090 11:51:49 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:08:16.090 * Looking for test storage... 00:08:16.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:08:16.090 11:51:49 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:16.090 11:51:49 env -- common/autotest_common.sh@1711 -- # lcov --version 00:08:16.090 11:51:49 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:16.090 11:51:50 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:16.090 11:51:50 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:16.090 11:51:50 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:16.090 11:51:50 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:16.090 11:51:50 env -- scripts/common.sh@336 -- # IFS=.-: 00:08:16.090 11:51:50 env -- scripts/common.sh@336 -- # read -ra ver1 00:08:16.090 11:51:50 env -- scripts/common.sh@337 -- # IFS=.-: 00:08:16.090 11:51:50 env -- scripts/common.sh@337 -- # read -ra ver2 00:08:16.090 11:51:50 env -- scripts/common.sh@338 -- # local 'op=<' 00:08:16.090 11:51:50 env -- scripts/common.sh@340 -- # ver1_l=2 00:08:16.090 11:51:50 env -- scripts/common.sh@341 -- # ver2_l=1 00:08:16.090 11:51:50 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:16.090 11:51:50 env -- scripts/common.sh@344 -- # case "$op" in 00:08:16.090 11:51:50 env -- 
scripts/common.sh@345 -- # : 1 00:08:16.090 11:51:50 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:16.090 11:51:50 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:16.090 11:51:50 env -- scripts/common.sh@365 -- # decimal 1 00:08:16.090 11:51:50 env -- scripts/common.sh@353 -- # local d=1 00:08:16.090 11:51:50 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:16.090 11:51:50 env -- scripts/common.sh@355 -- # echo 1 00:08:16.090 11:51:50 env -- scripts/common.sh@365 -- # ver1[v]=1 00:08:16.090 11:51:50 env -- scripts/common.sh@366 -- # decimal 2 00:08:16.090 11:51:50 env -- scripts/common.sh@353 -- # local d=2 00:08:16.090 11:51:50 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:16.090 11:51:50 env -- scripts/common.sh@355 -- # echo 2 00:08:16.090 11:51:50 env -- scripts/common.sh@366 -- # ver2[v]=2 00:08:16.090 11:51:50 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:16.090 11:51:50 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:16.090 11:51:50 env -- scripts/common.sh@368 -- # return 0 00:08:16.090 11:51:50 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:16.090 11:51:50 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:16.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.090 --rc genhtml_branch_coverage=1 00:08:16.090 --rc genhtml_function_coverage=1 00:08:16.090 --rc genhtml_legend=1 00:08:16.090 --rc geninfo_all_blocks=1 00:08:16.090 --rc geninfo_unexecuted_blocks=1 00:08:16.090 00:08:16.090 ' 00:08:16.090 11:51:50 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:16.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.090 --rc genhtml_branch_coverage=1 00:08:16.090 --rc genhtml_function_coverage=1 00:08:16.090 --rc genhtml_legend=1 00:08:16.090 --rc geninfo_all_blocks=1 00:08:16.090 --rc geninfo_unexecuted_blocks=1 
00:08:16.090 00:08:16.090 ' 00:08:16.090 11:51:50 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:16.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.090 --rc genhtml_branch_coverage=1 00:08:16.090 --rc genhtml_function_coverage=1 00:08:16.090 --rc genhtml_legend=1 00:08:16.090 --rc geninfo_all_blocks=1 00:08:16.090 --rc geninfo_unexecuted_blocks=1 00:08:16.090 00:08:16.090 ' 00:08:16.090 11:51:50 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:16.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.090 --rc genhtml_branch_coverage=1 00:08:16.090 --rc genhtml_function_coverage=1 00:08:16.090 --rc genhtml_legend=1 00:08:16.090 --rc geninfo_all_blocks=1 00:08:16.090 --rc geninfo_unexecuted_blocks=1 00:08:16.090 00:08:16.090 ' 00:08:16.090 11:51:50 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:08:16.090 11:51:50 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:16.090 11:51:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.091 11:51:50 env -- common/autotest_common.sh@10 -- # set +x 00:08:16.091 ************************************ 00:08:16.091 START TEST env_memory 00:08:16.091 ************************************ 00:08:16.091 11:51:50 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:08:16.091 00:08:16.091 00:08:16.091 CUnit - A unit testing framework for C - Version 2.1-3 00:08:16.091 http://cunit.sourceforge.net/ 00:08:16.091 00:08:16.091 00:08:16.091 Suite: memory 00:08:16.091 Test: alloc and free memory map ...[2024-12-05 11:51:50.120416] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:16.091 passed 00:08:16.091 Test: mem map translation ...[2024-12-05 11:51:50.138166] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:16.091 [2024-12-05 11:51:50.138181] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:16.091 [2024-12-05 11:51:50.138213] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:16.091 [2024-12-05 11:51:50.138219] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:16.091 passed 00:08:16.091 Test: mem map registration ...[2024-12-05 11:51:50.173811] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:08:16.091 [2024-12-05 11:51:50.173824] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:08:16.091 passed 00:08:16.091 Test: mem map adjacent registrations ...passed 00:08:16.091 00:08:16.091 Run Summary: Type Total Ran Passed Failed Inactive 00:08:16.091 suites 1 1 n/a 0 0 00:08:16.091 tests 4 4 4 0 0 00:08:16.091 asserts 152 152 152 0 n/a 00:08:16.091 00:08:16.091 Elapsed time = 0.134 seconds 00:08:16.091 00:08:16.091 real 0m0.146s 00:08:16.091 user 0m0.136s 00:08:16.091 sys 0m0.010s 00:08:16.091 11:51:50 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.091 11:51:50 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:08:16.091 ************************************ 00:08:16.091 END TEST env_memory 00:08:16.091 ************************************ 
00:08:16.091 11:51:50 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:08:16.091 11:51:50 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:16.091 11:51:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.091 11:51:50 env -- common/autotest_common.sh@10 -- # set +x 00:08:16.351 ************************************ 00:08:16.351 START TEST env_vtophys 00:08:16.351 ************************************ 00:08:16.351 11:51:50 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:08:16.351 EAL: lib.eal log level changed from notice to debug 00:08:16.351 EAL: Detected lcore 0 as core 0 on socket 0 00:08:16.351 EAL: Detected lcore 1 as core 1 on socket 0 00:08:16.351 EAL: Detected lcore 2 as core 2 on socket 0 00:08:16.351 EAL: Detected lcore 3 as core 3 on socket 0 00:08:16.351 EAL: Detected lcore 4 as core 4 on socket 0 00:08:16.351 EAL: Detected lcore 5 as core 5 on socket 0 00:08:16.351 EAL: Detected lcore 6 as core 6 on socket 0 00:08:16.351 EAL: Detected lcore 7 as core 8 on socket 0 00:08:16.351 EAL: Detected lcore 8 as core 9 on socket 0 00:08:16.351 EAL: Detected lcore 9 as core 10 on socket 0 00:08:16.351 EAL: Detected lcore 10 as core 11 on socket 0 00:08:16.351 EAL: Detected lcore 11 as core 12 on socket 0 00:08:16.351 EAL: Detected lcore 12 as core 13 on socket 0 00:08:16.351 EAL: Detected lcore 13 as core 16 on socket 0 00:08:16.351 EAL: Detected lcore 14 as core 17 on socket 0 00:08:16.351 EAL: Detected lcore 15 as core 18 on socket 0 00:08:16.351 EAL: Detected lcore 16 as core 19 on socket 0 00:08:16.352 EAL: Detected lcore 17 as core 20 on socket 0 00:08:16.352 EAL: Detected lcore 18 as core 21 on socket 0 00:08:16.352 EAL: Detected lcore 19 as core 25 on socket 0 00:08:16.352 EAL: Detected lcore 20 as core 26 on socket 0 00:08:16.352 EAL: Detected lcore 21 as core 27 on 
socket 0 00:08:16.352 EAL: Detected lcore 22 as core 28 on socket 0 00:08:16.352 EAL: Detected lcore 23 as core 29 on socket 0 00:08:16.352 EAL: Detected lcore 24 as core 0 on socket 1 00:08:16.352 EAL: Detected lcore 25 as core 1 on socket 1 00:08:16.352 EAL: Detected lcore 26 as core 2 on socket 1 00:08:16.352 EAL: Detected lcore 27 as core 3 on socket 1 00:08:16.352 EAL: Detected lcore 28 as core 4 on socket 1 00:08:16.352 EAL: Detected lcore 29 as core 5 on socket 1 00:08:16.352 EAL: Detected lcore 30 as core 6 on socket 1 00:08:16.352 EAL: Detected lcore 31 as core 8 on socket 1 00:08:16.352 EAL: Detected lcore 32 as core 10 on socket 1 00:08:16.352 EAL: Detected lcore 33 as core 11 on socket 1 00:08:16.352 EAL: Detected lcore 34 as core 12 on socket 1 00:08:16.352 EAL: Detected lcore 35 as core 13 on socket 1 00:08:16.352 EAL: Detected lcore 36 as core 16 on socket 1 00:08:16.352 EAL: Detected lcore 37 as core 17 on socket 1 00:08:16.352 EAL: Detected lcore 38 as core 18 on socket 1 00:08:16.352 EAL: Detected lcore 39 as core 19 on socket 1 00:08:16.352 EAL: Detected lcore 40 as core 20 on socket 1 00:08:16.352 EAL: Detected lcore 41 as core 21 on socket 1 00:08:16.352 EAL: Detected lcore 42 as core 24 on socket 1 00:08:16.352 EAL: Detected lcore 43 as core 25 on socket 1 00:08:16.352 EAL: Detected lcore 44 as core 26 on socket 1 00:08:16.352 EAL: Detected lcore 45 as core 27 on socket 1 00:08:16.352 EAL: Detected lcore 46 as core 28 on socket 1 00:08:16.352 EAL: Detected lcore 47 as core 29 on socket 1 00:08:16.352 EAL: Detected lcore 48 as core 0 on socket 0 00:08:16.352 EAL: Detected lcore 49 as core 1 on socket 0 00:08:16.352 EAL: Detected lcore 50 as core 2 on socket 0 00:08:16.352 EAL: Detected lcore 51 as core 3 on socket 0 00:08:16.352 EAL: Detected lcore 52 as core 4 on socket 0 00:08:16.352 EAL: Detected lcore 53 as core 5 on socket 0 00:08:16.352 EAL: Detected lcore 54 as core 6 on socket 0 00:08:16.352 EAL: Detected lcore 55 as core 8 on socket 0 
00:08:16.352 EAL: Detected lcore 56 as core 9 on socket 0 00:08:16.352 EAL: Detected lcore 57 as core 10 on socket 0 00:08:16.352 EAL: Detected lcore 58 as core 11 on socket 0 00:08:16.352 EAL: Detected lcore 59 as core 12 on socket 0 00:08:16.352 EAL: Detected lcore 60 as core 13 on socket 0 00:08:16.352 EAL: Detected lcore 61 as core 16 on socket 0 00:08:16.352 EAL: Detected lcore 62 as core 17 on socket 0 00:08:16.352 EAL: Detected lcore 63 as core 18 on socket 0 00:08:16.352 EAL: Detected lcore 64 as core 19 on socket 0 00:08:16.352 EAL: Detected lcore 65 as core 20 on socket 0 00:08:16.352 EAL: Detected lcore 66 as core 21 on socket 0 00:08:16.352 EAL: Detected lcore 67 as core 25 on socket 0 00:08:16.352 EAL: Detected lcore 68 as core 26 on socket 0 00:08:16.352 EAL: Detected lcore 69 as core 27 on socket 0 00:08:16.352 EAL: Detected lcore 70 as core 28 on socket 0 00:08:16.352 EAL: Detected lcore 71 as core 29 on socket 0 00:08:16.352 EAL: Detected lcore 72 as core 0 on socket 1 00:08:16.352 EAL: Detected lcore 73 as core 1 on socket 1 00:08:16.352 EAL: Detected lcore 74 as core 2 on socket 1 00:08:16.352 EAL: Detected lcore 75 as core 3 on socket 1 00:08:16.352 EAL: Detected lcore 76 as core 4 on socket 1 00:08:16.352 EAL: Detected lcore 77 as core 5 on socket 1 00:08:16.352 EAL: Detected lcore 78 as core 6 on socket 1 00:08:16.352 EAL: Detected lcore 79 as core 8 on socket 1 00:08:16.352 EAL: Detected lcore 80 as core 10 on socket 1 00:08:16.352 EAL: Detected lcore 81 as core 11 on socket 1 00:08:16.352 EAL: Detected lcore 82 as core 12 on socket 1 00:08:16.352 EAL: Detected lcore 83 as core 13 on socket 1 00:08:16.352 EAL: Detected lcore 84 as core 16 on socket 1 00:08:16.352 EAL: Detected lcore 85 as core 17 on socket 1 00:08:16.352 EAL: Detected lcore 86 as core 18 on socket 1 00:08:16.352 EAL: Detected lcore 87 as core 19 on socket 1 00:08:16.352 EAL: Detected lcore 88 as core 20 on socket 1 00:08:16.352 EAL: Detected lcore 89 as core 21 on socket 1 
00:08:16.352 EAL: Detected lcore 90 as core 24 on socket 1 00:08:16.352 EAL: Detected lcore 91 as core 25 on socket 1 00:08:16.352 EAL: Detected lcore 92 as core 26 on socket 1 00:08:16.352 EAL: Detected lcore 93 as core 27 on socket 1 00:08:16.352 EAL: Detected lcore 94 as core 28 on socket 1 00:08:16.352 EAL: Detected lcore 95 as core 29 on socket 1 00:08:16.352 EAL: Maximum logical cores by configuration: 128 00:08:16.352 EAL: Detected CPU lcores: 96 00:08:16.352 EAL: Detected NUMA nodes: 2 00:08:16.352 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:08:16.352 EAL: Detected shared linkage of DPDK 00:08:16.352 EAL: No shared files mode enabled, IPC will be disabled 00:08:16.352 EAL: Bus pci wants IOVA as 'DC' 00:08:16.352 EAL: Buses did not request a specific IOVA mode. 00:08:16.352 EAL: IOMMU is available, selecting IOVA as VA mode. 00:08:16.352 EAL: Selected IOVA mode 'VA' 00:08:16.352 EAL: Probing VFIO support... 00:08:16.352 EAL: IOMMU type 1 (Type 1) is supported 00:08:16.352 EAL: IOMMU type 7 (sPAPR) is not supported 00:08:16.352 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:08:16.352 EAL: VFIO support initialized 00:08:16.352 EAL: Ask a virtual area of 0x2e000 bytes 00:08:16.352 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:16.352 EAL: Setting up physically contiguous memory... 
00:08:16.352 EAL: Setting maximum number of open files to 524288 00:08:16.352 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:16.352 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:08:16.352 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:16.352 EAL: Ask a virtual area of 0x61000 bytes 00:08:16.352 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:16.352 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:16.352 EAL: Ask a virtual area of 0x400000000 bytes 00:08:16.352 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:16.352 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:16.352 EAL: Ask a virtual area of 0x61000 bytes 00:08:16.352 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:16.352 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:16.352 EAL: Ask a virtual area of 0x400000000 bytes 00:08:16.352 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:16.352 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:16.352 EAL: Ask a virtual area of 0x61000 bytes 00:08:16.352 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:16.352 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:16.352 EAL: Ask a virtual area of 0x400000000 bytes 00:08:16.352 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:16.352 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:16.352 EAL: Ask a virtual area of 0x61000 bytes 00:08:16.352 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:16.352 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:16.352 EAL: Ask a virtual area of 0x400000000 bytes 00:08:16.352 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:16.352 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:16.352 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:08:16.352 EAL: Ask a virtual area of 0x61000 bytes 00:08:16.352 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:08:16.352 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:16.352 EAL: Ask a virtual area of 0x400000000 bytes 00:08:16.352 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:08:16.352 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:08:16.353 EAL: Ask a virtual area of 0x61000 bytes 00:08:16.353 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:08:16.353 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:16.353 EAL: Ask a virtual area of 0x400000000 bytes 00:08:16.353 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:08:16.353 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:08:16.353 EAL: Ask a virtual area of 0x61000 bytes 00:08:16.353 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:08:16.353 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:16.353 EAL: Ask a virtual area of 0x400000000 bytes 00:08:16.353 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:08:16.353 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:08:16.353 EAL: Ask a virtual area of 0x61000 bytes 00:08:16.353 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:08:16.353 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:16.353 EAL: Ask a virtual area of 0x400000000 bytes 00:08:16.353 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:08:16.353 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:08:16.353 EAL: Hugepages will be freed exactly as allocated. 
00:08:16.353 EAL: No shared files mode enabled, IPC is disabled 00:08:16.353 EAL: No shared files mode enabled, IPC is disabled 00:08:16.353 EAL: TSC frequency is ~2100000 KHz 00:08:16.353 EAL: Main lcore 0 is ready (tid=7fc78d5dda00;cpuset=[0]) 00:08:16.353 EAL: Trying to obtain current memory policy. 00:08:16.353 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:16.353 EAL: Restoring previous memory policy: 0 00:08:16.353 EAL: request: mp_malloc_sync 00:08:16.353 EAL: No shared files mode enabled, IPC is disabled 00:08:16.353 EAL: Heap on socket 0 was expanded by 2MB 00:08:16.353 EAL: No shared files mode enabled, IPC is disabled 00:08:16.353 EAL: No PCI address specified using 'addr=' in: bus=pci 00:08:16.353 EAL: Mem event callback 'spdk:(nil)' registered 00:08:16.353 00:08:16.353 00:08:16.353 CUnit - A unit testing framework for C - Version 2.1-3 00:08:16.353 http://cunit.sourceforge.net/ 00:08:16.353 00:08:16.353 00:08:16.353 Suite: components_suite 00:08:16.353 Test: vtophys_malloc_test ...passed 00:08:16.353 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:08:16.353 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:16.353 EAL: Restoring previous memory policy: 4 00:08:16.353 EAL: Calling mem event callback 'spdk:(nil)' 00:08:16.353 EAL: request: mp_malloc_sync 00:08:16.353 EAL: No shared files mode enabled, IPC is disabled 00:08:16.353 EAL: Heap on socket 0 was expanded by 4MB 00:08:16.353 EAL: Calling mem event callback 'spdk:(nil)' 00:08:16.353 EAL: request: mp_malloc_sync 00:08:16.353 EAL: No shared files mode enabled, IPC is disabled 00:08:16.353 EAL: Heap on socket 0 was shrunk by 4MB 00:08:16.353 EAL: Trying to obtain current memory policy. 
00:08:16.353 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:16.353 EAL: Restoring previous memory policy: 4 00:08:16.353 EAL: Calling mem event callback 'spdk:(nil)' 00:08:16.353 EAL: request: mp_malloc_sync 00:08:16.353 EAL: No shared files mode enabled, IPC is disabled 00:08:16.353 EAL: Heap on socket 0 was expanded by 6MB 00:08:16.353 EAL: Calling mem event callback 'spdk:(nil)' 00:08:16.353 EAL: request: mp_malloc_sync 00:08:16.353 EAL: No shared files mode enabled, IPC is disabled 00:08:16.353 EAL: Heap on socket 0 was shrunk by 6MB 00:08:16.353 EAL: Trying to obtain current memory policy. 00:08:16.353 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:16.353 EAL: Restoring previous memory policy: 4 00:08:16.353 EAL: Calling mem event callback 'spdk:(nil)' 00:08:16.353 EAL: request: mp_malloc_sync 00:08:16.353 EAL: No shared files mode enabled, IPC is disabled 00:08:16.353 EAL: Heap on socket 0 was expanded by 10MB 00:08:16.353 EAL: Calling mem event callback 'spdk:(nil)' 00:08:16.353 EAL: request: mp_malloc_sync 00:08:16.353 EAL: No shared files mode enabled, IPC is disabled 00:08:16.353 EAL: Heap on socket 0 was shrunk by 10MB 00:08:16.353 EAL: Trying to obtain current memory policy. 00:08:16.353 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:16.353 EAL: Restoring previous memory policy: 4 00:08:16.353 EAL: Calling mem event callback 'spdk:(nil)' 00:08:16.353 EAL: request: mp_malloc_sync 00:08:16.353 EAL: No shared files mode enabled, IPC is disabled 00:08:16.353 EAL: Heap on socket 0 was expanded by 18MB 00:08:16.353 EAL: Calling mem event callback 'spdk:(nil)' 00:08:16.353 EAL: request: mp_malloc_sync 00:08:16.353 EAL: No shared files mode enabled, IPC is disabled 00:08:16.353 EAL: Heap on socket 0 was shrunk by 18MB 00:08:16.353 EAL: Trying to obtain current memory policy. 
00:08:16.353 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:16.353 EAL: Restoring previous memory policy: 4 00:08:16.353 EAL: Calling mem event callback 'spdk:(nil)' 00:08:16.353 EAL: request: mp_malloc_sync 00:08:16.353 EAL: No shared files mode enabled, IPC is disabled 00:08:16.353 EAL: Heap on socket 0 was expanded by 34MB 00:08:16.353 EAL: Calling mem event callback 'spdk:(nil)' 00:08:16.353 EAL: request: mp_malloc_sync 00:08:16.353 EAL: No shared files mode enabled, IPC is disabled 00:08:16.353 EAL: Heap on socket 0 was shrunk by 34MB 00:08:16.353 EAL: Trying to obtain current memory policy. 00:08:16.353 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:16.353 EAL: Restoring previous memory policy: 4 00:08:16.353 EAL: Calling mem event callback 'spdk:(nil)' 00:08:16.353 EAL: request: mp_malloc_sync 00:08:16.353 EAL: No shared files mode enabled, IPC is disabled 00:08:16.353 EAL: Heap on socket 0 was expanded by 66MB 00:08:16.353 EAL: Calling mem event callback 'spdk:(nil)' 00:08:16.353 EAL: request: mp_malloc_sync 00:08:16.353 EAL: No shared files mode enabled, IPC is disabled 00:08:16.353 EAL: Heap on socket 0 was shrunk by 66MB 00:08:16.353 EAL: Trying to obtain current memory policy. 00:08:16.353 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:16.353 EAL: Restoring previous memory policy: 4 00:08:16.353 EAL: Calling mem event callback 'spdk:(nil)' 00:08:16.353 EAL: request: mp_malloc_sync 00:08:16.353 EAL: No shared files mode enabled, IPC is disabled 00:08:16.353 EAL: Heap on socket 0 was expanded by 130MB 00:08:16.353 EAL: Calling mem event callback 'spdk:(nil)' 00:08:16.353 EAL: request: mp_malloc_sync 00:08:16.353 EAL: No shared files mode enabled, IPC is disabled 00:08:16.353 EAL: Heap on socket 0 was shrunk by 130MB 00:08:16.353 EAL: Trying to obtain current memory policy. 
00:08:16.353 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:16.353 EAL: Restoring previous memory policy: 4 00:08:16.353 EAL: Calling mem event callback 'spdk:(nil)' 00:08:16.353 EAL: request: mp_malloc_sync 00:08:16.353 EAL: No shared files mode enabled, IPC is disabled 00:08:16.353 EAL: Heap on socket 0 was expanded by 258MB 00:08:16.613 EAL: Calling mem event callback 'spdk:(nil)' 00:08:16.613 EAL: request: mp_malloc_sync 00:08:16.613 EAL: No shared files mode enabled, IPC is disabled 00:08:16.613 EAL: Heap on socket 0 was shrunk by 258MB 00:08:16.613 EAL: Trying to obtain current memory policy. 00:08:16.613 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:16.613 EAL: Restoring previous memory policy: 4 00:08:16.613 EAL: Calling mem event callback 'spdk:(nil)' 00:08:16.613 EAL: request: mp_malloc_sync 00:08:16.614 EAL: No shared files mode enabled, IPC is disabled 00:08:16.614 EAL: Heap on socket 0 was expanded by 514MB 00:08:16.614 EAL: Calling mem event callback 'spdk:(nil)' 00:08:16.873 EAL: request: mp_malloc_sync 00:08:16.873 EAL: No shared files mode enabled, IPC is disabled 00:08:16.873 EAL: Heap on socket 0 was shrunk by 514MB 00:08:16.873 EAL: Trying to obtain current memory policy. 
00:08:16.873 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:16.873 EAL: Restoring previous memory policy: 4 00:08:16.873 EAL: Calling mem event callback 'spdk:(nil)' 00:08:16.873 EAL: request: mp_malloc_sync 00:08:16.873 EAL: No shared files mode enabled, IPC is disabled 00:08:16.873 EAL: Heap on socket 0 was expanded by 1026MB 00:08:17.164 EAL: Calling mem event callback 'spdk:(nil)' 00:08:17.423 EAL: request: mp_malloc_sync 00:08:17.423 EAL: No shared files mode enabled, IPC is disabled 00:08:17.423 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:17.423 passed 00:08:17.423 00:08:17.423 Run Summary: Type Total Ran Passed Failed Inactive 00:08:17.423 suites 1 1 n/a 0 0 00:08:17.423 tests 2 2 2 0 0 00:08:17.423 asserts 497 497 497 0 n/a 00:08:17.423 00:08:17.423 Elapsed time = 0.964 seconds 00:08:17.423 EAL: Calling mem event callback 'spdk:(nil)' 00:08:17.423 EAL: request: mp_malloc_sync 00:08:17.423 EAL: No shared files mode enabled, IPC is disabled 00:08:17.423 EAL: Heap on socket 0 was shrunk by 2MB 00:08:17.423 EAL: No shared files mode enabled, IPC is disabled 00:08:17.423 EAL: No shared files mode enabled, IPC is disabled 00:08:17.423 EAL: No shared files mode enabled, IPC is disabled 00:08:17.423 00:08:17.423 real 0m1.093s 00:08:17.423 user 0m0.643s 00:08:17.423 sys 0m0.422s 00:08:17.423 11:51:51 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.424 11:51:51 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:08:17.424 ************************************ 00:08:17.424 END TEST env_vtophys 00:08:17.424 ************************************ 00:08:17.424 11:51:51 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:08:17.424 11:51:51 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:17.424 11:51:51 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.424 11:51:51 env -- common/autotest_common.sh@10 -- # set +x 00:08:17.424 
************************************ 00:08:17.424 START TEST env_pci 00:08:17.424 ************************************ 00:08:17.424 11:51:51 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:08:17.424 00:08:17.424 00:08:17.424 CUnit - A unit testing framework for C - Version 2.1-3 00:08:17.424 http://cunit.sourceforge.net/ 00:08:17.424 00:08:17.424 00:08:17.424 Suite: pci 00:08:17.424 Test: pci_hook ...[2024-12-05 11:51:51.468494] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 4083916 has claimed it 00:08:17.424 EAL: Cannot find device (10000:00:01.0) 00:08:17.424 EAL: Failed to attach device on primary process 00:08:17.424 passed 00:08:17.424 00:08:17.424 Run Summary: Type Total Ran Passed Failed Inactive 00:08:17.424 suites 1 1 n/a 0 0 00:08:17.424 tests 1 1 1 0 0 00:08:17.424 asserts 25 25 25 0 n/a 00:08:17.424 00:08:17.424 Elapsed time = 0.025 seconds 00:08:17.424 00:08:17.424 real 0m0.045s 00:08:17.424 user 0m0.016s 00:08:17.424 sys 0m0.029s 00:08:17.424 11:51:51 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.424 11:51:51 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:08:17.424 ************************************ 00:08:17.424 END TEST env_pci 00:08:17.424 ************************************ 00:08:17.424 11:51:51 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:17.424 11:51:51 env -- env/env.sh@15 -- # uname 00:08:17.424 11:51:51 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:17.424 11:51:51 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:08:17.424 11:51:51 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:17.424 11:51:51 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:17.424 11:51:51 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.424 11:51:51 env -- common/autotest_common.sh@10 -- # set +x 00:08:17.424 ************************************ 00:08:17.424 START TEST env_dpdk_post_init 00:08:17.424 ************************************ 00:08:17.424 11:51:51 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:17.424 EAL: Detected CPU lcores: 96 00:08:17.424 EAL: Detected NUMA nodes: 2 00:08:17.424 EAL: Detected shared linkage of DPDK 00:08:17.424 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:17.683 EAL: Selected IOVA mode 'VA' 00:08:17.683 EAL: VFIO support initialized 00:08:17.683 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:17.683 EAL: Using IOMMU type 1 (Type 1) 00:08:17.683 EAL: Ignore mapping IO port bar(1) 00:08:17.683 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:08:17.683 EAL: Ignore mapping IO port bar(1) 00:08:17.683 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:08:17.683 EAL: Ignore mapping IO port bar(1) 00:08:17.683 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:08:17.683 EAL: Ignore mapping IO port bar(1) 00:08:17.683 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:08:17.683 EAL: Ignore mapping IO port bar(1) 00:08:17.683 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:08:17.683 EAL: Ignore mapping IO port bar(1) 00:08:17.683 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:08:17.683 EAL: Ignore mapping IO port bar(1) 00:08:17.683 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:08:17.683 EAL: Ignore mapping IO port bar(1) 00:08:17.683 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:08:18.617 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:08:18.617 EAL: Ignore mapping IO port bar(1) 00:08:18.617 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:08:18.617 EAL: Ignore mapping IO port bar(1) 00:08:18.617 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:08:18.617 EAL: Ignore mapping IO port bar(1) 00:08:18.617 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:08:18.617 EAL: Ignore mapping IO port bar(1) 00:08:18.618 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:08:18.618 EAL: Ignore mapping IO port bar(1) 00:08:18.618 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:08:18.618 EAL: Ignore mapping IO port bar(1) 00:08:18.618 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:08:18.618 EAL: Ignore mapping IO port bar(1) 00:08:18.618 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:08:18.618 EAL: Ignore mapping IO port bar(1) 00:08:18.618 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:08:22.806 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:08:22.806 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:08:22.806 Starting DPDK initialization... 00:08:22.806 Starting SPDK post initialization... 00:08:22.806 SPDK NVMe probe 00:08:22.806 Attaching to 0000:5e:00.0 00:08:22.806 Attached to 0000:5e:00.0 00:08:22.806 Cleaning up... 
00:08:22.806 00:08:22.806 real 0m4.968s 00:08:22.806 user 0m3.546s 00:08:22.806 sys 0m0.488s 00:08:22.806 11:51:56 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.806 11:51:56 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:08:22.806 ************************************ 00:08:22.806 END TEST env_dpdk_post_init 00:08:22.806 ************************************ 00:08:22.806 11:51:56 env -- env/env.sh@26 -- # uname 00:08:22.806 11:51:56 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:22.806 11:51:56 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:08:22.806 11:51:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:22.806 11:51:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.806 11:51:56 env -- common/autotest_common.sh@10 -- # set +x 00:08:22.806 ************************************ 00:08:22.806 START TEST env_mem_callbacks 00:08:22.806 ************************************ 00:08:22.806 11:51:56 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:08:22.806 EAL: Detected CPU lcores: 96 00:08:22.806 EAL: Detected NUMA nodes: 2 00:08:22.806 EAL: Detected shared linkage of DPDK 00:08:22.806 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:22.806 EAL: Selected IOVA mode 'VA' 00:08:22.806 EAL: VFIO support initialized 00:08:22.806 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:22.806 00:08:22.806 00:08:22.806 CUnit - A unit testing framework for C - Version 2.1-3 00:08:22.806 http://cunit.sourceforge.net/ 00:08:22.806 00:08:22.806 00:08:22.806 Suite: memory 00:08:22.806 Test: test ... 
00:08:22.806 register 0x200000200000 2097152 00:08:22.806 malloc 3145728 00:08:22.806 register 0x200000400000 4194304 00:08:22.806 buf 0x200000500000 len 3145728 PASSED 00:08:22.806 malloc 64 00:08:22.806 buf 0x2000004fff40 len 64 PASSED 00:08:22.806 malloc 4194304 00:08:22.806 register 0x200000800000 6291456 00:08:22.806 buf 0x200000a00000 len 4194304 PASSED 00:08:22.806 free 0x200000500000 3145728 00:08:22.806 free 0x2000004fff40 64 00:08:22.806 unregister 0x200000400000 4194304 PASSED 00:08:22.806 free 0x200000a00000 4194304 00:08:22.806 unregister 0x200000800000 6291456 PASSED 00:08:22.806 malloc 8388608 00:08:22.806 register 0x200000400000 10485760 00:08:22.806 buf 0x200000600000 len 8388608 PASSED 00:08:22.806 free 0x200000600000 8388608 00:08:22.806 unregister 0x200000400000 10485760 PASSED 00:08:22.806 passed 00:08:22.806 00:08:22.806 Run Summary: Type Total Ran Passed Failed Inactive 00:08:22.806 suites 1 1 n/a 0 0 00:08:22.806 tests 1 1 1 0 0 00:08:22.806 asserts 15 15 15 0 n/a 00:08:22.806 00:08:22.806 Elapsed time = 0.008 seconds 00:08:22.806 00:08:22.806 real 0m0.057s 00:08:22.806 user 0m0.021s 00:08:22.806 sys 0m0.036s 00:08:22.806 11:51:56 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.806 11:51:56 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:08:22.806 ************************************ 00:08:22.806 END TEST env_mem_callbacks 00:08:22.806 ************************************ 00:08:22.806 00:08:22.806 real 0m6.839s 00:08:22.806 user 0m4.602s 00:08:22.806 sys 0m1.309s 00:08:22.806 11:51:56 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.806 11:51:56 env -- common/autotest_common.sh@10 -- # set +x 00:08:22.806 ************************************ 00:08:22.806 END TEST env 00:08:22.806 ************************************ 00:08:22.806 11:51:56 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:08:22.806 11:51:56 
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:22.806 11:51:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.806 11:51:56 -- common/autotest_common.sh@10 -- # set +x 00:08:22.806 ************************************ 00:08:22.806 START TEST rpc 00:08:22.806 ************************************ 00:08:22.806 11:51:56 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:08:22.806 * Looking for test storage... 00:08:22.806 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:22.806 11:51:56 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:22.806 11:51:56 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:08:22.806 11:51:56 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:22.806 11:51:56 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:22.806 11:51:56 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:22.806 11:51:56 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:22.806 11:51:56 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:22.806 11:51:56 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:22.806 11:51:56 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:22.806 11:51:56 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:22.806 11:51:56 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:22.806 11:51:56 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:22.806 11:51:56 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:22.806 11:51:56 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:22.806 11:51:56 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:22.806 11:51:56 rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:22.806 11:51:56 rpc -- scripts/common.sh@345 -- # : 1 00:08:22.806 11:51:56 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:22.806 11:51:56 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:22.806 11:51:56 rpc -- scripts/common.sh@365 -- # decimal 1 00:08:22.807 11:51:56 rpc -- scripts/common.sh@353 -- # local d=1 00:08:22.807 11:51:56 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:22.807 11:51:56 rpc -- scripts/common.sh@355 -- # echo 1 00:08:22.807 11:51:56 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:22.807 11:51:56 rpc -- scripts/common.sh@366 -- # decimal 2 00:08:22.807 11:51:56 rpc -- scripts/common.sh@353 -- # local d=2 00:08:22.807 11:51:56 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:22.807 11:51:56 rpc -- scripts/common.sh@355 -- # echo 2 00:08:22.807 11:51:56 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:22.807 11:51:56 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:22.807 11:51:56 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:22.807 11:51:56 rpc -- scripts/common.sh@368 -- # return 0 00:08:22.807 11:51:56 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:22.807 11:51:56 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:22.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.807 --rc genhtml_branch_coverage=1 00:08:22.807 --rc genhtml_function_coverage=1 00:08:22.807 --rc genhtml_legend=1 00:08:22.807 --rc geninfo_all_blocks=1 00:08:22.807 --rc geninfo_unexecuted_blocks=1 00:08:22.807 00:08:22.807 ' 00:08:22.807 11:51:56 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:22.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.807 --rc genhtml_branch_coverage=1 00:08:22.807 --rc genhtml_function_coverage=1 00:08:22.807 --rc genhtml_legend=1 00:08:22.807 --rc geninfo_all_blocks=1 00:08:22.807 --rc geninfo_unexecuted_blocks=1 00:08:22.807 00:08:22.807 ' 00:08:22.807 11:51:56 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:22.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:08:22.807 --rc genhtml_branch_coverage=1 00:08:22.807 --rc genhtml_function_coverage=1 00:08:22.807 --rc genhtml_legend=1 00:08:22.807 --rc geninfo_all_blocks=1 00:08:22.807 --rc geninfo_unexecuted_blocks=1 00:08:22.807 00:08:22.807 ' 00:08:22.807 11:51:56 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:22.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.807 --rc genhtml_branch_coverage=1 00:08:22.807 --rc genhtml_function_coverage=1 00:08:22.807 --rc genhtml_legend=1 00:08:22.807 --rc geninfo_all_blocks=1 00:08:22.807 --rc geninfo_unexecuted_blocks=1 00:08:22.807 00:08:22.807 ' 00:08:22.807 11:51:56 rpc -- rpc/rpc.sh@65 -- # spdk_pid=4084970 00:08:22.807 11:51:56 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:22.807 11:51:56 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:08:22.807 11:51:56 rpc -- rpc/rpc.sh@67 -- # waitforlisten 4084970 00:08:22.807 11:51:56 rpc -- common/autotest_common.sh@835 -- # '[' -z 4084970 ']' 00:08:22.807 11:51:56 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.807 11:51:56 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:22.807 11:51:56 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.807 11:51:56 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:22.807 11:51:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.066 [2024-12-05 11:51:57.009731] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:08:23.066 [2024-12-05 11:51:57.009783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4084970 ] 00:08:23.066 [2024-12-05 11:51:57.082835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.066 [2024-12-05 11:51:57.122445] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:23.066 [2024-12-05 11:51:57.122481] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 4084970' to capture a snapshot of events at runtime. 00:08:23.066 [2024-12-05 11:51:57.122488] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:23.066 [2024-12-05 11:51:57.122494] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:23.066 [2024-12-05 11:51:57.122498] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid4084970 for offline analysis/debug. 
00:08:23.066 [2024-12-05 11:51:57.123080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.325 11:51:57 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:23.325 11:51:57 rpc -- common/autotest_common.sh@868 -- # return 0 00:08:23.325 11:51:57 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:23.325 11:51:57 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:23.325 11:51:57 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:23.325 11:51:57 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:23.325 11:51:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:23.325 11:51:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.325 11:51:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.325 ************************************ 00:08:23.325 START TEST rpc_integrity 00:08:23.325 ************************************ 00:08:23.325 11:51:57 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:08:23.325 11:51:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:23.325 11:51:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.325 11:51:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:23.325 11:51:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.325 11:51:57 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:08:23.325 11:51:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:23.325 11:51:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:23.325 11:51:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:23.325 11:51:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.325 11:51:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:23.325 11:51:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.325 11:51:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:23.325 11:51:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:23.325 11:51:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.325 11:51:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:23.325 11:51:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.325 11:51:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:23.325 { 00:08:23.325 "name": "Malloc0", 00:08:23.325 "aliases": [ 00:08:23.325 "540107c2-1249-4dc9-ab85-d326ca65db44" 00:08:23.325 ], 00:08:23.325 "product_name": "Malloc disk", 00:08:23.325 "block_size": 512, 00:08:23.325 "num_blocks": 16384, 00:08:23.325 "uuid": "540107c2-1249-4dc9-ab85-d326ca65db44", 00:08:23.325 "assigned_rate_limits": { 00:08:23.325 "rw_ios_per_sec": 0, 00:08:23.325 "rw_mbytes_per_sec": 0, 00:08:23.325 "r_mbytes_per_sec": 0, 00:08:23.325 "w_mbytes_per_sec": 0 00:08:23.325 }, 00:08:23.325 "claimed": false, 00:08:23.325 "zoned": false, 00:08:23.325 "supported_io_types": { 00:08:23.325 "read": true, 00:08:23.325 "write": true, 00:08:23.325 "unmap": true, 00:08:23.325 "flush": true, 00:08:23.325 "reset": true, 00:08:23.325 "nvme_admin": false, 00:08:23.325 "nvme_io": false, 00:08:23.325 "nvme_io_md": false, 00:08:23.325 "write_zeroes": true, 00:08:23.325 "zcopy": true, 00:08:23.325 "get_zone_info": false, 00:08:23.325 
"zone_management": false, 00:08:23.325 "zone_append": false, 00:08:23.325 "compare": false, 00:08:23.325 "compare_and_write": false, 00:08:23.325 "abort": true, 00:08:23.325 "seek_hole": false, 00:08:23.325 "seek_data": false, 00:08:23.325 "copy": true, 00:08:23.325 "nvme_iov_md": false 00:08:23.325 }, 00:08:23.325 "memory_domains": [ 00:08:23.325 { 00:08:23.325 "dma_device_id": "system", 00:08:23.325 "dma_device_type": 1 00:08:23.325 }, 00:08:23.325 { 00:08:23.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.325 "dma_device_type": 2 00:08:23.325 } 00:08:23.325 ], 00:08:23.325 "driver_specific": {} 00:08:23.325 } 00:08:23.325 ]' 00:08:23.325 11:51:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:23.325 11:51:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:23.325 11:51:57 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:23.325 11:51:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.325 11:51:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:23.325 [2024-12-05 11:51:57.515863] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:23.325 [2024-12-05 11:51:57.515891] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:23.325 [2024-12-05 11:51:57.515903] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c94c00 00:08:23.325 [2024-12-05 11:51:57.515909] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:23.325 [2024-12-05 11:51:57.516983] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:23.325 [2024-12-05 11:51:57.517002] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:23.325 Passthru0 00:08:23.325 11:51:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.325 11:51:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:08:23.325 11:51:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.585 11:51:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:23.585 11:51:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.585 11:51:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:23.585 { 00:08:23.585 "name": "Malloc0", 00:08:23.585 "aliases": [ 00:08:23.585 "540107c2-1249-4dc9-ab85-d326ca65db44" 00:08:23.585 ], 00:08:23.585 "product_name": "Malloc disk", 00:08:23.585 "block_size": 512, 00:08:23.585 "num_blocks": 16384, 00:08:23.585 "uuid": "540107c2-1249-4dc9-ab85-d326ca65db44", 00:08:23.585 "assigned_rate_limits": { 00:08:23.585 "rw_ios_per_sec": 0, 00:08:23.585 "rw_mbytes_per_sec": 0, 00:08:23.585 "r_mbytes_per_sec": 0, 00:08:23.585 "w_mbytes_per_sec": 0 00:08:23.585 }, 00:08:23.585 "claimed": true, 00:08:23.585 "claim_type": "exclusive_write", 00:08:23.585 "zoned": false, 00:08:23.585 "supported_io_types": { 00:08:23.585 "read": true, 00:08:23.585 "write": true, 00:08:23.585 "unmap": true, 00:08:23.585 "flush": true, 00:08:23.585 "reset": true, 00:08:23.585 "nvme_admin": false, 00:08:23.585 "nvme_io": false, 00:08:23.585 "nvme_io_md": false, 00:08:23.585 "write_zeroes": true, 00:08:23.585 "zcopy": true, 00:08:23.585 "get_zone_info": false, 00:08:23.585 "zone_management": false, 00:08:23.585 "zone_append": false, 00:08:23.585 "compare": false, 00:08:23.585 "compare_and_write": false, 00:08:23.585 "abort": true, 00:08:23.585 "seek_hole": false, 00:08:23.585 "seek_data": false, 00:08:23.585 "copy": true, 00:08:23.585 "nvme_iov_md": false 00:08:23.585 }, 00:08:23.585 "memory_domains": [ 00:08:23.585 { 00:08:23.585 "dma_device_id": "system", 00:08:23.585 "dma_device_type": 1 00:08:23.585 }, 00:08:23.585 { 00:08:23.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.585 "dma_device_type": 2 00:08:23.585 } 00:08:23.585 ], 00:08:23.585 "driver_specific": {} 00:08:23.585 }, 00:08:23.585 { 
00:08:23.585 "name": "Passthru0", 00:08:23.585 "aliases": [ 00:08:23.585 "67db4807-c17b-51e4-bc2c-29aeb1a86836" 00:08:23.585 ], 00:08:23.585 "product_name": "passthru", 00:08:23.585 "block_size": 512, 00:08:23.585 "num_blocks": 16384, 00:08:23.585 "uuid": "67db4807-c17b-51e4-bc2c-29aeb1a86836", 00:08:23.585 "assigned_rate_limits": { 00:08:23.585 "rw_ios_per_sec": 0, 00:08:23.585 "rw_mbytes_per_sec": 0, 00:08:23.585 "r_mbytes_per_sec": 0, 00:08:23.585 "w_mbytes_per_sec": 0 00:08:23.585 }, 00:08:23.585 "claimed": false, 00:08:23.585 "zoned": false, 00:08:23.585 "supported_io_types": { 00:08:23.585 "read": true, 00:08:23.585 "write": true, 00:08:23.585 "unmap": true, 00:08:23.585 "flush": true, 00:08:23.585 "reset": true, 00:08:23.585 "nvme_admin": false, 00:08:23.585 "nvme_io": false, 00:08:23.585 "nvme_io_md": false, 00:08:23.585 "write_zeroes": true, 00:08:23.585 "zcopy": true, 00:08:23.585 "get_zone_info": false, 00:08:23.585 "zone_management": false, 00:08:23.585 "zone_append": false, 00:08:23.585 "compare": false, 00:08:23.585 "compare_and_write": false, 00:08:23.585 "abort": true, 00:08:23.585 "seek_hole": false, 00:08:23.585 "seek_data": false, 00:08:23.585 "copy": true, 00:08:23.585 "nvme_iov_md": false 00:08:23.585 }, 00:08:23.585 "memory_domains": [ 00:08:23.585 { 00:08:23.585 "dma_device_id": "system", 00:08:23.585 "dma_device_type": 1 00:08:23.585 }, 00:08:23.585 { 00:08:23.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.585 "dma_device_type": 2 00:08:23.585 } 00:08:23.585 ], 00:08:23.585 "driver_specific": { 00:08:23.585 "passthru": { 00:08:23.585 "name": "Passthru0", 00:08:23.585 "base_bdev_name": "Malloc0" 00:08:23.585 } 00:08:23.585 } 00:08:23.585 } 00:08:23.585 ]' 00:08:23.585 11:51:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:23.585 11:51:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:23.585 11:51:57 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:23.585 11:51:57 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.585 11:51:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:23.585 11:51:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.585 11:51:57 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:23.585 11:51:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.585 11:51:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:23.585 11:51:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.585 11:51:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:23.585 11:51:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.585 11:51:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:23.585 11:51:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.585 11:51:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:23.585 11:51:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:23.585 11:51:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:23.585 00:08:23.585 real 0m0.280s 00:08:23.585 user 0m0.174s 00:08:23.585 sys 0m0.034s 00:08:23.585 11:51:57 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.585 11:51:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:23.585 ************************************ 00:08:23.585 END TEST rpc_integrity 00:08:23.585 ************************************ 00:08:23.585 11:51:57 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:23.585 11:51:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:23.585 11:51:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.585 11:51:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.585 ************************************ 00:08:23.585 START TEST rpc_plugins 
00:08:23.585 ************************************ 00:08:23.585 11:51:57 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:08:23.585 11:51:57 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:08:23.585 11:51:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.585 11:51:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:23.585 11:51:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.585 11:51:57 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:23.585 11:51:57 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:23.585 11:51:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.585 11:51:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:23.585 11:51:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.585 11:51:57 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:23.585 { 00:08:23.585 "name": "Malloc1", 00:08:23.585 "aliases": [ 00:08:23.585 "95b6c185-f3f6-4893-b4c3-5851b5957b27" 00:08:23.585 ], 00:08:23.585 "product_name": "Malloc disk", 00:08:23.585 "block_size": 4096, 00:08:23.585 "num_blocks": 256, 00:08:23.585 "uuid": "95b6c185-f3f6-4893-b4c3-5851b5957b27", 00:08:23.585 "assigned_rate_limits": { 00:08:23.585 "rw_ios_per_sec": 0, 00:08:23.585 "rw_mbytes_per_sec": 0, 00:08:23.585 "r_mbytes_per_sec": 0, 00:08:23.585 "w_mbytes_per_sec": 0 00:08:23.585 }, 00:08:23.585 "claimed": false, 00:08:23.585 "zoned": false, 00:08:23.585 "supported_io_types": { 00:08:23.585 "read": true, 00:08:23.585 "write": true, 00:08:23.585 "unmap": true, 00:08:23.585 "flush": true, 00:08:23.585 "reset": true, 00:08:23.585 "nvme_admin": false, 00:08:23.585 "nvme_io": false, 00:08:23.585 "nvme_io_md": false, 00:08:23.585 "write_zeroes": true, 00:08:23.585 "zcopy": true, 00:08:23.585 "get_zone_info": false, 00:08:23.585 "zone_management": false, 00:08:23.585 
"zone_append": false, 00:08:23.585 "compare": false, 00:08:23.585 "compare_and_write": false, 00:08:23.585 "abort": true, 00:08:23.585 "seek_hole": false, 00:08:23.585 "seek_data": false, 00:08:23.585 "copy": true, 00:08:23.585 "nvme_iov_md": false 00:08:23.585 }, 00:08:23.585 "memory_domains": [ 00:08:23.585 { 00:08:23.585 "dma_device_id": "system", 00:08:23.585 "dma_device_type": 1 00:08:23.585 }, 00:08:23.585 { 00:08:23.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.585 "dma_device_type": 2 00:08:23.585 } 00:08:23.585 ], 00:08:23.585 "driver_specific": {} 00:08:23.585 } 00:08:23.585 ]' 00:08:23.585 11:51:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:08:23.845 11:51:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:23.845 11:51:57 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:23.845 11:51:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.845 11:51:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:23.845 11:51:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.845 11:51:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:23.845 11:51:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.845 11:51:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:23.845 11:51:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.845 11:51:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:23.845 11:51:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:08:23.845 11:51:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:23.845 00:08:23.845 real 0m0.142s 00:08:23.845 user 0m0.089s 00:08:23.845 sys 0m0.016s 00:08:23.845 11:51:57 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.845 11:51:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:23.845 ************************************ 
00:08:23.845 END TEST rpc_plugins 00:08:23.845 ************************************ 00:08:23.845 11:51:57 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:08:23.845 11:51:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:23.845 11:51:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.845 11:51:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.845 ************************************ 00:08:23.845 START TEST rpc_trace_cmd_test 00:08:23.845 ************************************ 00:08:23.845 11:51:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:08:23.845 11:51:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:08:23.845 11:51:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:23.845 11:51:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.845 11:51:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.845 11:51:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.845 11:51:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:08:23.845 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid4084970", 00:08:23.845 "tpoint_group_mask": "0x8", 00:08:23.845 "iscsi_conn": { 00:08:23.845 "mask": "0x2", 00:08:23.845 "tpoint_mask": "0x0" 00:08:23.845 }, 00:08:23.845 "scsi": { 00:08:23.845 "mask": "0x4", 00:08:23.845 "tpoint_mask": "0x0" 00:08:23.845 }, 00:08:23.845 "bdev": { 00:08:23.845 "mask": "0x8", 00:08:23.845 "tpoint_mask": "0xffffffffffffffff" 00:08:23.845 }, 00:08:23.845 "nvmf_rdma": { 00:08:23.845 "mask": "0x10", 00:08:23.845 "tpoint_mask": "0x0" 00:08:23.845 }, 00:08:23.845 "nvmf_tcp": { 00:08:23.845 "mask": "0x20", 00:08:23.845 "tpoint_mask": "0x0" 00:08:23.845 }, 00:08:23.845 "ftl": { 00:08:23.845 "mask": "0x40", 00:08:23.845 "tpoint_mask": "0x0" 00:08:23.845 }, 00:08:23.845 "blobfs": { 00:08:23.845 "mask": "0x80", 00:08:23.845 
"tpoint_mask": "0x0" 00:08:23.845 }, 00:08:23.845 "dsa": { 00:08:23.845 "mask": "0x200", 00:08:23.845 "tpoint_mask": "0x0" 00:08:23.845 }, 00:08:23.845 "thread": { 00:08:23.845 "mask": "0x400", 00:08:23.845 "tpoint_mask": "0x0" 00:08:23.845 }, 00:08:23.845 "nvme_pcie": { 00:08:23.845 "mask": "0x800", 00:08:23.845 "tpoint_mask": "0x0" 00:08:23.845 }, 00:08:23.845 "iaa": { 00:08:23.845 "mask": "0x1000", 00:08:23.845 "tpoint_mask": "0x0" 00:08:23.845 }, 00:08:23.845 "nvme_tcp": { 00:08:23.845 "mask": "0x2000", 00:08:23.845 "tpoint_mask": "0x0" 00:08:23.845 }, 00:08:23.845 "bdev_nvme": { 00:08:23.845 "mask": "0x4000", 00:08:23.845 "tpoint_mask": "0x0" 00:08:23.845 }, 00:08:23.845 "sock": { 00:08:23.845 "mask": "0x8000", 00:08:23.845 "tpoint_mask": "0x0" 00:08:23.845 }, 00:08:23.845 "blob": { 00:08:23.845 "mask": "0x10000", 00:08:23.845 "tpoint_mask": "0x0" 00:08:23.845 }, 00:08:23.845 "bdev_raid": { 00:08:23.845 "mask": "0x20000", 00:08:23.845 "tpoint_mask": "0x0" 00:08:23.845 }, 00:08:23.845 "scheduler": { 00:08:23.845 "mask": "0x40000", 00:08:23.845 "tpoint_mask": "0x0" 00:08:23.845 } 00:08:23.845 }' 00:08:23.845 11:51:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:08:23.845 11:51:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:08:23.845 11:51:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:24.104 11:51:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:24.104 11:51:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:24.104 11:51:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:24.104 11:51:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:24.104 11:51:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:24.104 11:51:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:24.104 11:51:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:08:24.104 00:08:24.104 real 0m0.222s 00:08:24.104 user 0m0.189s 00:08:24.104 sys 0m0.027s 00:08:24.104 11:51:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:24.104 11:51:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.104 ************************************ 00:08:24.104 END TEST rpc_trace_cmd_test 00:08:24.104 ************************************ 00:08:24.104 11:51:58 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:24.104 11:51:58 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:24.104 11:51:58 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:24.104 11:51:58 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:24.104 11:51:58 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:24.104 11:51:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.104 ************************************ 00:08:24.104 START TEST rpc_daemon_integrity 00:08:24.104 ************************************ 00:08:24.104 11:51:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:08:24.104 11:51:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:24.104 11:51:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.104 11:51:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:24.104 11:51:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.104 11:51:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:24.104 11:51:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:24.104 11:51:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:24.104 11:51:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:24.104 11:51:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.104 11:51:58 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:08:24.363 11:51:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.363 11:51:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:08:24.363 11:51:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:24.363 11:51:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.363 11:51:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:24.363 11:51:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.363 11:51:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:24.363 { 00:08:24.363 "name": "Malloc2", 00:08:24.363 "aliases": [ 00:08:24.363 "c9eabbf8-9916-4125-b6a3-aaad3343828f" 00:08:24.363 ], 00:08:24.363 "product_name": "Malloc disk", 00:08:24.363 "block_size": 512, 00:08:24.363 "num_blocks": 16384, 00:08:24.363 "uuid": "c9eabbf8-9916-4125-b6a3-aaad3343828f", 00:08:24.363 "assigned_rate_limits": { 00:08:24.363 "rw_ios_per_sec": 0, 00:08:24.363 "rw_mbytes_per_sec": 0, 00:08:24.363 "r_mbytes_per_sec": 0, 00:08:24.363 "w_mbytes_per_sec": 0 00:08:24.363 }, 00:08:24.363 "claimed": false, 00:08:24.363 "zoned": false, 00:08:24.363 "supported_io_types": { 00:08:24.363 "read": true, 00:08:24.363 "write": true, 00:08:24.363 "unmap": true, 00:08:24.363 "flush": true, 00:08:24.363 "reset": true, 00:08:24.363 "nvme_admin": false, 00:08:24.363 "nvme_io": false, 00:08:24.363 "nvme_io_md": false, 00:08:24.363 "write_zeroes": true, 00:08:24.363 "zcopy": true, 00:08:24.363 "get_zone_info": false, 00:08:24.363 "zone_management": false, 00:08:24.363 "zone_append": false, 00:08:24.363 "compare": false, 00:08:24.363 "compare_and_write": false, 00:08:24.363 "abort": true, 00:08:24.363 "seek_hole": false, 00:08:24.363 "seek_data": false, 00:08:24.363 "copy": true, 00:08:24.363 "nvme_iov_md": false 00:08:24.363 }, 00:08:24.363 "memory_domains": [ 00:08:24.363 { 
00:08:24.363 "dma_device_id": "system", 00:08:24.363 "dma_device_type": 1 00:08:24.363 }, 00:08:24.363 { 00:08:24.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.363 "dma_device_type": 2 00:08:24.363 } 00:08:24.363 ], 00:08:24.363 "driver_specific": {} 00:08:24.363 } 00:08:24.363 ]' 00:08:24.363 11:51:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:24.363 11:51:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:24.363 11:51:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:24.363 11:51:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.363 11:51:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:24.363 [2024-12-05 11:51:58.374168] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:24.363 [2024-12-05 11:51:58.374197] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:24.363 [2024-12-05 11:51:58.374210] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c624e0 00:08:24.363 [2024-12-05 11:51:58.374216] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:24.363 [2024-12-05 11:51:58.375188] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:24.363 [2024-12-05 11:51:58.375206] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:24.363 Passthru0 00:08:24.363 11:51:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.363 11:51:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:24.363 11:51:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.363 11:51:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:24.363 11:51:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:24.363 11:51:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:24.363 { 00:08:24.363 "name": "Malloc2", 00:08:24.363 "aliases": [ 00:08:24.363 "c9eabbf8-9916-4125-b6a3-aaad3343828f" 00:08:24.363 ], 00:08:24.363 "product_name": "Malloc disk", 00:08:24.363 "block_size": 512, 00:08:24.363 "num_blocks": 16384, 00:08:24.363 "uuid": "c9eabbf8-9916-4125-b6a3-aaad3343828f", 00:08:24.363 "assigned_rate_limits": { 00:08:24.363 "rw_ios_per_sec": 0, 00:08:24.363 "rw_mbytes_per_sec": 0, 00:08:24.363 "r_mbytes_per_sec": 0, 00:08:24.363 "w_mbytes_per_sec": 0 00:08:24.363 }, 00:08:24.363 "claimed": true, 00:08:24.363 "claim_type": "exclusive_write", 00:08:24.363 "zoned": false, 00:08:24.363 "supported_io_types": { 00:08:24.363 "read": true, 00:08:24.363 "write": true, 00:08:24.363 "unmap": true, 00:08:24.363 "flush": true, 00:08:24.364 "reset": true, 00:08:24.364 "nvme_admin": false, 00:08:24.364 "nvme_io": false, 00:08:24.364 "nvme_io_md": false, 00:08:24.364 "write_zeroes": true, 00:08:24.364 "zcopy": true, 00:08:24.364 "get_zone_info": false, 00:08:24.364 "zone_management": false, 00:08:24.364 "zone_append": false, 00:08:24.364 "compare": false, 00:08:24.364 "compare_and_write": false, 00:08:24.364 "abort": true, 00:08:24.364 "seek_hole": false, 00:08:24.364 "seek_data": false, 00:08:24.364 "copy": true, 00:08:24.364 "nvme_iov_md": false 00:08:24.364 }, 00:08:24.364 "memory_domains": [ 00:08:24.364 { 00:08:24.364 "dma_device_id": "system", 00:08:24.364 "dma_device_type": 1 00:08:24.364 }, 00:08:24.364 { 00:08:24.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.364 "dma_device_type": 2 00:08:24.364 } 00:08:24.364 ], 00:08:24.364 "driver_specific": {} 00:08:24.364 }, 00:08:24.364 { 00:08:24.364 "name": "Passthru0", 00:08:24.364 "aliases": [ 00:08:24.364 "a6f3b3dd-7a34-5050-8850-729fbf151673" 00:08:24.364 ], 00:08:24.364 "product_name": "passthru", 00:08:24.364 "block_size": 512, 00:08:24.364 "num_blocks": 16384, 00:08:24.364 "uuid": 
"a6f3b3dd-7a34-5050-8850-729fbf151673", 00:08:24.364 "assigned_rate_limits": { 00:08:24.364 "rw_ios_per_sec": 0, 00:08:24.364 "rw_mbytes_per_sec": 0, 00:08:24.364 "r_mbytes_per_sec": 0, 00:08:24.364 "w_mbytes_per_sec": 0 00:08:24.364 }, 00:08:24.364 "claimed": false, 00:08:24.364 "zoned": false, 00:08:24.364 "supported_io_types": { 00:08:24.364 "read": true, 00:08:24.364 "write": true, 00:08:24.364 "unmap": true, 00:08:24.364 "flush": true, 00:08:24.364 "reset": true, 00:08:24.364 "nvme_admin": false, 00:08:24.364 "nvme_io": false, 00:08:24.364 "nvme_io_md": false, 00:08:24.364 "write_zeroes": true, 00:08:24.364 "zcopy": true, 00:08:24.364 "get_zone_info": false, 00:08:24.364 "zone_management": false, 00:08:24.364 "zone_append": false, 00:08:24.364 "compare": false, 00:08:24.364 "compare_and_write": false, 00:08:24.364 "abort": true, 00:08:24.364 "seek_hole": false, 00:08:24.364 "seek_data": false, 00:08:24.364 "copy": true, 00:08:24.364 "nvme_iov_md": false 00:08:24.364 }, 00:08:24.364 "memory_domains": [ 00:08:24.364 { 00:08:24.364 "dma_device_id": "system", 00:08:24.364 "dma_device_type": 1 00:08:24.364 }, 00:08:24.364 { 00:08:24.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.364 "dma_device_type": 2 00:08:24.364 } 00:08:24.364 ], 00:08:24.364 "driver_specific": { 00:08:24.364 "passthru": { 00:08:24.364 "name": "Passthru0", 00:08:24.364 "base_bdev_name": "Malloc2" 00:08:24.364 } 00:08:24.364 } 00:08:24.364 } 00:08:24.364 ]' 00:08:24.364 11:51:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:24.364 11:51:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:24.364 11:51:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:24.364 11:51:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.364 11:51:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:24.364 11:51:58 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.364 11:51:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:24.364 11:51:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.364 11:51:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:24.364 11:51:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.364 11:51:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:24.364 11:51:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.364 11:51:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:24.364 11:51:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.364 11:51:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:24.364 11:51:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:24.364 11:51:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:24.364 00:08:24.364 real 0m0.272s 00:08:24.364 user 0m0.182s 00:08:24.364 sys 0m0.030s 00:08:24.364 11:51:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:24.364 11:51:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:24.364 ************************************ 00:08:24.364 END TEST rpc_daemon_integrity 00:08:24.364 ************************************ 00:08:24.364 11:51:58 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:24.364 11:51:58 rpc -- rpc/rpc.sh@84 -- # killprocess 4084970 00:08:24.364 11:51:58 rpc -- common/autotest_common.sh@954 -- # '[' -z 4084970 ']' 00:08:24.364 11:51:58 rpc -- common/autotest_common.sh@958 -- # kill -0 4084970 00:08:24.364 11:51:58 rpc -- common/autotest_common.sh@959 -- # uname 00:08:24.364 11:51:58 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:24.364 11:51:58 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4084970 00:08:24.623 11:51:58 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:24.623 11:51:58 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:24.623 11:51:58 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4084970' 00:08:24.623 killing process with pid 4084970 00:08:24.623 11:51:58 rpc -- common/autotest_common.sh@973 -- # kill 4084970 00:08:24.623 11:51:58 rpc -- common/autotest_common.sh@978 -- # wait 4084970 00:08:24.882 00:08:24.882 real 0m2.109s 00:08:24.882 user 0m2.703s 00:08:24.882 sys 0m0.695s 00:08:24.882 11:51:58 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:24.882 11:51:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.882 ************************************ 00:08:24.882 END TEST rpc 00:08:24.882 ************************************ 00:08:24.882 11:51:58 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:08:24.882 11:51:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:24.882 11:51:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:24.882 11:51:58 -- common/autotest_common.sh@10 -- # set +x 00:08:24.882 ************************************ 00:08:24.882 START TEST skip_rpc 00:08:24.882 ************************************ 00:08:24.882 11:51:58 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:08:24.882 * Looking for test storage... 
00:08:24.882 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:24.882 11:51:59 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:24.882 11:51:59 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:08:24.882 11:51:59 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:25.141 11:51:59 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:25.141 11:51:59 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:25.141 11:51:59 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:25.141 11:51:59 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:25.141 11:51:59 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:25.141 11:51:59 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:25.141 11:51:59 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:25.141 11:51:59 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:25.141 11:51:59 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:25.141 11:51:59 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:25.141 11:51:59 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:25.141 11:51:59 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:25.141 11:51:59 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:25.142 11:51:59 skip_rpc -- scripts/common.sh@345 -- # : 1 00:08:25.142 11:51:59 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:25.142 11:51:59 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:25.142 11:51:59 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:25.142 11:51:59 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:08:25.142 11:51:59 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:25.142 11:51:59 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:08:25.142 11:51:59 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:25.142 11:51:59 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:25.142 11:51:59 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:08:25.142 11:51:59 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:25.142 11:51:59 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:08:25.142 11:51:59 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:25.142 11:51:59 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:25.142 11:51:59 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:25.142 11:51:59 skip_rpc -- scripts/common.sh@368 -- # return 0 00:08:25.142 11:51:59 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:25.142 11:51:59 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:25.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.142 --rc genhtml_branch_coverage=1 00:08:25.142 --rc genhtml_function_coverage=1 00:08:25.142 --rc genhtml_legend=1 00:08:25.142 --rc geninfo_all_blocks=1 00:08:25.142 --rc geninfo_unexecuted_blocks=1 00:08:25.142 00:08:25.142 ' 00:08:25.142 11:51:59 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:25.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.142 --rc genhtml_branch_coverage=1 00:08:25.142 --rc genhtml_function_coverage=1 00:08:25.142 --rc genhtml_legend=1 00:08:25.142 --rc geninfo_all_blocks=1 00:08:25.142 --rc geninfo_unexecuted_blocks=1 00:08:25.142 00:08:25.142 ' 00:08:25.142 11:51:59 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:08:25.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.142 --rc genhtml_branch_coverage=1 00:08:25.142 --rc genhtml_function_coverage=1 00:08:25.142 --rc genhtml_legend=1 00:08:25.142 --rc geninfo_all_blocks=1 00:08:25.142 --rc geninfo_unexecuted_blocks=1 00:08:25.142 00:08:25.142 ' 00:08:25.142 11:51:59 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:25.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.142 --rc genhtml_branch_coverage=1 00:08:25.142 --rc genhtml_function_coverage=1 00:08:25.142 --rc genhtml_legend=1 00:08:25.142 --rc geninfo_all_blocks=1 00:08:25.142 --rc geninfo_unexecuted_blocks=1 00:08:25.142 00:08:25.142 ' 00:08:25.142 11:51:59 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:25.142 11:51:59 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:08:25.142 11:51:59 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:08:25.142 11:51:59 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:25.142 11:51:59 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:25.142 11:51:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:25.142 ************************************ 00:08:25.142 START TEST skip_rpc 00:08:25.142 ************************************ 00:08:25.142 11:51:59 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:08:25.142 11:51:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=4085541 00:08:25.142 11:51:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:25.142 11:51:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:08:25.142 11:51:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:08:25.142 [2024-12-05 11:51:59.229132] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:08:25.142 [2024-12-05 11:51:59.229171] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4085541 ] 00:08:25.142 [2024-12-05 11:51:59.302406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.401 [2024-12-05 11:51:59.342844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.663 11:52:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:08:30.663 11:52:04 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:08:30.663 11:52:04 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:08:30.663 11:52:04 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:30.663 11:52:04 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:30.663 11:52:04 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:30.663 11:52:04 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:30.663 11:52:04 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:08:30.663 11:52:04 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.663 11:52:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.663 11:52:04 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:30.663 11:52:04 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:08:30.663 11:52:04 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:30.663 11:52:04 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:30.663 11:52:04 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:30.663 11:52:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:08:30.663 11:52:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 4085541 00:08:30.663 11:52:04 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 4085541 ']' 00:08:30.663 11:52:04 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 4085541 00:08:30.663 11:52:04 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:08:30.663 11:52:04 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:30.663 11:52:04 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4085541 00:08:30.664 11:52:04 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:30.664 11:52:04 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:30.664 11:52:04 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4085541' 00:08:30.664 killing process with pid 4085541 00:08:30.664 11:52:04 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 4085541 00:08:30.664 11:52:04 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 4085541 00:08:30.664 00:08:30.664 real 0m5.365s 00:08:30.664 user 0m5.115s 00:08:30.664 sys 0m0.290s 00:08:30.664 11:52:04 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.664 11:52:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.664 ************************************ 00:08:30.664 END TEST skip_rpc 00:08:30.664 ************************************ 00:08:30.664 11:52:04 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:08:30.664 11:52:04 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:30.664 11:52:04 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.664 11:52:04 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.664 ************************************ 00:08:30.664 START TEST skip_rpc_with_json 00:08:30.664 ************************************ 00:08:30.664 11:52:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:08:30.664 11:52:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:08:30.664 11:52:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=4086450 00:08:30.664 11:52:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:30.664 11:52:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:30.664 11:52:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 4086450 00:08:30.664 11:52:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 4086450 ']' 00:08:30.664 11:52:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.664 11:52:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:30.664 11:52:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.664 11:52:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:30.664 11:52:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:30.664 [2024-12-05 11:52:04.660132] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:08:30.664 [2024-12-05 11:52:04.660172] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4086450 ] 00:08:30.664 [2024-12-05 11:52:04.735110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.664 [2024-12-05 11:52:04.778651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.923 11:52:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:30.923 11:52:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:08:30.923 11:52:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:08:30.923 11:52:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.923 11:52:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:30.923 [2024-12-05 11:52:04.987700] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:08:30.923 request: 00:08:30.923 { 00:08:30.923 "trtype": "tcp", 00:08:30.923 "method": "nvmf_get_transports", 00:08:30.923 "req_id": 1 00:08:30.923 } 00:08:30.923 Got JSON-RPC error response 00:08:30.923 response: 00:08:30.923 { 00:08:30.923 "code": -19, 00:08:30.923 "message": "No such device" 00:08:30.923 } 00:08:30.923 11:52:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:30.923 11:52:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:08:30.923 11:52:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.923 11:52:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:30.923 [2024-12-05 11:52:04.999803] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:30.923 11:52:05 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.923 11:52:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:08:30.923 11:52:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.923 11:52:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:31.181 11:52:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.181 11:52:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:31.181 { 00:08:31.181 "subsystems": [ 00:08:31.181 { 00:08:31.181 "subsystem": "fsdev", 00:08:31.181 "config": [ 00:08:31.181 { 00:08:31.181 "method": "fsdev_set_opts", 00:08:31.181 "params": { 00:08:31.181 "fsdev_io_pool_size": 65535, 00:08:31.181 "fsdev_io_cache_size": 256 00:08:31.181 } 00:08:31.181 } 00:08:31.181 ] 00:08:31.181 }, 00:08:31.181 { 00:08:31.181 "subsystem": "vfio_user_target", 00:08:31.181 "config": null 00:08:31.181 }, 00:08:31.181 { 00:08:31.181 "subsystem": "keyring", 00:08:31.181 "config": [] 00:08:31.181 }, 00:08:31.181 { 00:08:31.181 "subsystem": "iobuf", 00:08:31.181 "config": [ 00:08:31.181 { 00:08:31.181 "method": "iobuf_set_options", 00:08:31.181 "params": { 00:08:31.181 "small_pool_count": 8192, 00:08:31.181 "large_pool_count": 1024, 00:08:31.181 "small_bufsize": 8192, 00:08:31.181 "large_bufsize": 135168, 00:08:31.181 "enable_numa": false 00:08:31.181 } 00:08:31.181 } 00:08:31.181 ] 00:08:31.181 }, 00:08:31.181 { 00:08:31.181 "subsystem": "sock", 00:08:31.181 "config": [ 00:08:31.181 { 00:08:31.181 "method": "sock_set_default_impl", 00:08:31.181 "params": { 00:08:31.181 "impl_name": "posix" 00:08:31.181 } 00:08:31.181 }, 00:08:31.181 { 00:08:31.181 "method": "sock_impl_set_options", 00:08:31.181 "params": { 00:08:31.181 "impl_name": "ssl", 00:08:31.181 "recv_buf_size": 4096, 00:08:31.181 "send_buf_size": 4096, 
00:08:31.181 "enable_recv_pipe": true, 00:08:31.181 "enable_quickack": false, 00:08:31.181 "enable_placement_id": 0, 00:08:31.181 "enable_zerocopy_send_server": true, 00:08:31.181 "enable_zerocopy_send_client": false, 00:08:31.181 "zerocopy_threshold": 0, 00:08:31.181 "tls_version": 0, 00:08:31.181 "enable_ktls": false 00:08:31.181 } 00:08:31.181 }, 00:08:31.181 { 00:08:31.181 "method": "sock_impl_set_options", 00:08:31.181 "params": { 00:08:31.181 "impl_name": "posix", 00:08:31.181 "recv_buf_size": 2097152, 00:08:31.181 "send_buf_size": 2097152, 00:08:31.181 "enable_recv_pipe": true, 00:08:31.181 "enable_quickack": false, 00:08:31.181 "enable_placement_id": 0, 00:08:31.181 "enable_zerocopy_send_server": true, 00:08:31.181 "enable_zerocopy_send_client": false, 00:08:31.181 "zerocopy_threshold": 0, 00:08:31.181 "tls_version": 0, 00:08:31.181 "enable_ktls": false 00:08:31.181 } 00:08:31.181 } 00:08:31.181 ] 00:08:31.181 }, 00:08:31.181 { 00:08:31.181 "subsystem": "vmd", 00:08:31.181 "config": [] 00:08:31.181 }, 00:08:31.181 { 00:08:31.181 "subsystem": "accel", 00:08:31.181 "config": [ 00:08:31.181 { 00:08:31.181 "method": "accel_set_options", 00:08:31.181 "params": { 00:08:31.181 "small_cache_size": 128, 00:08:31.181 "large_cache_size": 16, 00:08:31.181 "task_count": 2048, 00:08:31.181 "sequence_count": 2048, 00:08:31.181 "buf_count": 2048 00:08:31.181 } 00:08:31.181 } 00:08:31.181 ] 00:08:31.181 }, 00:08:31.181 { 00:08:31.181 "subsystem": "bdev", 00:08:31.181 "config": [ 00:08:31.181 { 00:08:31.181 "method": "bdev_set_options", 00:08:31.181 "params": { 00:08:31.181 "bdev_io_pool_size": 65535, 00:08:31.181 "bdev_io_cache_size": 256, 00:08:31.181 "bdev_auto_examine": true, 00:08:31.181 "iobuf_small_cache_size": 128, 00:08:31.181 "iobuf_large_cache_size": 16 00:08:31.181 } 00:08:31.181 }, 00:08:31.181 { 00:08:31.181 "method": "bdev_raid_set_options", 00:08:31.181 "params": { 00:08:31.181 "process_window_size_kb": 1024, 00:08:31.181 "process_max_bandwidth_mb_sec": 0 
00:08:31.181 } 00:08:31.181 }, 00:08:31.181 { 00:08:31.181 "method": "bdev_iscsi_set_options", 00:08:31.181 "params": { 00:08:31.181 "timeout_sec": 30 00:08:31.181 } 00:08:31.181 }, 00:08:31.181 { 00:08:31.181 "method": "bdev_nvme_set_options", 00:08:31.181 "params": { 00:08:31.181 "action_on_timeout": "none", 00:08:31.181 "timeout_us": 0, 00:08:31.181 "timeout_admin_us": 0, 00:08:31.181 "keep_alive_timeout_ms": 10000, 00:08:31.181 "arbitration_burst": 0, 00:08:31.181 "low_priority_weight": 0, 00:08:31.181 "medium_priority_weight": 0, 00:08:31.181 "high_priority_weight": 0, 00:08:31.181 "nvme_adminq_poll_period_us": 10000, 00:08:31.181 "nvme_ioq_poll_period_us": 0, 00:08:31.181 "io_queue_requests": 0, 00:08:31.181 "delay_cmd_submit": true, 00:08:31.181 "transport_retry_count": 4, 00:08:31.181 "bdev_retry_count": 3, 00:08:31.181 "transport_ack_timeout": 0, 00:08:31.181 "ctrlr_loss_timeout_sec": 0, 00:08:31.181 "reconnect_delay_sec": 0, 00:08:31.181 "fast_io_fail_timeout_sec": 0, 00:08:31.181 "disable_auto_failback": false, 00:08:31.181 "generate_uuids": false, 00:08:31.181 "transport_tos": 0, 00:08:31.181 "nvme_error_stat": false, 00:08:31.181 "rdma_srq_size": 0, 00:08:31.181 "io_path_stat": false, 00:08:31.181 "allow_accel_sequence": false, 00:08:31.181 "rdma_max_cq_size": 0, 00:08:31.181 "rdma_cm_event_timeout_ms": 0, 00:08:31.181 "dhchap_digests": [ 00:08:31.181 "sha256", 00:08:31.181 "sha384", 00:08:31.181 "sha512" 00:08:31.181 ], 00:08:31.181 "dhchap_dhgroups": [ 00:08:31.181 "null", 00:08:31.181 "ffdhe2048", 00:08:31.181 "ffdhe3072", 00:08:31.181 "ffdhe4096", 00:08:31.181 "ffdhe6144", 00:08:31.181 "ffdhe8192" 00:08:31.181 ] 00:08:31.181 } 00:08:31.181 }, 00:08:31.181 { 00:08:31.181 "method": "bdev_nvme_set_hotplug", 00:08:31.182 "params": { 00:08:31.182 "period_us": 100000, 00:08:31.182 "enable": false 00:08:31.182 } 00:08:31.182 }, 00:08:31.182 { 00:08:31.182 "method": "bdev_wait_for_examine" 00:08:31.182 } 00:08:31.182 ] 00:08:31.182 }, 00:08:31.182 { 
00:08:31.182 "subsystem": "scsi", 00:08:31.182 "config": null 00:08:31.182 }, 00:08:31.182 { 00:08:31.182 "subsystem": "scheduler", 00:08:31.182 "config": [ 00:08:31.182 { 00:08:31.182 "method": "framework_set_scheduler", 00:08:31.182 "params": { 00:08:31.182 "name": "static" 00:08:31.182 } 00:08:31.182 } 00:08:31.182 ] 00:08:31.182 }, 00:08:31.182 { 00:08:31.182 "subsystem": "vhost_scsi", 00:08:31.182 "config": [] 00:08:31.182 }, 00:08:31.182 { 00:08:31.182 "subsystem": "vhost_blk", 00:08:31.182 "config": [] 00:08:31.182 }, 00:08:31.182 { 00:08:31.182 "subsystem": "ublk", 00:08:31.182 "config": [] 00:08:31.182 }, 00:08:31.182 { 00:08:31.182 "subsystem": "nbd", 00:08:31.182 "config": [] 00:08:31.182 }, 00:08:31.182 { 00:08:31.182 "subsystem": "nvmf", 00:08:31.182 "config": [ 00:08:31.182 { 00:08:31.182 "method": "nvmf_set_config", 00:08:31.182 "params": { 00:08:31.182 "discovery_filter": "match_any", 00:08:31.182 "admin_cmd_passthru": { 00:08:31.182 "identify_ctrlr": false 00:08:31.182 }, 00:08:31.182 "dhchap_digests": [ 00:08:31.182 "sha256", 00:08:31.182 "sha384", 00:08:31.182 "sha512" 00:08:31.182 ], 00:08:31.182 "dhchap_dhgroups": [ 00:08:31.182 "null", 00:08:31.182 "ffdhe2048", 00:08:31.182 "ffdhe3072", 00:08:31.182 "ffdhe4096", 00:08:31.182 "ffdhe6144", 00:08:31.182 "ffdhe8192" 00:08:31.182 ] 00:08:31.182 } 00:08:31.182 }, 00:08:31.182 { 00:08:31.182 "method": "nvmf_set_max_subsystems", 00:08:31.182 "params": { 00:08:31.182 "max_subsystems": 1024 00:08:31.182 } 00:08:31.182 }, 00:08:31.182 { 00:08:31.182 "method": "nvmf_set_crdt", 00:08:31.182 "params": { 00:08:31.182 "crdt1": 0, 00:08:31.182 "crdt2": 0, 00:08:31.182 "crdt3": 0 00:08:31.182 } 00:08:31.182 }, 00:08:31.182 { 00:08:31.182 "method": "nvmf_create_transport", 00:08:31.182 "params": { 00:08:31.182 "trtype": "TCP", 00:08:31.182 "max_queue_depth": 128, 00:08:31.182 "max_io_qpairs_per_ctrlr": 127, 00:08:31.182 "in_capsule_data_size": 4096, 00:08:31.182 "max_io_size": 131072, 00:08:31.182 
"io_unit_size": 131072, 00:08:31.182 "max_aq_depth": 128, 00:08:31.182 "num_shared_buffers": 511, 00:08:31.182 "buf_cache_size": 4294967295, 00:08:31.182 "dif_insert_or_strip": false, 00:08:31.182 "zcopy": false, 00:08:31.182 "c2h_success": true, 00:08:31.182 "sock_priority": 0, 00:08:31.182 "abort_timeout_sec": 1, 00:08:31.182 "ack_timeout": 0, 00:08:31.182 "data_wr_pool_size": 0 00:08:31.182 } 00:08:31.182 } 00:08:31.182 ] 00:08:31.182 }, 00:08:31.182 { 00:08:31.182 "subsystem": "iscsi", 00:08:31.182 "config": [ 00:08:31.182 { 00:08:31.182 "method": "iscsi_set_options", 00:08:31.182 "params": { 00:08:31.182 "node_base": "iqn.2016-06.io.spdk", 00:08:31.182 "max_sessions": 128, 00:08:31.182 "max_connections_per_session": 2, 00:08:31.182 "max_queue_depth": 64, 00:08:31.182 "default_time2wait": 2, 00:08:31.182 "default_time2retain": 20, 00:08:31.182 "first_burst_length": 8192, 00:08:31.182 "immediate_data": true, 00:08:31.182 "allow_duplicated_isid": false, 00:08:31.182 "error_recovery_level": 0, 00:08:31.182 "nop_timeout": 60, 00:08:31.182 "nop_in_interval": 30, 00:08:31.182 "disable_chap": false, 00:08:31.182 "require_chap": false, 00:08:31.182 "mutual_chap": false, 00:08:31.182 "chap_group": 0, 00:08:31.182 "max_large_datain_per_connection": 64, 00:08:31.182 "max_r2t_per_connection": 4, 00:08:31.182 "pdu_pool_size": 36864, 00:08:31.182 "immediate_data_pool_size": 16384, 00:08:31.182 "data_out_pool_size": 2048 00:08:31.182 } 00:08:31.182 } 00:08:31.182 ] 00:08:31.182 } 00:08:31.182 ] 00:08:31.182 } 00:08:31.182 11:52:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:31.182 11:52:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 4086450 00:08:31.182 11:52:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 4086450 ']' 00:08:31.182 11:52:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 4086450 00:08:31.182 11:52:05 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:08:31.182 11:52:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:31.182 11:52:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4086450 00:08:31.182 11:52:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:31.182 11:52:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:31.182 11:52:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4086450' 00:08:31.182 killing process with pid 4086450 00:08:31.182 11:52:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 4086450 00:08:31.182 11:52:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 4086450 00:08:31.440 11:52:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=4086568 00:08:31.440 11:52:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:08:31.440 11:52:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:36.696 11:52:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 4086568 00:08:36.696 11:52:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 4086568 ']' 00:08:36.696 11:52:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 4086568 00:08:36.696 11:52:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:08:36.696 11:52:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:36.696 11:52:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4086568 00:08:36.696 11:52:10 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:36.696 11:52:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:36.696 11:52:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4086568' 00:08:36.696 killing process with pid 4086568 00:08:36.696 11:52:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 4086568 00:08:36.696 11:52:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 4086568 00:08:36.696 11:52:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:08:36.696 11:52:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:08:36.696 00:08:36.696 real 0m6.279s 00:08:36.696 user 0m5.978s 00:08:36.696 sys 0m0.591s 00:08:36.696 11:52:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.696 11:52:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:36.696 ************************************ 00:08:36.696 END TEST skip_rpc_with_json 00:08:36.696 ************************************ 00:08:36.955 11:52:10 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:08:36.955 11:52:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:36.955 11:52:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.955 11:52:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.955 ************************************ 00:08:36.955 START TEST skip_rpc_with_delay 00:08:36.955 ************************************ 00:08:36.955 11:52:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:08:36.955 11:52:10 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:36.955 11:52:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:08:36.955 11:52:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:36.955 11:52:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:36.955 11:52:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:36.955 11:52:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:36.955 11:52:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:36.955 11:52:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:36.955 11:52:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:36.955 11:52:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:36.955 11:52:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:08:36.955 11:52:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:36.955 [2024-12-05 11:52:11.018847] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:08:36.955 11:52:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:08:36.955 11:52:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:36.955 11:52:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:36.955 11:52:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:36.955 00:08:36.955 real 0m0.071s 00:08:36.955 user 0m0.049s 00:08:36.955 sys 0m0.022s 00:08:36.955 11:52:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.955 11:52:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:08:36.955 ************************************ 00:08:36.955 END TEST skip_rpc_with_delay 00:08:36.955 ************************************ 00:08:36.955 11:52:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:08:36.955 11:52:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:08:36.955 11:52:11 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:08:36.955 11:52:11 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:36.955 11:52:11 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.955 11:52:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.955 ************************************ 00:08:36.955 START TEST exit_on_failed_rpc_init 00:08:36.955 ************************************ 00:08:36.955 11:52:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:08:36.955 11:52:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=4087542 00:08:36.955 11:52:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 4087542 00:08:36.955 11:52:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:08:36.955 11:52:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 4087542 ']' 00:08:36.955 11:52:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.955 11:52:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:36.955 11:52:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.955 11:52:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:36.955 11:52:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:37.214 [2024-12-05 11:52:11.161643] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:08:37.214 [2024-12-05 11:52:11.161689] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4087542 ] 00:08:37.214 [2024-12-05 11:52:11.235124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.214 [2024-12-05 11:52:11.276980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.471 11:52:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:37.471 11:52:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:08:37.471 11:52:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:37.471 11:52:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:37.471 
11:52:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:08:37.471 11:52:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:37.471 11:52:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:37.471 11:52:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:37.471 11:52:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:37.471 11:52:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:37.471 11:52:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:37.471 11:52:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:37.471 11:52:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:37.472 11:52:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:08:37.472 11:52:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:37.472 [2024-12-05 11:52:11.564245] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:08:37.472 [2024-12-05 11:52:11.564290] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4087689 ] 00:08:37.472 [2024-12-05 11:52:11.639120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.731 [2024-12-05 11:52:11.681058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:37.731 [2024-12-05 11:52:11.681110] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:08:37.731 [2024-12-05 11:52:11.681135] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:08:37.731 [2024-12-05 11:52:11.681142] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:37.731 11:52:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:08:37.731 11:52:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:37.731 11:52:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:08:37.731 11:52:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:08:37.731 11:52:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:08:37.731 11:52:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:37.731 11:52:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:37.731 11:52:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 4087542 00:08:37.731 11:52:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 4087542 ']' 00:08:37.731 11:52:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 4087542 00:08:37.731 11:52:11 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:08:37.731 11:52:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:37.731 11:52:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4087542 00:08:37.731 11:52:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:37.731 11:52:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:37.731 11:52:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4087542' 00:08:37.731 killing process with pid 4087542 00:08:37.731 11:52:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 4087542 00:08:37.731 11:52:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 4087542 00:08:37.990 00:08:37.990 real 0m0.974s 00:08:37.990 user 0m1.030s 00:08:37.990 sys 0m0.401s 00:08:37.990 11:52:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.990 11:52:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:37.990 ************************************ 00:08:37.990 END TEST exit_on_failed_rpc_init 00:08:37.990 ************************************ 00:08:37.990 11:52:12 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:37.990 00:08:37.990 real 0m13.163s 00:08:37.990 user 0m12.372s 00:08:37.990 sys 0m1.609s 00:08:37.990 11:52:12 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.990 11:52:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.990 ************************************ 00:08:37.990 END TEST skip_rpc 00:08:37.990 ************************************ 00:08:37.990 11:52:12 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:08:37.990 11:52:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:37.990 11:52:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.990 11:52:12 -- common/autotest_common.sh@10 -- # set +x 00:08:38.249 ************************************ 00:08:38.249 START TEST rpc_client 00:08:38.249 ************************************ 00:08:38.249 11:52:12 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:08:38.249 * Looking for test storage... 00:08:38.249 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:08:38.249 11:52:12 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:38.249 11:52:12 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:08:38.249 11:52:12 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:38.249 11:52:12 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:38.249 11:52:12 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:38.249 11:52:12 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:38.249 11:52:12 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:38.249 11:52:12 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.249 11:52:12 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:08:38.249 11:52:12 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:08:38.249 11:52:12 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:08:38.249 11:52:12 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:08:38.249 11:52:12 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:08:38.249 11:52:12 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:08:38.249 11:52:12 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:38.249 11:52:12 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:08:38.249 11:52:12 rpc_client -- scripts/common.sh@345 -- # : 1 00:08:38.249 11:52:12 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:38.249 11:52:12 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:38.249 11:52:12 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:08:38.249 11:52:12 rpc_client -- scripts/common.sh@353 -- # local d=1 00:08:38.249 11:52:12 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.249 11:52:12 rpc_client -- scripts/common.sh@355 -- # echo 1 00:08:38.249 11:52:12 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:08:38.249 11:52:12 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:08:38.249 11:52:12 rpc_client -- scripts/common.sh@353 -- # local d=2 00:08:38.249 11:52:12 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.249 11:52:12 rpc_client -- scripts/common.sh@355 -- # echo 2 00:08:38.249 11:52:12 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:08:38.249 11:52:12 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:38.249 11:52:12 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:38.249 11:52:12 rpc_client -- scripts/common.sh@368 -- # return 0 00:08:38.249 11:52:12 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.249 11:52:12 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:38.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.249 --rc genhtml_branch_coverage=1 00:08:38.249 --rc genhtml_function_coverage=1 00:08:38.249 --rc genhtml_legend=1 00:08:38.249 --rc geninfo_all_blocks=1 00:08:38.249 --rc geninfo_unexecuted_blocks=1 00:08:38.249 00:08:38.249 ' 00:08:38.249 11:52:12 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:38.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.249 --rc genhtml_branch_coverage=1 
00:08:38.249 --rc genhtml_function_coverage=1 00:08:38.249 --rc genhtml_legend=1 00:08:38.249 --rc geninfo_all_blocks=1 00:08:38.249 --rc geninfo_unexecuted_blocks=1 00:08:38.249 00:08:38.249 ' 00:08:38.249 11:52:12 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:38.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.249 --rc genhtml_branch_coverage=1 00:08:38.249 --rc genhtml_function_coverage=1 00:08:38.249 --rc genhtml_legend=1 00:08:38.249 --rc geninfo_all_blocks=1 00:08:38.249 --rc geninfo_unexecuted_blocks=1 00:08:38.249 00:08:38.249 ' 00:08:38.249 11:52:12 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:38.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.249 --rc genhtml_branch_coverage=1 00:08:38.249 --rc genhtml_function_coverage=1 00:08:38.249 --rc genhtml_legend=1 00:08:38.249 --rc geninfo_all_blocks=1 00:08:38.249 --rc geninfo_unexecuted_blocks=1 00:08:38.249 00:08:38.249 ' 00:08:38.249 11:52:12 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:08:38.249 OK 00:08:38.249 11:52:12 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:38.249 00:08:38.249 real 0m0.200s 00:08:38.249 user 0m0.123s 00:08:38.249 sys 0m0.091s 00:08:38.249 11:52:12 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.249 11:52:12 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:08:38.249 ************************************ 00:08:38.249 END TEST rpc_client 00:08:38.249 ************************************ 00:08:38.249 11:52:12 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:08:38.249 11:52:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:38.249 11:52:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.249 11:52:12 -- common/autotest_common.sh@10 
-- # set +x 00:08:38.509 ************************************ 00:08:38.509 START TEST json_config 00:08:38.509 ************************************ 00:08:38.509 11:52:12 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:08:38.509 11:52:12 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:38.509 11:52:12 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:08:38.509 11:52:12 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:38.509 11:52:12 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:38.509 11:52:12 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:38.509 11:52:12 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:38.509 11:52:12 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:38.509 11:52:12 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.509 11:52:12 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:08:38.509 11:52:12 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:08:38.509 11:52:12 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:08:38.509 11:52:12 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:08:38.509 11:52:12 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:08:38.509 11:52:12 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:08:38.509 11:52:12 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:38.509 11:52:12 json_config -- scripts/common.sh@344 -- # case "$op" in 00:08:38.509 11:52:12 json_config -- scripts/common.sh@345 -- # : 1 00:08:38.509 11:52:12 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:38.509 11:52:12 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:38.509 11:52:12 json_config -- scripts/common.sh@365 -- # decimal 1 00:08:38.509 11:52:12 json_config -- scripts/common.sh@353 -- # local d=1 00:08:38.509 11:52:12 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.509 11:52:12 json_config -- scripts/common.sh@355 -- # echo 1 00:08:38.509 11:52:12 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:08:38.509 11:52:12 json_config -- scripts/common.sh@366 -- # decimal 2 00:08:38.509 11:52:12 json_config -- scripts/common.sh@353 -- # local d=2 00:08:38.509 11:52:12 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.509 11:52:12 json_config -- scripts/common.sh@355 -- # echo 2 00:08:38.509 11:52:12 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:08:38.509 11:52:12 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:38.509 11:52:12 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:38.509 11:52:12 json_config -- scripts/common.sh@368 -- # return 0 00:08:38.509 11:52:12 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.509 11:52:12 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:38.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.509 --rc genhtml_branch_coverage=1 00:08:38.509 --rc genhtml_function_coverage=1 00:08:38.509 --rc genhtml_legend=1 00:08:38.509 --rc geninfo_all_blocks=1 00:08:38.509 --rc geninfo_unexecuted_blocks=1 00:08:38.509 00:08:38.509 ' 00:08:38.509 11:52:12 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:38.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.509 --rc genhtml_branch_coverage=1 00:08:38.509 --rc genhtml_function_coverage=1 00:08:38.509 --rc genhtml_legend=1 00:08:38.509 --rc geninfo_all_blocks=1 00:08:38.509 --rc geninfo_unexecuted_blocks=1 00:08:38.509 00:08:38.509 ' 00:08:38.509 11:52:12 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:38.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.509 --rc genhtml_branch_coverage=1 00:08:38.509 --rc genhtml_function_coverage=1 00:08:38.509 --rc genhtml_legend=1 00:08:38.509 --rc geninfo_all_blocks=1 00:08:38.509 --rc geninfo_unexecuted_blocks=1 00:08:38.509 00:08:38.509 ' 00:08:38.509 11:52:12 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:38.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.509 --rc genhtml_branch_coverage=1 00:08:38.509 --rc genhtml_function_coverage=1 00:08:38.509 --rc genhtml_legend=1 00:08:38.509 --rc geninfo_all_blocks=1 00:08:38.509 --rc geninfo_unexecuted_blocks=1 00:08:38.509 00:08:38.509 ' 00:08:38.509 11:52:12 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:38.509 11:52:12 json_config -- nvmf/common.sh@7 -- # uname -s 00:08:38.509 11:52:12 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:38.509 11:52:12 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:38.509 11:52:12 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:38.509 11:52:12 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:38.509 11:52:12 json_config -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:38.509 11:52:12 json_config -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:08:38.509 11:52:12 json_config -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:38.509 11:52:12 json_config -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:08:38.509 11:52:12 json_config -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:38.509 11:52:12 json_config -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:38.509 11:52:12 json_config -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:08:38.509 11:52:12 json_config -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:08:38.509 11:52:12 json_config -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:08:38.509 11:52:12 json_config -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:38.510 11:52:12 json_config -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:38.510 11:52:12 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:08:38.510 11:52:12 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.510 11:52:12 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.510 11:52:12 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.510 11:52:12 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.510 11:52:12 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.510 11:52:12 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.510 11:52:12 json_config -- paths/export.sh@5 -- # export PATH 00:08:38.510 11:52:12 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.510 11:52:12 json_config -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:08:38.510 11:52:12 json_config -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:08:38.510 11:52:12 json_config -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:08:38.510 11:52:12 json_config -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:08:38.510 11:52:12 json_config -- nvmf/common.sh@50 -- # : 0 00:08:38.510 11:52:12 json_config -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:08:38.510 11:52:12 json_config -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:08:38.510 11:52:12 json_config -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:08:38.510 11:52:12 json_config -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:38.510 11:52:12 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:38.510 11:52:12 json_config -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:08:38.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: 
line 31: [: : integer expression expected 00:08:38.510 11:52:12 json_config -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:08:38.510 11:52:12 json_config -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:08:38.510 11:52:12 json_config -- nvmf/common.sh@54 -- # have_pci_nics=0 00:08:38.510 11:52:12 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:08:38.510 11:52:12 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:08:38.510 11:52:12 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:08:38.510 11:52:12 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:08:38.510 11:52:12 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:38.510 11:52:12 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:08:38.510 11:52:12 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:08:38.510 11:52:12 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:08:38.510 11:52:12 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:08:38.510 11:52:12 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:08:38.510 11:52:12 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:08:38.510 11:52:12 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:08:38.510 11:52:12 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:08:38.510 11:52:12 json_config -- 
json_config/json_config.sh@40 -- # last_event_id=0 00:08:38.510 11:52:12 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:38.510 11:52:12 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:08:38.510 INFO: JSON configuration test init 00:08:38.510 11:52:12 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:08:38.510 11:52:12 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:08:38.510 11:52:12 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:38.510 11:52:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:38.510 11:52:12 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:08:38.510 11:52:12 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:38.510 11:52:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:38.510 11:52:12 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:08:38.510 11:52:12 json_config -- json_config/common.sh@9 -- # local app=target 00:08:38.510 11:52:12 json_config -- json_config/common.sh@10 -- # shift 00:08:38.510 11:52:12 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:38.510 11:52:12 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:38.510 11:52:12 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:08:38.510 11:52:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:38.510 11:52:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:38.510 11:52:12 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=4087916 00:08:38.510 11:52:12 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:38.510 Waiting for target to run... 
00:08:38.510 11:52:12 json_config -- json_config/common.sh@25 -- # waitforlisten 4087916 /var/tmp/spdk_tgt.sock 00:08:38.510 11:52:12 json_config -- common/autotest_common.sh@835 -- # '[' -z 4087916 ']' 00:08:38.510 11:52:12 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:38.510 11:52:12 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:38.510 11:52:12 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:38.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:38.510 11:52:12 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:08:38.510 11:52:12 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:38.510 11:52:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:38.510 [2024-12-05 11:52:12.704590] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:08:38.510 [2024-12-05 11:52:12.704637] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4087916 ] 00:08:39.077 [2024-12-05 11:52:13.006572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.077 [2024-12-05 11:52:13.041107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.335 11:52:13 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:39.335 11:52:13 json_config -- common/autotest_common.sh@868 -- # return 0 00:08:39.335 11:52:13 json_config -- json_config/common.sh@26 -- # echo '' 00:08:39.335 00:08:39.335 11:52:13 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:08:39.335 11:52:13 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:08:39.335 11:52:13 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:39.335 11:52:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:39.335 11:52:13 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:08:39.335 11:52:13 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:08:39.335 11:52:13 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:39.335 11:52:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:39.594 11:52:13 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:08:39.594 11:52:13 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:08:39.594 11:52:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:08:42.986 11:52:16 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:08:42.986 11:52:16 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:08:42.986 11:52:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:42.986 11:52:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:42.986 11:52:16 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:08:42.986 11:52:16 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:08:42.986 11:52:16 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:08:42.986 11:52:16 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:08:42.986 11:52:16 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:08:42.986 11:52:16 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:08:42.986 11:52:16 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:08:42.986 11:52:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:08:42.986 11:52:16 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:08:42.986 11:52:16 json_config -- json_config/json_config.sh@51 -- # local get_types 00:08:42.986 11:52:16 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:08:42.986 11:52:16 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:08:42.986 11:52:16 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:08:42.986 11:52:16 json_config -- json_config/json_config.sh@54 -- # sort 00:08:42.986 11:52:16 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:08:42.986 11:52:16 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:08:42.986 11:52:16 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:08:42.986 11:52:16 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:08:42.986 11:52:16 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:42.986 11:52:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:42.986 11:52:16 json_config -- json_config/json_config.sh@62 -- # return 0 00:08:42.986 11:52:16 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:08:42.986 11:52:16 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:08:42.986 11:52:16 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:08:42.986 11:52:16 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:08:42.986 11:52:16 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:08:42.986 11:52:16 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:08:42.986 11:52:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:42.986 11:52:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:42.986 11:52:16 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:08:42.986 11:52:16 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:08:42.986 11:52:16 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:08:42.986 11:52:16 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:08:42.986 11:52:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:08:42.986 MallocForNvmf0 00:08:42.986 11:52:17 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:08:42.986 11:52:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:08:43.245 MallocForNvmf1 00:08:43.245 11:52:17 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:08:43.245 11:52:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:08:43.503 [2024-12-05 11:52:17.455176] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:43.503 11:52:17 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:43.503 11:52:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:43.503 11:52:17 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:08:43.503 11:52:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:08:43.762 11:52:17 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:08:43.762 11:52:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:08:44.020 11:52:18 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:08:44.020 11:52:18 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:08:44.020 [2024-12-05 11:52:18.173455] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:44.020 11:52:18 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:08:44.020 11:52:18 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:44.020 11:52:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:44.279 11:52:18 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:08:44.279 11:52:18 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:44.279 11:52:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:44.279 11:52:18 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:08:44.279 11:52:18 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:44.279 11:52:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:44.279 MallocBdevForConfigChangeCheck 00:08:44.279 11:52:18 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:08:44.279 11:52:18 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:44.279 11:52:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:44.279 11:52:18 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:08:44.279 11:52:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:44.846 11:52:18 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:08:44.846 INFO: shutting down applications... 00:08:44.846 11:52:18 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:08:44.846 11:52:18 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:08:44.846 11:52:18 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:08:44.846 11:52:18 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:08:46.745 Calling clear_iscsi_subsystem 00:08:46.745 Calling clear_nvmf_subsystem 00:08:46.745 Calling clear_nbd_subsystem 00:08:46.745 Calling clear_ublk_subsystem 00:08:46.745 Calling clear_vhost_blk_subsystem 00:08:46.745 Calling clear_vhost_scsi_subsystem 00:08:46.745 Calling clear_bdev_subsystem 00:08:46.745 11:52:20 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:08:46.745 11:52:20 json_config -- json_config/json_config.sh@350 -- # count=100 00:08:46.745 11:52:20 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:08:46.745 11:52:20 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:46.745 11:52:20 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:08:46.745 11:52:20 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:08:47.312 11:52:21 json_config -- json_config/json_config.sh@352 -- # break 00:08:47.312 11:52:21 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:08:47.312 11:52:21 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:08:47.312 11:52:21 json_config -- json_config/common.sh@31 -- # local app=target 00:08:47.312 11:52:21 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:47.312 11:52:21 json_config -- json_config/common.sh@35 -- # [[ -n 4087916 ]] 00:08:47.312 11:52:21 json_config -- json_config/common.sh@38 -- # kill -SIGINT 4087916 00:08:47.312 11:52:21 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:47.312 11:52:21 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:47.312 11:52:21 json_config -- json_config/common.sh@41 -- # kill -0 4087916 00:08:47.312 11:52:21 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:08:47.570 11:52:21 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:08:47.570 11:52:21 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:47.570 11:52:21 json_config -- json_config/common.sh@41 -- # kill -0 4087916 00:08:47.570 11:52:21 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:47.570 11:52:21 json_config -- json_config/common.sh@43 -- # break 00:08:47.570 11:52:21 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:47.570 11:52:21 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:47.570 SPDK target shutdown done 00:08:47.570 11:52:21 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:08:47.570 INFO: relaunching applications... 
00:08:47.570 11:52:21 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:47.570 11:52:21 json_config -- json_config/common.sh@9 -- # local app=target 00:08:47.829 11:52:21 json_config -- json_config/common.sh@10 -- # shift 00:08:47.829 11:52:21 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:47.829 11:52:21 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:47.829 11:52:21 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:08:47.829 11:52:21 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:47.829 11:52:21 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:47.829 11:52:21 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=4089650 00:08:47.829 11:52:21 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:47.829 11:52:21 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:47.829 Waiting for target to run... 00:08:47.829 11:52:21 json_config -- json_config/common.sh@25 -- # waitforlisten 4089650 /var/tmp/spdk_tgt.sock 00:08:47.829 11:52:21 json_config -- common/autotest_common.sh@835 -- # '[' -z 4089650 ']' 00:08:47.829 11:52:21 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:47.829 11:52:21 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:47.829 11:52:21 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:47.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:08:47.829 11:52:21 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:47.829 11:52:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:47.829 [2024-12-05 11:52:21.816181] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:08:47.829 [2024-12-05 11:52:21.816229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4089650 ] 00:08:48.087 [2024-12-05 11:52:22.100532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.087 [2024-12-05 11:52:22.134232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.366 [2024-12-05 11:52:25.167211] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:51.366 [2024-12-05 11:52:25.199581] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:51.366 11:52:25 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:51.366 11:52:25 json_config -- common/autotest_common.sh@868 -- # return 0 00:08:51.366 11:52:25 json_config -- json_config/common.sh@26 -- # echo '' 00:08:51.366 00:08:51.366 11:52:25 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:08:51.366 11:52:25 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:08:51.366 INFO: Checking if target configuration is the same... 
00:08:51.366 11:52:25 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:51.366 11:52:25 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:08:51.366 11:52:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:51.366 + '[' 2 -ne 2 ']' 00:08:51.366 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:08:51.366 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:08:51.366 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:51.366 +++ basename /dev/fd/62 00:08:51.366 ++ mktemp /tmp/62.XXX 00:08:51.366 + tmp_file_1=/tmp/62.BCe 00:08:51.366 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:51.366 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:51.366 + tmp_file_2=/tmp/spdk_tgt_config.json.4M6 00:08:51.366 + ret=0 00:08:51.366 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:51.624 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:51.624 + diff -u /tmp/62.BCe /tmp/spdk_tgt_config.json.4M6 00:08:51.624 + echo 'INFO: JSON config files are the same' 00:08:51.624 INFO: JSON config files are the same 00:08:51.624 + rm /tmp/62.BCe /tmp/spdk_tgt_config.json.4M6 00:08:51.624 + exit 0 00:08:51.624 11:52:25 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:08:51.624 11:52:25 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:08:51.624 INFO: changing configuration and checking if this can be detected... 
00:08:51.624 11:52:25 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:51.624 11:52:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:51.624 11:52:25 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:51.624 11:52:25 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:08:51.624 11:52:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:51.882 + '[' 2 -ne 2 ']' 00:08:51.882 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:08:51.882 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:08:51.882 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:51.882 +++ basename /dev/fd/62 00:08:51.882 ++ mktemp /tmp/62.XXX 00:08:51.882 + tmp_file_1=/tmp/62.OYQ 00:08:51.882 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:51.882 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:51.882 + tmp_file_2=/tmp/spdk_tgt_config.json.bNw 00:08:51.882 + ret=0 00:08:51.882 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:52.140 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:52.140 + diff -u /tmp/62.OYQ /tmp/spdk_tgt_config.json.bNw 00:08:52.140 + ret=1 00:08:52.140 + echo '=== Start of file: /tmp/62.OYQ ===' 00:08:52.140 + cat /tmp/62.OYQ 00:08:52.140 + echo '=== End of file: /tmp/62.OYQ ===' 00:08:52.140 + echo '' 00:08:52.140 + echo '=== Start of file: /tmp/spdk_tgt_config.json.bNw ===' 00:08:52.140 + cat /tmp/spdk_tgt_config.json.bNw 00:08:52.140 + echo '=== End of file: /tmp/spdk_tgt_config.json.bNw ===' 00:08:52.140 + echo '' 00:08:52.140 + rm /tmp/62.OYQ /tmp/spdk_tgt_config.json.bNw 00:08:52.140 + exit 1 00:08:52.140 11:52:26 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:08:52.140 INFO: configuration change detected. 
00:08:52.140 11:52:26 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:08:52.140 11:52:26 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:08:52.140 11:52:26 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:52.140 11:52:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:52.140 11:52:26 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:08:52.140 11:52:26 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:08:52.140 11:52:26 json_config -- json_config/json_config.sh@324 -- # [[ -n 4089650 ]] 00:08:52.140 11:52:26 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:08:52.140 11:52:26 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:08:52.140 11:52:26 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:52.140 11:52:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:52.140 11:52:26 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:08:52.140 11:52:26 json_config -- json_config/json_config.sh@200 -- # uname -s 00:08:52.140 11:52:26 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:08:52.140 11:52:26 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:08:52.140 11:52:26 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:08:52.140 11:52:26 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:08:52.140 11:52:26 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:52.140 11:52:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:52.140 11:52:26 json_config -- json_config/json_config.sh@330 -- # killprocess 4089650 00:08:52.140 11:52:26 json_config -- common/autotest_common.sh@954 -- # '[' -z 4089650 ']' 00:08:52.140 11:52:26 json_config -- common/autotest_common.sh@958 -- # kill -0 
4089650 00:08:52.140 11:52:26 json_config -- common/autotest_common.sh@959 -- # uname 00:08:52.140 11:52:26 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:52.140 11:52:26 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4089650 00:08:52.140 11:52:26 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:52.140 11:52:26 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:52.140 11:52:26 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4089650' 00:08:52.140 killing process with pid 4089650 00:08:52.140 11:52:26 json_config -- common/autotest_common.sh@973 -- # kill 4089650 00:08:52.140 11:52:26 json_config -- common/autotest_common.sh@978 -- # wait 4089650 00:08:54.672 11:52:28 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:54.672 11:52:28 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:08:54.672 11:52:28 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:54.672 11:52:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:54.672 11:52:28 json_config -- json_config/json_config.sh@335 -- # return 0 00:08:54.672 11:52:28 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:08:54.672 INFO: Success 00:08:54.672 00:08:54.672 real 0m15.897s 00:08:54.672 user 0m16.397s 00:08:54.672 sys 0m2.354s 00:08:54.672 11:52:28 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:54.672 11:52:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:54.672 ************************************ 00:08:54.672 END TEST json_config 00:08:54.672 ************************************ 00:08:54.672 11:52:28 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:08:54.672 11:52:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:54.672 11:52:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:54.672 11:52:28 -- common/autotest_common.sh@10 -- # set +x 00:08:54.672 ************************************ 00:08:54.672 START TEST json_config_extra_key 00:08:54.672 ************************************ 00:08:54.672 11:52:28 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:08:54.672 11:52:28 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:54.672 11:52:28 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:08:54.672 11:52:28 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:54.673 11:52:28 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:54.673 11:52:28 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:54.673 11:52:28 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:54.673 11:52:28 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:54.673 11:52:28 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:08:54.673 11:52:28 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:08:54.673 11:52:28 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:08:54.673 11:52:28 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:08:54.673 11:52:28 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:08:54.673 11:52:28 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:08:54.673 11:52:28 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:08:54.673 11:52:28 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:08:54.673 11:52:28 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:08:54.673 11:52:28 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:08:54.673 11:52:28 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:54.673 11:52:28 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:54.673 11:52:28 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:08:54.673 11:52:28 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:08:54.673 11:52:28 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:54.673 11:52:28 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:08:54.673 11:52:28 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:08:54.673 11:52:28 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:08:54.673 11:52:28 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:08:54.673 11:52:28 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:54.673 11:52:28 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:08:54.673 11:52:28 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:08:54.673 11:52:28 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:54.673 11:52:28 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:54.673 11:52:28 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:08:54.673 11:52:28 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:54.673 11:52:28 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:54.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.673 --rc genhtml_branch_coverage=1 00:08:54.673 --rc genhtml_function_coverage=1 00:08:54.673 --rc genhtml_legend=1 00:08:54.673 --rc geninfo_all_blocks=1 
00:08:54.673 --rc geninfo_unexecuted_blocks=1 00:08:54.673 00:08:54.673 ' 00:08:54.673 11:52:28 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:54.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.673 --rc genhtml_branch_coverage=1 00:08:54.673 --rc genhtml_function_coverage=1 00:08:54.673 --rc genhtml_legend=1 00:08:54.673 --rc geninfo_all_blocks=1 00:08:54.673 --rc geninfo_unexecuted_blocks=1 00:08:54.673 00:08:54.673 ' 00:08:54.673 11:52:28 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:54.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.673 --rc genhtml_branch_coverage=1 00:08:54.673 --rc genhtml_function_coverage=1 00:08:54.673 --rc genhtml_legend=1 00:08:54.673 --rc geninfo_all_blocks=1 00:08:54.673 --rc geninfo_unexecuted_blocks=1 00:08:54.673 00:08:54.673 ' 00:08:54.673 11:52:28 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:54.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.673 --rc genhtml_branch_coverage=1 00:08:54.673 --rc genhtml_function_coverage=1 00:08:54.673 --rc genhtml_legend=1 00:08:54.673 --rc geninfo_all_blocks=1 00:08:54.673 --rc geninfo_unexecuted_blocks=1 00:08:54.673 00:08:54.673 ' 00:08:54.673 11:52:28 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:54.673 11:52:28 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:08:54.673 11:52:28 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:54.673 11:52:28 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:54.673 11:52:28 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:54.673 11:52:28 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:54.673 11:52:28 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:08:54.673 11:52:28 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:08:54.673 11:52:28 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:54.673 11:52:28 json_config_extra_key -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:08:54.673 11:52:28 json_config_extra_key -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:54.673 11:52:28 json_config_extra_key -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:54.673 11:52:28 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:54.673 11:52:28 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:08:54.673 11:52:28 json_config_extra_key -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:08:54.673 11:52:28 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:54.673 11:52:28 json_config_extra_key -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:54.673 11:52:28 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:08:54.673 11:52:28 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:54.673 11:52:28 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:54.673 11:52:28 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:54.673 11:52:28 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.673 11:52:28 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.673 11:52:28 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.673 11:52:28 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:08:54.673 11:52:28 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.673 11:52:28 json_config_extra_key -- nvmf/common.sh@48 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:08:54.673 11:52:28 json_config_extra_key -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:08:54.673 11:52:28 json_config_extra_key -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:08:54.673 11:52:28 json_config_extra_key -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:08:54.673 11:52:28 json_config_extra_key -- nvmf/common.sh@50 -- # : 0 00:08:54.673 11:52:28 json_config_extra_key -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:08:54.673 11:52:28 json_config_extra_key -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:08:54.673 11:52:28 json_config_extra_key -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:08:54.673 11:52:28 json_config_extra_key -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:54.673 11:52:28 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:54.673 11:52:28 json_config_extra_key -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:08:54.673 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:08:54.673 11:52:28 json_config_extra_key -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:08:54.673 11:52:28 json_config_extra_key -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:08:54.673 11:52:28 json_config_extra_key -- nvmf/common.sh@54 -- # have_pci_nics=0 00:08:54.673 11:52:28 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:08:54.673 11:52:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:08:54.673 11:52:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:08:54.673 11:52:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:54.673 11:52:28 json_config_extra_key -- 
json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:08:54.673 11:52:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:54.673 11:52:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:08:54.673 11:52:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:08:54.673 11:52:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:08:54.673 11:52:28 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:54.673 11:52:28 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:08:54.673 INFO: launching applications... 00:08:54.674 11:52:28 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:08:54.674 11:52:28 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:08:54.674 11:52:28 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:08:54.674 11:52:28 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:54.674 11:52:28 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:54.674 11:52:28 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:08:54.674 11:52:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:54.674 11:52:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:54.674 11:52:28 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=4090922 00:08:54.674 11:52:28 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for 
target to run...' 00:08:54.674 Waiting for target to run... 00:08:54.674 11:52:28 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 4090922 /var/tmp/spdk_tgt.sock 00:08:54.674 11:52:28 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 4090922 ']' 00:08:54.674 11:52:28 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:08:54.674 11:52:28 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:54.674 11:52:28 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:54.674 11:52:28 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:54.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:54.674 11:52:28 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:54.674 11:52:28 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:54.674 [2024-12-05 11:52:28.662846] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:08:54.674 [2024-12-05 11:52:28.662894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4090922 ] 00:08:54.933 [2024-12-05 11:52:28.948540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.933 [2024-12-05 11:52:28.982327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.500 11:52:29 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:55.500 11:52:29 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:08:55.500 11:52:29 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:08:55.500 00:08:55.500 11:52:29 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:08:55.500 INFO: shutting down applications... 00:08:55.500 11:52:29 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:08:55.500 11:52:29 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:08:55.500 11:52:29 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:55.500 11:52:29 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 4090922 ]] 00:08:55.500 11:52:29 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 4090922 00:08:55.500 11:52:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:55.500 11:52:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:55.500 11:52:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4090922 00:08:55.500 11:52:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:56.068 11:52:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:56.068 11:52:30 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:08:56.068 11:52:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4090922 00:08:56.068 11:52:30 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:56.068 11:52:30 json_config_extra_key -- json_config/common.sh@43 -- # break 00:08:56.068 11:52:30 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:56.068 11:52:30 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:56.068 SPDK target shutdown done 00:08:56.068 11:52:30 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:08:56.068 Success 00:08:56.068 00:08:56.068 real 0m1.584s 00:08:56.068 user 0m1.384s 00:08:56.068 sys 0m0.395s 00:08:56.068 11:52:30 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.068 11:52:30 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:56.068 ************************************ 00:08:56.068 END TEST json_config_extra_key 00:08:56.068 ************************************ 00:08:56.068 11:52:30 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:56.068 11:52:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:56.068 11:52:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.068 11:52:30 -- common/autotest_common.sh@10 -- # set +x 00:08:56.068 ************************************ 00:08:56.068 START TEST alias_rpc 00:08:56.068 ************************************ 00:08:56.068 11:52:30 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:56.068 * Looking for test storage... 
00:08:56.068 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:08:56.068 11:52:30 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:56.068 11:52:30 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:08:56.068 11:52:30 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:56.068 11:52:30 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:56.068 11:52:30 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:56.068 11:52:30 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:56.068 11:52:30 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:56.068 11:52:30 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:56.068 11:52:30 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:56.068 11:52:30 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:56.068 11:52:30 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:56.068 11:52:30 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:56.068 11:52:30 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:56.068 11:52:30 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:56.068 11:52:30 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:56.068 11:52:30 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:56.068 11:52:30 alias_rpc -- scripts/common.sh@345 -- # : 1 00:08:56.068 11:52:30 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:56.068 11:52:30 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:56.068 11:52:30 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:56.068 11:52:30 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:08:56.068 11:52:30 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:56.068 11:52:30 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:08:56.068 11:52:30 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:56.068 11:52:30 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:56.068 11:52:30 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:08:56.068 11:52:30 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:56.068 11:52:30 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:08:56.068 11:52:30 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:56.068 11:52:30 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:56.068 11:52:30 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:56.068 11:52:30 alias_rpc -- scripts/common.sh@368 -- # return 0 00:08:56.069 11:52:30 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:56.069 11:52:30 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:56.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.069 --rc genhtml_branch_coverage=1 00:08:56.069 --rc genhtml_function_coverage=1 00:08:56.069 --rc genhtml_legend=1 00:08:56.069 --rc geninfo_all_blocks=1 00:08:56.069 --rc geninfo_unexecuted_blocks=1 00:08:56.069 00:08:56.069 ' 00:08:56.069 11:52:30 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:56.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.069 --rc genhtml_branch_coverage=1 00:08:56.069 --rc genhtml_function_coverage=1 00:08:56.069 --rc genhtml_legend=1 00:08:56.069 --rc geninfo_all_blocks=1 00:08:56.069 --rc geninfo_unexecuted_blocks=1 00:08:56.069 00:08:56.069 ' 00:08:56.069 11:52:30 alias_rpc -- common/autotest_common.sh@1725 -- 
# export 'LCOV=lcov 00:08:56.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.069 --rc genhtml_branch_coverage=1 00:08:56.069 --rc genhtml_function_coverage=1 00:08:56.069 --rc genhtml_legend=1 00:08:56.069 --rc geninfo_all_blocks=1 00:08:56.069 --rc geninfo_unexecuted_blocks=1 00:08:56.069 00:08:56.069 ' 00:08:56.069 11:52:30 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:56.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.069 --rc genhtml_branch_coverage=1 00:08:56.069 --rc genhtml_function_coverage=1 00:08:56.069 --rc genhtml_legend=1 00:08:56.069 --rc geninfo_all_blocks=1 00:08:56.069 --rc geninfo_unexecuted_blocks=1 00:08:56.069 00:08:56.069 ' 00:08:56.069 11:52:30 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:56.069 11:52:30 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=4091213 00:08:56.069 11:52:30 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 4091213 00:08:56.069 11:52:30 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:56.069 11:52:30 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 4091213 ']' 00:08:56.069 11:52:30 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.069 11:52:30 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:56.069 11:52:30 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.069 11:52:30 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:56.069 11:52:30 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.328 [2024-12-05 11:52:30.306115] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:08:56.328 [2024-12-05 11:52:30.306163] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4091213 ] 00:08:56.328 [2024-12-05 11:52:30.366350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.328 [2024-12-05 11:52:30.409611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.588 11:52:30 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:56.588 11:52:30 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:56.588 11:52:30 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:08:56.847 11:52:30 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 4091213 00:08:56.847 11:52:30 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 4091213 ']' 00:08:56.847 11:52:30 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 4091213 00:08:56.847 11:52:30 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:08:56.847 11:52:30 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:56.847 11:52:30 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4091213 00:08:56.847 11:52:30 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:56.847 11:52:30 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:56.847 11:52:30 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4091213' 00:08:56.847 killing process with pid 4091213 00:08:56.847 11:52:30 alias_rpc -- common/autotest_common.sh@973 -- # kill 4091213 00:08:56.847 11:52:30 alias_rpc -- common/autotest_common.sh@978 -- # wait 4091213 00:08:57.106 00:08:57.106 real 0m1.126s 00:08:57.106 user 0m1.183s 00:08:57.106 sys 0m0.403s 00:08:57.106 11:52:31 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.106 11:52:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.106 ************************************ 00:08:57.106 END TEST alias_rpc 00:08:57.106 ************************************ 00:08:57.106 11:52:31 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:08:57.106 11:52:31 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:08:57.106 11:52:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:57.106 11:52:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.106 11:52:31 -- common/autotest_common.sh@10 -- # set +x 00:08:57.106 ************************************ 00:08:57.106 START TEST spdkcli_tcp 00:08:57.106 ************************************ 00:08:57.106 11:52:31 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:08:57.365 * Looking for test storage... 
00:08:57.365 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:08:57.365 11:52:31 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:57.365 11:52:31 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:08:57.365 11:52:31 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:57.365 11:52:31 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:57.365 11:52:31 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:57.365 11:52:31 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:57.365 11:52:31 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:57.365 11:52:31 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:57.365 11:52:31 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:57.365 11:52:31 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:57.365 11:52:31 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:57.365 11:52:31 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:57.365 11:52:31 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:57.365 11:52:31 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:57.365 11:52:31 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:57.365 11:52:31 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:57.365 11:52:31 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:08:57.365 11:52:31 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:57.365 11:52:31 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:57.365 11:52:31 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:57.365 11:52:31 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:08:57.365 11:52:31 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:57.365 11:52:31 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:08:57.365 11:52:31 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:57.365 11:52:31 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:57.365 11:52:31 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:08:57.365 11:52:31 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:57.365 11:52:31 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:08:57.365 11:52:31 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:57.365 11:52:31 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:57.365 11:52:31 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:57.365 11:52:31 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:08:57.365 11:52:31 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:57.365 11:52:31 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:57.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.365 --rc genhtml_branch_coverage=1 00:08:57.365 --rc genhtml_function_coverage=1 00:08:57.365 --rc genhtml_legend=1 00:08:57.365 --rc geninfo_all_blocks=1 00:08:57.365 --rc geninfo_unexecuted_blocks=1 00:08:57.365 00:08:57.365 ' 00:08:57.366 11:52:31 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:57.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.366 --rc genhtml_branch_coverage=1 00:08:57.366 --rc genhtml_function_coverage=1 00:08:57.366 --rc genhtml_legend=1 00:08:57.366 --rc geninfo_all_blocks=1 00:08:57.366 --rc geninfo_unexecuted_blocks=1 00:08:57.366 00:08:57.366 ' 00:08:57.366 11:52:31 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:57.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.366 --rc genhtml_branch_coverage=1 00:08:57.366 --rc genhtml_function_coverage=1 00:08:57.366 --rc genhtml_legend=1 00:08:57.366 --rc geninfo_all_blocks=1 00:08:57.366 --rc geninfo_unexecuted_blocks=1 00:08:57.366 00:08:57.366 ' 00:08:57.366 11:52:31 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:57.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.366 --rc genhtml_branch_coverage=1 00:08:57.366 --rc genhtml_function_coverage=1 00:08:57.366 --rc genhtml_legend=1 00:08:57.366 --rc geninfo_all_blocks=1 00:08:57.366 --rc geninfo_unexecuted_blocks=1 00:08:57.366 00:08:57.366 ' 00:08:57.366 11:52:31 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:08:57.366 11:52:31 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:08:57.366 11:52:31 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:08:57.366 11:52:31 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:57.366 11:52:31 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:57.366 11:52:31 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:57.366 11:52:31 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:57.366 11:52:31 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:57.366 11:52:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:57.366 11:52:31 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:57.366 11:52:31 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=4091500 00:08:57.366 11:52:31 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 4091500 00:08:57.366 11:52:31 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 4091500 ']' 00:08:57.366 11:52:31 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.366 11:52:31 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:57.366 11:52:31 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.366 11:52:31 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:57.366 11:52:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:57.366 [2024-12-05 11:52:31.503676] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:08:57.366 [2024-12-05 11:52:31.503722] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4091500 ] 00:08:57.625 [2024-12-05 11:52:31.575291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:57.625 [2024-12-05 11:52:31.616489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.625 [2024-12-05 11:52:31.616490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.884 11:52:31 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:57.884 11:52:31 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:08:57.884 11:52:31 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=4091513 00:08:57.884 11:52:31 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:57.884 11:52:31 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat 
TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:57.884 [ 00:08:57.884 "bdev_malloc_delete", 00:08:57.884 "bdev_malloc_create", 00:08:57.884 "bdev_null_resize", 00:08:57.884 "bdev_null_delete", 00:08:57.884 "bdev_null_create", 00:08:57.884 "bdev_nvme_cuse_unregister", 00:08:57.884 "bdev_nvme_cuse_register", 00:08:57.884 "bdev_opal_new_user", 00:08:57.884 "bdev_opal_set_lock_state", 00:08:57.884 "bdev_opal_delete", 00:08:57.884 "bdev_opal_get_info", 00:08:57.884 "bdev_opal_create", 00:08:57.884 "bdev_nvme_opal_revert", 00:08:57.884 "bdev_nvme_opal_init", 00:08:57.884 "bdev_nvme_send_cmd", 00:08:57.884 "bdev_nvme_set_keys", 00:08:57.884 "bdev_nvme_get_path_iostat", 00:08:57.884 "bdev_nvme_get_mdns_discovery_info", 00:08:57.884 "bdev_nvme_stop_mdns_discovery", 00:08:57.884 "bdev_nvme_start_mdns_discovery", 00:08:57.884 "bdev_nvme_set_multipath_policy", 00:08:57.884 "bdev_nvme_set_preferred_path", 00:08:57.884 "bdev_nvme_get_io_paths", 00:08:57.884 "bdev_nvme_remove_error_injection", 00:08:57.884 "bdev_nvme_add_error_injection", 00:08:57.884 "bdev_nvme_get_discovery_info", 00:08:57.884 "bdev_nvme_stop_discovery", 00:08:57.884 "bdev_nvme_start_discovery", 00:08:57.884 "bdev_nvme_get_controller_health_info", 00:08:57.884 "bdev_nvme_disable_controller", 00:08:57.884 "bdev_nvme_enable_controller", 00:08:57.884 "bdev_nvme_reset_controller", 00:08:57.884 "bdev_nvme_get_transport_statistics", 00:08:57.884 "bdev_nvme_apply_firmware", 00:08:57.884 "bdev_nvme_detach_controller", 00:08:57.884 "bdev_nvme_get_controllers", 00:08:57.884 "bdev_nvme_attach_controller", 00:08:57.884 "bdev_nvme_set_hotplug", 00:08:57.884 "bdev_nvme_set_options", 00:08:57.884 "bdev_passthru_delete", 00:08:57.884 "bdev_passthru_create", 00:08:57.884 "bdev_lvol_set_parent_bdev", 00:08:57.884 "bdev_lvol_set_parent", 00:08:57.884 "bdev_lvol_check_shallow_copy", 00:08:57.884 "bdev_lvol_start_shallow_copy", 00:08:57.884 "bdev_lvol_grow_lvstore", 00:08:57.884 "bdev_lvol_get_lvols", 00:08:57.884 
"bdev_lvol_get_lvstores", 00:08:57.884 "bdev_lvol_delete", 00:08:57.884 "bdev_lvol_set_read_only", 00:08:57.884 "bdev_lvol_resize", 00:08:57.884 "bdev_lvol_decouple_parent", 00:08:57.884 "bdev_lvol_inflate", 00:08:57.884 "bdev_lvol_rename", 00:08:57.884 "bdev_lvol_clone_bdev", 00:08:57.884 "bdev_lvol_clone", 00:08:57.884 "bdev_lvol_snapshot", 00:08:57.884 "bdev_lvol_create", 00:08:57.884 "bdev_lvol_delete_lvstore", 00:08:57.884 "bdev_lvol_rename_lvstore", 00:08:57.884 "bdev_lvol_create_lvstore", 00:08:57.884 "bdev_raid_set_options", 00:08:57.884 "bdev_raid_remove_base_bdev", 00:08:57.884 "bdev_raid_add_base_bdev", 00:08:57.884 "bdev_raid_delete", 00:08:57.884 "bdev_raid_create", 00:08:57.884 "bdev_raid_get_bdevs", 00:08:57.884 "bdev_error_inject_error", 00:08:57.884 "bdev_error_delete", 00:08:57.884 "bdev_error_create", 00:08:57.885 "bdev_split_delete", 00:08:57.885 "bdev_split_create", 00:08:57.885 "bdev_delay_delete", 00:08:57.885 "bdev_delay_create", 00:08:57.885 "bdev_delay_update_latency", 00:08:57.885 "bdev_zone_block_delete", 00:08:57.885 "bdev_zone_block_create", 00:08:57.885 "blobfs_create", 00:08:57.885 "blobfs_detect", 00:08:57.885 "blobfs_set_cache_size", 00:08:57.885 "bdev_aio_delete", 00:08:57.885 "bdev_aio_rescan", 00:08:57.885 "bdev_aio_create", 00:08:57.885 "bdev_ftl_set_property", 00:08:57.885 "bdev_ftl_get_properties", 00:08:57.885 "bdev_ftl_get_stats", 00:08:57.885 "bdev_ftl_unmap", 00:08:57.885 "bdev_ftl_unload", 00:08:57.885 "bdev_ftl_delete", 00:08:57.885 "bdev_ftl_load", 00:08:57.885 "bdev_ftl_create", 00:08:57.885 "bdev_virtio_attach_controller", 00:08:57.885 "bdev_virtio_scsi_get_devices", 00:08:57.885 "bdev_virtio_detach_controller", 00:08:57.885 "bdev_virtio_blk_set_hotplug", 00:08:57.885 "bdev_iscsi_delete", 00:08:57.885 "bdev_iscsi_create", 00:08:57.885 "bdev_iscsi_set_options", 00:08:57.885 "accel_error_inject_error", 00:08:57.885 "ioat_scan_accel_module", 00:08:57.885 "dsa_scan_accel_module", 00:08:57.885 "iaa_scan_accel_module", 
00:08:57.885 "vfu_virtio_create_fs_endpoint", 00:08:57.885 "vfu_virtio_create_scsi_endpoint", 00:08:57.885 "vfu_virtio_scsi_remove_target", 00:08:57.885 "vfu_virtio_scsi_add_target", 00:08:57.885 "vfu_virtio_create_blk_endpoint", 00:08:57.885 "vfu_virtio_delete_endpoint", 00:08:57.885 "keyring_file_remove_key", 00:08:57.885 "keyring_file_add_key", 00:08:57.885 "keyring_linux_set_options", 00:08:57.885 "fsdev_aio_delete", 00:08:57.885 "fsdev_aio_create", 00:08:57.885 "iscsi_get_histogram", 00:08:57.885 "iscsi_enable_histogram", 00:08:57.885 "iscsi_set_options", 00:08:57.885 "iscsi_get_auth_groups", 00:08:57.885 "iscsi_auth_group_remove_secret", 00:08:57.885 "iscsi_auth_group_add_secret", 00:08:57.885 "iscsi_delete_auth_group", 00:08:57.885 "iscsi_create_auth_group", 00:08:57.885 "iscsi_set_discovery_auth", 00:08:57.885 "iscsi_get_options", 00:08:57.885 "iscsi_target_node_request_logout", 00:08:57.885 "iscsi_target_node_set_redirect", 00:08:57.885 "iscsi_target_node_set_auth", 00:08:57.885 "iscsi_target_node_add_lun", 00:08:57.885 "iscsi_get_stats", 00:08:57.885 "iscsi_get_connections", 00:08:57.885 "iscsi_portal_group_set_auth", 00:08:57.885 "iscsi_start_portal_group", 00:08:57.885 "iscsi_delete_portal_group", 00:08:57.885 "iscsi_create_portal_group", 00:08:57.885 "iscsi_get_portal_groups", 00:08:57.885 "iscsi_delete_target_node", 00:08:57.885 "iscsi_target_node_remove_pg_ig_maps", 00:08:57.885 "iscsi_target_node_add_pg_ig_maps", 00:08:57.885 "iscsi_create_target_node", 00:08:57.885 "iscsi_get_target_nodes", 00:08:57.885 "iscsi_delete_initiator_group", 00:08:57.885 "iscsi_initiator_group_remove_initiators", 00:08:57.885 "iscsi_initiator_group_add_initiators", 00:08:57.885 "iscsi_create_initiator_group", 00:08:57.885 "iscsi_get_initiator_groups", 00:08:57.885 "nvmf_set_crdt", 00:08:57.885 "nvmf_set_config", 00:08:57.885 "nvmf_set_max_subsystems", 00:08:57.885 "nvmf_stop_mdns_prr", 00:08:57.885 "nvmf_publish_mdns_prr", 00:08:57.885 "nvmf_subsystem_get_listeners", 
00:08:57.885 "nvmf_subsystem_get_qpairs", 00:08:57.885 "nvmf_subsystem_get_controllers", 00:08:57.885 "nvmf_get_stats", 00:08:57.885 "nvmf_get_transports", 00:08:57.885 "nvmf_create_transport", 00:08:57.885 "nvmf_get_targets", 00:08:57.885 "nvmf_delete_target", 00:08:57.885 "nvmf_create_target", 00:08:57.885 "nvmf_subsystem_allow_any_host", 00:08:57.885 "nvmf_subsystem_set_keys", 00:08:57.885 "nvmf_subsystem_remove_host", 00:08:57.885 "nvmf_subsystem_add_host", 00:08:57.885 "nvmf_ns_remove_host", 00:08:57.885 "nvmf_ns_add_host", 00:08:57.885 "nvmf_subsystem_remove_ns", 00:08:57.885 "nvmf_subsystem_set_ns_ana_group", 00:08:57.885 "nvmf_subsystem_add_ns", 00:08:57.885 "nvmf_subsystem_listener_set_ana_state", 00:08:57.885 "nvmf_discovery_get_referrals", 00:08:57.885 "nvmf_discovery_remove_referral", 00:08:57.885 "nvmf_discovery_add_referral", 00:08:57.885 "nvmf_subsystem_remove_listener", 00:08:57.885 "nvmf_subsystem_add_listener", 00:08:57.885 "nvmf_delete_subsystem", 00:08:57.885 "nvmf_create_subsystem", 00:08:57.885 "nvmf_get_subsystems", 00:08:57.885 "env_dpdk_get_mem_stats", 00:08:57.885 "nbd_get_disks", 00:08:57.885 "nbd_stop_disk", 00:08:57.885 "nbd_start_disk", 00:08:57.885 "ublk_recover_disk", 00:08:57.885 "ublk_get_disks", 00:08:57.885 "ublk_stop_disk", 00:08:57.885 "ublk_start_disk", 00:08:57.885 "ublk_destroy_target", 00:08:57.885 "ublk_create_target", 00:08:57.885 "virtio_blk_create_transport", 00:08:57.885 "virtio_blk_get_transports", 00:08:57.885 "vhost_controller_set_coalescing", 00:08:57.885 "vhost_get_controllers", 00:08:57.885 "vhost_delete_controller", 00:08:57.885 "vhost_create_blk_controller", 00:08:57.885 "vhost_scsi_controller_remove_target", 00:08:57.885 "vhost_scsi_controller_add_target", 00:08:57.885 "vhost_start_scsi_controller", 00:08:57.885 "vhost_create_scsi_controller", 00:08:57.885 "thread_set_cpumask", 00:08:57.885 "scheduler_set_options", 00:08:57.885 "framework_get_governor", 00:08:57.885 "framework_get_scheduler", 00:08:57.885 
"framework_set_scheduler", 00:08:57.885 "framework_get_reactors", 00:08:57.885 "thread_get_io_channels", 00:08:57.885 "thread_get_pollers", 00:08:57.885 "thread_get_stats", 00:08:57.885 "framework_monitor_context_switch", 00:08:57.885 "spdk_kill_instance", 00:08:57.885 "log_enable_timestamps", 00:08:57.885 "log_get_flags", 00:08:57.885 "log_clear_flag", 00:08:57.885 "log_set_flag", 00:08:57.885 "log_get_level", 00:08:57.885 "log_set_level", 00:08:57.885 "log_get_print_level", 00:08:57.885 "log_set_print_level", 00:08:57.885 "framework_enable_cpumask_locks", 00:08:57.885 "framework_disable_cpumask_locks", 00:08:57.885 "framework_wait_init", 00:08:57.885 "framework_start_init", 00:08:57.885 "scsi_get_devices", 00:08:57.885 "bdev_get_histogram", 00:08:57.885 "bdev_enable_histogram", 00:08:57.885 "bdev_set_qos_limit", 00:08:57.885 "bdev_set_qd_sampling_period", 00:08:57.885 "bdev_get_bdevs", 00:08:57.885 "bdev_reset_iostat", 00:08:57.885 "bdev_get_iostat", 00:08:57.885 "bdev_examine", 00:08:57.885 "bdev_wait_for_examine", 00:08:57.885 "bdev_set_options", 00:08:57.885 "accel_get_stats", 00:08:57.885 "accel_set_options", 00:08:57.885 "accel_set_driver", 00:08:57.885 "accel_crypto_key_destroy", 00:08:57.885 "accel_crypto_keys_get", 00:08:57.885 "accel_crypto_key_create", 00:08:57.885 "accel_assign_opc", 00:08:57.885 "accel_get_module_info", 00:08:57.885 "accel_get_opc_assignments", 00:08:57.885 "vmd_rescan", 00:08:57.885 "vmd_remove_device", 00:08:57.885 "vmd_enable", 00:08:57.885 "sock_get_default_impl", 00:08:57.885 "sock_set_default_impl", 00:08:57.885 "sock_impl_set_options", 00:08:57.885 "sock_impl_get_options", 00:08:57.885 "iobuf_get_stats", 00:08:57.885 "iobuf_set_options", 00:08:57.885 "keyring_get_keys", 00:08:57.885 "vfu_tgt_set_base_path", 00:08:57.886 "framework_get_pci_devices", 00:08:57.886 "framework_get_config", 00:08:57.886 "framework_get_subsystems", 00:08:57.886 "fsdev_set_opts", 00:08:57.886 "fsdev_get_opts", 00:08:57.886 "trace_get_info", 
00:08:57.886 "trace_get_tpoint_group_mask", 00:08:57.886 "trace_disable_tpoint_group", 00:08:57.886 "trace_enable_tpoint_group", 00:08:57.886 "trace_clear_tpoint_mask", 00:08:57.886 "trace_set_tpoint_mask", 00:08:57.886 "notify_get_notifications", 00:08:57.886 "notify_get_types", 00:08:57.886 "spdk_get_version", 00:08:57.886 "rpc_get_methods" 00:08:57.886 ] 00:08:57.886 11:52:32 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:57.886 11:52:32 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:57.886 11:52:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:57.886 11:52:32 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:57.886 11:52:32 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 4091500 00:08:57.886 11:52:32 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 4091500 ']' 00:08:57.886 11:52:32 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 4091500 00:08:57.886 11:52:32 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:08:58.145 11:52:32 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:58.145 11:52:32 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4091500 00:08:58.145 11:52:32 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:58.145 11:52:32 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:58.145 11:52:32 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4091500' 00:08:58.145 killing process with pid 4091500 00:08:58.145 11:52:32 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 4091500 00:08:58.145 11:52:32 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 4091500 00:08:58.404 00:08:58.404 real 0m1.160s 00:08:58.404 user 0m1.973s 00:08:58.404 sys 0m0.434s 00:08:58.404 11:52:32 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:58.404 11:52:32 spdkcli_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:08:58.404 ************************************ 00:08:58.404 END TEST spdkcli_tcp 00:08:58.404 ************************************ 00:08:58.404 11:52:32 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:58.404 11:52:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:58.404 11:52:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.404 11:52:32 -- common/autotest_common.sh@10 -- # set +x 00:08:58.404 ************************************ 00:08:58.404 START TEST dpdk_mem_utility 00:08:58.404 ************************************ 00:08:58.404 11:52:32 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:58.404 * Looking for test storage... 00:08:58.404 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:08:58.404 11:52:32 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:58.404 11:52:32 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:08:58.404 11:52:32 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:58.664 11:52:32 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:58.664 11:52:32 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:58.664 11:52:32 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:58.664 11:52:32 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:58.664 11:52:32 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:08:58.664 11:52:32 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:08:58.664 11:52:32 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:08:58.664 11:52:32 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 
00:08:58.664 11:52:32 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:08:58.664 11:52:32 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:08:58.664 11:52:32 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:08:58.664 11:52:32 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:58.664 11:52:32 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:08:58.664 11:52:32 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:08:58.664 11:52:32 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:58.664 11:52:32 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:58.664 11:52:32 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:08:58.664 11:52:32 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:08:58.664 11:52:32 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:58.664 11:52:32 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:08:58.664 11:52:32 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:08:58.664 11:52:32 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:08:58.664 11:52:32 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:08:58.664 11:52:32 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:58.664 11:52:32 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:08:58.664 11:52:32 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:08:58.664 11:52:32 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:58.664 11:52:32 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:58.664 11:52:32 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:08:58.664 11:52:32 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:58.664 11:52:32 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 
00:08:58.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.664 --rc genhtml_branch_coverage=1 00:08:58.664 --rc genhtml_function_coverage=1 00:08:58.664 --rc genhtml_legend=1 00:08:58.664 --rc geninfo_all_blocks=1 00:08:58.664 --rc geninfo_unexecuted_blocks=1 00:08:58.664 00:08:58.664 ' 00:08:58.664 11:52:32 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:58.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.664 --rc genhtml_branch_coverage=1 00:08:58.664 --rc genhtml_function_coverage=1 00:08:58.664 --rc genhtml_legend=1 00:08:58.664 --rc geninfo_all_blocks=1 00:08:58.664 --rc geninfo_unexecuted_blocks=1 00:08:58.664 00:08:58.664 ' 00:08:58.664 11:52:32 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:58.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.664 --rc genhtml_branch_coverage=1 00:08:58.664 --rc genhtml_function_coverage=1 00:08:58.664 --rc genhtml_legend=1 00:08:58.664 --rc geninfo_all_blocks=1 00:08:58.664 --rc geninfo_unexecuted_blocks=1 00:08:58.664 00:08:58.664 ' 00:08:58.664 11:52:32 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:58.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.664 --rc genhtml_branch_coverage=1 00:08:58.664 --rc genhtml_function_coverage=1 00:08:58.664 --rc genhtml_legend=1 00:08:58.664 --rc geninfo_all_blocks=1 00:08:58.664 --rc geninfo_unexecuted_blocks=1 00:08:58.664 00:08:58.664 ' 00:08:58.665 11:52:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:08:58.665 11:52:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=4091802 00:08:58.665 11:52:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 4091802 00:08:58.665 11:52:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:58.665 11:52:32 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 4091802 ']' 00:08:58.665 11:52:32 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.665 11:52:32 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:58.665 11:52:32 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.665 11:52:32 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:58.665 11:52:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:58.665 [2024-12-05 11:52:32.731078] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:08:58.665 [2024-12-05 11:52:32.731125] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4091802 ] 00:08:58.665 [2024-12-05 11:52:32.806966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.665 [2024-12-05 11:52:32.848821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.933 11:52:33 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:58.933 11:52:33 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:08:58.933 11:52:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:58.933 11:52:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:58.933 11:52:33 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.933 
11:52:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:58.933 { 00:08:58.933 "filename": "/tmp/spdk_mem_dump.txt" 00:08:58.933 } 00:08:58.933 11:52:33 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.933 11:52:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:08:58.933 DPDK memory size 818.000000 MiB in 1 heap(s) 00:08:58.933 1 heaps totaling size 818.000000 MiB 00:08:58.933 size: 818.000000 MiB heap id: 0 00:08:58.933 end heaps---------- 00:08:58.933 9 mempools totaling size 603.782043 MiB 00:08:58.933 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:58.933 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:58.933 size: 100.555481 MiB name: bdev_io_4091802 00:08:58.933 size: 50.003479 MiB name: msgpool_4091802 00:08:58.933 size: 36.509338 MiB name: fsdev_io_4091802 00:08:58.933 size: 21.763794 MiB name: PDU_Pool 00:08:58.933 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:58.933 size: 4.133484 MiB name: evtpool_4091802 00:08:58.933 size: 0.026123 MiB name: Session_Pool 00:08:58.933 end mempools------- 00:08:58.933 6 memzones totaling size 4.142822 MiB 00:08:58.933 size: 1.000366 MiB name: RG_ring_0_4091802 00:08:58.933 size: 1.000366 MiB name: RG_ring_1_4091802 00:08:58.933 size: 1.000366 MiB name: RG_ring_4_4091802 00:08:58.933 size: 1.000366 MiB name: RG_ring_5_4091802 00:08:58.933 size: 0.125366 MiB name: RG_ring_2_4091802 00:08:58.933 size: 0.015991 MiB name: RG_ring_3_4091802 00:08:58.933 end memzones------- 00:08:58.933 11:52:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:08:59.192 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:08:59.192 list of free elements. 
size: 10.852478 MiB 00:08:59.192 element at address: 0x200019200000 with size: 0.999878 MiB 00:08:59.192 element at address: 0x200019400000 with size: 0.999878 MiB 00:08:59.192 element at address: 0x200000400000 with size: 0.998535 MiB 00:08:59.192 element at address: 0x200032000000 with size: 0.994446 MiB 00:08:59.192 element at address: 0x200006400000 with size: 0.959839 MiB 00:08:59.192 element at address: 0x200012c00000 with size: 0.944275 MiB 00:08:59.192 element at address: 0x200019600000 with size: 0.936584 MiB 00:08:59.192 element at address: 0x200000200000 with size: 0.717346 MiB 00:08:59.192 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:08:59.192 element at address: 0x200000c00000 with size: 0.495422 MiB 00:08:59.192 element at address: 0x20000a600000 with size: 0.490723 MiB 00:08:59.192 element at address: 0x200019800000 with size: 0.485657 MiB 00:08:59.192 element at address: 0x200003e00000 with size: 0.481934 MiB 00:08:59.192 element at address: 0x200028200000 with size: 0.410034 MiB 00:08:59.192 element at address: 0x200000800000 with size: 0.355042 MiB 00:08:59.192 list of standard malloc elements. 
size: 199.218628 MiB 00:08:59.192 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:08:59.192 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:08:59.192 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:08:59.192 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:08:59.192 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:08:59.192 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:08:59.192 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:08:59.192 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:08:59.192 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:08:59.192 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:08:59.192 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:08:59.192 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:08:59.192 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:08:59.192 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:08:59.192 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:08:59.192 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:08:59.192 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:08:59.192 element at address: 0x20000085b040 with size: 0.000183 MiB 00:08:59.192 element at address: 0x20000085f300 with size: 0.000183 MiB 00:08:59.192 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:08:59.192 element at address: 0x20000087f680 with size: 0.000183 MiB 00:08:59.192 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:08:59.192 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:08:59.192 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:08:59.192 element at address: 0x200000cff000 with size: 0.000183 MiB 00:08:59.192 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:08:59.192 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:08:59.192 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:08:59.192 element at address: 0x200003efb980 with size: 0.000183 MiB 00:08:59.192 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:08:59.192 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:08:59.192 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:08:59.192 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:08:59.192 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:08:59.192 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:08:59.192 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:08:59.192 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:08:59.192 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:08:59.192 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:08:59.192 element at address: 0x200028268f80 with size: 0.000183 MiB 00:08:59.192 element at address: 0x200028269040 with size: 0.000183 MiB 00:08:59.192 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:08:59.192 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:08:59.192 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:08:59.192 list of memzone associated elements. 
size: 607.928894 MiB 00:08:59.192 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:08:59.192 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:59.192 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:08:59.192 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:59.192 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:08:59.192 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_4091802_0 00:08:59.192 element at address: 0x200000dff380 with size: 48.003052 MiB 00:08:59.192 associated memzone info: size: 48.002930 MiB name: MP_msgpool_4091802_0 00:08:59.192 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:08:59.192 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_4091802_0 00:08:59.192 element at address: 0x2000199be940 with size: 20.255554 MiB 00:08:59.192 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:59.192 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:08:59.192 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:59.192 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:08:59.192 associated memzone info: size: 3.000122 MiB name: MP_evtpool_4091802_0 00:08:59.192 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:08:59.192 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_4091802 00:08:59.192 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:08:59.192 associated memzone info: size: 1.007996 MiB name: MP_evtpool_4091802 00:08:59.192 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:08:59.192 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:59.192 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:08:59.192 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:59.192 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:08:59.192 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:59.192 element at address: 0x200003efba40 with size: 1.008118 MiB 00:08:59.192 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:59.192 element at address: 0x200000cff180 with size: 1.000488 MiB 00:08:59.192 associated memzone info: size: 1.000366 MiB name: RG_ring_0_4091802 00:08:59.192 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:08:59.192 associated memzone info: size: 1.000366 MiB name: RG_ring_1_4091802 00:08:59.192 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:08:59.192 associated memzone info: size: 1.000366 MiB name: RG_ring_4_4091802 00:08:59.192 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:08:59.192 associated memzone info: size: 1.000366 MiB name: RG_ring_5_4091802 00:08:59.192 element at address: 0x20000087f740 with size: 0.500488 MiB 00:08:59.192 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_4091802 00:08:59.192 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:08:59.192 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_4091802 00:08:59.192 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:08:59.192 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:59.192 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:08:59.192 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:59.192 element at address: 0x20001987c540 with size: 0.250488 MiB 00:08:59.192 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:08:59.192 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:08:59.192 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_4091802 00:08:59.192 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:08:59.193 associated memzone info: size: 0.125366 MiB name: RG_ring_2_4091802 00:08:59.193 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:08:59.193 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:59.193 element at address: 0x200028269100 with size: 0.023743 MiB 00:08:59.193 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:59.193 element at address: 0x20000085b100 with size: 0.016113 MiB 00:08:59.193 associated memzone info: size: 0.015991 MiB name: RG_ring_3_4091802 00:08:59.193 element at address: 0x20002826f240 with size: 0.002441 MiB 00:08:59.193 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:59.193 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:08:59.193 associated memzone info: size: 0.000183 MiB name: MP_msgpool_4091802 00:08:59.193 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:08:59.193 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_4091802 00:08:59.193 element at address: 0x20000085af00 with size: 0.000305 MiB 00:08:59.193 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_4091802 00:08:59.193 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:08:59.193 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:59.193 11:52:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:59.193 11:52:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 4091802 00:08:59.193 11:52:33 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 4091802 ']' 00:08:59.193 11:52:33 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 4091802 00:08:59.193 11:52:33 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:08:59.193 11:52:33 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:59.193 11:52:33 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4091802 00:08:59.193 11:52:33 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:59.193 11:52:33 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:59.193 11:52:33 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4091802' 00:08:59.193 killing process with pid 4091802 00:08:59.193 11:52:33 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 4091802 00:08:59.193 11:52:33 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 4091802 00:08:59.451 00:08:59.451 real 0m1.007s 00:08:59.451 user 0m0.932s 00:08:59.451 sys 0m0.408s 00:08:59.451 11:52:33 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.451 11:52:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:59.451 ************************************ 00:08:59.451 END TEST dpdk_mem_utility 00:08:59.451 ************************************ 00:08:59.451 11:52:33 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:08:59.451 11:52:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:59.451 11:52:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.451 11:52:33 -- common/autotest_common.sh@10 -- # set +x 00:08:59.451 ************************************ 00:08:59.451 START TEST event 00:08:59.451 ************************************ 00:08:59.451 11:52:33 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:08:59.709 * Looking for test storage... 
00:08:59.709 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:08:59.709 11:52:33 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:59.709 11:52:33 event -- common/autotest_common.sh@1711 -- # lcov --version 00:08:59.709 11:52:33 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:59.709 11:52:33 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:59.709 11:52:33 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:59.709 11:52:33 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:59.710 11:52:33 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:59.710 11:52:33 event -- scripts/common.sh@336 -- # IFS=.-: 00:08:59.710 11:52:33 event -- scripts/common.sh@336 -- # read -ra ver1 00:08:59.710 11:52:33 event -- scripts/common.sh@337 -- # IFS=.-: 00:08:59.710 11:52:33 event -- scripts/common.sh@337 -- # read -ra ver2 00:08:59.710 11:52:33 event -- scripts/common.sh@338 -- # local 'op=<' 00:08:59.710 11:52:33 event -- scripts/common.sh@340 -- # ver1_l=2 00:08:59.710 11:52:33 event -- scripts/common.sh@341 -- # ver2_l=1 00:08:59.710 11:52:33 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:59.710 11:52:33 event -- scripts/common.sh@344 -- # case "$op" in 00:08:59.710 11:52:33 event -- scripts/common.sh@345 -- # : 1 00:08:59.710 11:52:33 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:59.710 11:52:33 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:59.710 11:52:33 event -- scripts/common.sh@365 -- # decimal 1 00:08:59.710 11:52:33 event -- scripts/common.sh@353 -- # local d=1 00:08:59.710 11:52:33 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:59.710 11:52:33 event -- scripts/common.sh@355 -- # echo 1 00:08:59.710 11:52:33 event -- scripts/common.sh@365 -- # ver1[v]=1 00:08:59.710 11:52:33 event -- scripts/common.sh@366 -- # decimal 2 00:08:59.710 11:52:33 event -- scripts/common.sh@353 -- # local d=2 00:08:59.710 11:52:33 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:59.710 11:52:33 event -- scripts/common.sh@355 -- # echo 2 00:08:59.710 11:52:33 event -- scripts/common.sh@366 -- # ver2[v]=2 00:08:59.710 11:52:33 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:59.710 11:52:33 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:59.710 11:52:33 event -- scripts/common.sh@368 -- # return 0 00:08:59.710 11:52:33 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:59.710 11:52:33 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:59.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.710 --rc genhtml_branch_coverage=1 00:08:59.710 --rc genhtml_function_coverage=1 00:08:59.710 --rc genhtml_legend=1 00:08:59.710 --rc geninfo_all_blocks=1 00:08:59.710 --rc geninfo_unexecuted_blocks=1 00:08:59.710 00:08:59.710 ' 00:08:59.710 11:52:33 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:59.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.710 --rc genhtml_branch_coverage=1 00:08:59.710 --rc genhtml_function_coverage=1 00:08:59.710 --rc genhtml_legend=1 00:08:59.710 --rc geninfo_all_blocks=1 00:08:59.710 --rc geninfo_unexecuted_blocks=1 00:08:59.710 00:08:59.710 ' 00:08:59.710 11:52:33 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:59.710 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:59.710 --rc genhtml_branch_coverage=1 00:08:59.710 --rc genhtml_function_coverage=1 00:08:59.710 --rc genhtml_legend=1 00:08:59.710 --rc geninfo_all_blocks=1 00:08:59.710 --rc geninfo_unexecuted_blocks=1 00:08:59.710 00:08:59.710 ' 00:08:59.710 11:52:33 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:59.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.710 --rc genhtml_branch_coverage=1 00:08:59.710 --rc genhtml_function_coverage=1 00:08:59.710 --rc genhtml_legend=1 00:08:59.710 --rc geninfo_all_blocks=1 00:08:59.710 --rc geninfo_unexecuted_blocks=1 00:08:59.710 00:08:59.710 ' 00:08:59.710 11:52:33 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:08:59.710 11:52:33 event -- bdev/nbd_common.sh@6 -- # set -e 00:08:59.710 11:52:33 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:59.710 11:52:33 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:59.710 11:52:33 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.710 11:52:33 event -- common/autotest_common.sh@10 -- # set +x 00:08:59.710 ************************************ 00:08:59.710 START TEST event_perf 00:08:59.710 ************************************ 00:08:59.710 11:52:33 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:59.710 Running I/O for 1 seconds...[2024-12-05 11:52:33.822218] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:08:59.710 [2024-12-05 11:52:33.822288] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4092097 ] 00:08:59.710 [2024-12-05 11:52:33.900355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:59.967 [2024-12-05 11:52:33.944267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.967 [2024-12-05 11:52:33.944391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:59.967 [2024-12-05 11:52:33.944459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.967 [2024-12-05 11:52:33.944460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:01.062 Running I/O for 1 seconds... 00:09:01.062 lcore 0: 206235 00:09:01.062 lcore 1: 206234 00:09:01.062 lcore 2: 206233 00:09:01.062 lcore 3: 206234 00:09:01.062 done. 
00:09:01.062 00:09:01.062 real 0m1.186s 00:09:01.062 user 0m4.108s 00:09:01.062 sys 0m0.075s 00:09:01.062 11:52:34 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.062 11:52:34 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:09:01.062 ************************************ 00:09:01.062 END TEST event_perf 00:09:01.062 ************************************ 00:09:01.062 11:52:35 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:09:01.062 11:52:35 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:01.062 11:52:35 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:01.062 11:52:35 event -- common/autotest_common.sh@10 -- # set +x 00:09:01.062 ************************************ 00:09:01.062 START TEST event_reactor 00:09:01.062 ************************************ 00:09:01.062 11:52:35 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:09:01.062 [2024-12-05 11:52:35.082306] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:09:01.063 [2024-12-05 11:52:35.082366] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4092355 ] 00:09:01.063 [2024-12-05 11:52:35.163043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.063 [2024-12-05 11:52:35.204639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.435 test_start 00:09:02.435 oneshot 00:09:02.435 tick 100 00:09:02.435 tick 100 00:09:02.435 tick 250 00:09:02.435 tick 100 00:09:02.435 tick 100 00:09:02.435 tick 250 00:09:02.435 tick 100 00:09:02.435 tick 500 00:09:02.435 tick 100 00:09:02.435 tick 100 00:09:02.435 tick 250 00:09:02.435 tick 100 00:09:02.435 tick 100 00:09:02.435 test_end 00:09:02.435 00:09:02.435 real 0m1.183s 00:09:02.435 user 0m1.094s 00:09:02.435 sys 0m0.084s 00:09:02.435 11:52:36 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.435 11:52:36 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:09:02.435 ************************************ 00:09:02.435 END TEST event_reactor 00:09:02.435 ************************************ 00:09:02.435 11:52:36 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:02.435 11:52:36 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:02.435 11:52:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.435 11:52:36 event -- common/autotest_common.sh@10 -- # set +x 00:09:02.435 ************************************ 00:09:02.435 START TEST event_reactor_perf 00:09:02.435 ************************************ 00:09:02.435 11:52:36 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:09:02.435 [2024-12-05 11:52:36.339201] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:09:02.435 [2024-12-05 11:52:36.339273] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4092544 ] 00:09:02.435 [2024-12-05 11:52:36.417736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.435 [2024-12-05 11:52:36.457761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.368 test_start 00:09:03.368 test_end 00:09:03.368 Performance: 521007 events per second 00:09:03.368 00:09:03.368 real 0m1.178s 00:09:03.368 user 0m1.102s 00:09:03.368 sys 0m0.072s 00:09:03.368 11:52:37 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:03.368 11:52:37 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:09:03.368 ************************************ 00:09:03.368 END TEST event_reactor_perf 00:09:03.368 ************************************ 00:09:03.368 11:52:37 event -- event/event.sh@49 -- # uname -s 00:09:03.368 11:52:37 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:03.368 11:52:37 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:09:03.368 11:52:37 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:03.368 11:52:37 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:03.368 11:52:37 event -- common/autotest_common.sh@10 -- # set +x 00:09:03.626 ************************************ 00:09:03.626 START TEST event_scheduler 00:09:03.626 ************************************ 00:09:03.626 11:52:37 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:09:03.626 * Looking for test storage... 00:09:03.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:09:03.626 11:52:37 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:03.626 11:52:37 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:09:03.626 11:52:37 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:03.626 11:52:37 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:03.626 11:52:37 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:03.626 11:52:37 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:03.626 11:52:37 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:03.627 11:52:37 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:09:03.627 11:52:37 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:09:03.627 11:52:37 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:09:03.627 11:52:37 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:09:03.627 11:52:37 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:09:03.627 11:52:37 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:09:03.627 11:52:37 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:09:03.627 11:52:37 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:03.627 11:52:37 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:09:03.627 11:52:37 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:09:03.627 11:52:37 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:03.627 11:52:37 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:03.627 11:52:37 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:09:03.627 11:52:37 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:09:03.627 11:52:37 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:03.627 11:52:37 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:09:03.627 11:52:37 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:09:03.627 11:52:37 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:09:03.627 11:52:37 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:09:03.627 11:52:37 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:03.627 11:52:37 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:09:03.627 11:52:37 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:09:03.627 11:52:37 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:03.627 11:52:37 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:03.627 11:52:37 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:09:03.627 11:52:37 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:03.627 11:52:37 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:03.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.627 --rc genhtml_branch_coverage=1 00:09:03.627 --rc genhtml_function_coverage=1 00:09:03.627 --rc genhtml_legend=1 00:09:03.627 --rc geninfo_all_blocks=1 00:09:03.627 --rc geninfo_unexecuted_blocks=1 00:09:03.627 00:09:03.627 ' 00:09:03.627 11:52:37 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:03.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.627 --rc genhtml_branch_coverage=1 00:09:03.627 --rc genhtml_function_coverage=1 00:09:03.627 --rc 
genhtml_legend=1 00:09:03.627 --rc geninfo_all_blocks=1 00:09:03.627 --rc geninfo_unexecuted_blocks=1 00:09:03.627 00:09:03.627 ' 00:09:03.627 11:52:37 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:03.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.627 --rc genhtml_branch_coverage=1 00:09:03.627 --rc genhtml_function_coverage=1 00:09:03.627 --rc genhtml_legend=1 00:09:03.627 --rc geninfo_all_blocks=1 00:09:03.627 --rc geninfo_unexecuted_blocks=1 00:09:03.627 00:09:03.627 ' 00:09:03.627 11:52:37 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:03.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.627 --rc genhtml_branch_coverage=1 00:09:03.627 --rc genhtml_function_coverage=1 00:09:03.627 --rc genhtml_legend=1 00:09:03.627 --rc geninfo_all_blocks=1 00:09:03.627 --rc geninfo_unexecuted_blocks=1 00:09:03.627 00:09:03.627 ' 00:09:03.627 11:52:37 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:03.627 11:52:37 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=4092858 00:09:03.627 11:52:37 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:03.627 11:52:37 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:03.627 11:52:37 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 4092858 00:09:03.627 11:52:37 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 4092858 ']' 00:09:03.627 11:52:37 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.627 11:52:37 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:03.627 11:52:37 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.627 11:52:37 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:03.627 11:52:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:03.627 [2024-12-05 11:52:37.796266] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:09:03.627 [2024-12-05 11:52:37.796316] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4092858 ] 00:09:03.885 [2024-12-05 11:52:37.874335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:03.885 [2024-12-05 11:52:37.917333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.885 [2024-12-05 11:52:37.917453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.885 [2024-12-05 11:52:37.917489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:03.885 [2024-12-05 11:52:37.917489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:03.885 11:52:37 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.885 11:52:37 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:09:03.885 11:52:37 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:03.885 11:52:37 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.885 11:52:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:03.885 [2024-12-05 11:52:37.958167] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:09:03.885 [2024-12-05 11:52:37.958184] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:09:03.885 [2024-12-05 11:52:37.958194] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:09:03.885 [2024-12-05 11:52:37.958200] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:09:03.885 [2024-12-05 11:52:37.958205] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:09:03.885 11:52:37 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.885 11:52:37 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:03.885 11:52:37 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.885 11:52:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:03.885 [2024-12-05 11:52:38.033020] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:09:03.885 11:52:38 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.885 11:52:38 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:03.885 11:52:38 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:03.885 11:52:38 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:03.885 11:52:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:03.885 ************************************ 00:09:03.885 START TEST scheduler_create_thread 00:09:03.885 ************************************ 00:09:03.885 11:52:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:09:03.885 11:52:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:03.885 11:52:38 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.885 11:52:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:03.885 2 00:09:03.885 11:52:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.885 11:52:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:03.885 11:52:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.886 11:52:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:04.143 3 00:09:04.143 11:52:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.143 11:52:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:04.143 11:52:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.143 11:52:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:04.143 4 00:09:04.143 11:52:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.143 11:52:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:04.143 11:52:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.143 11:52:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:04.143 5 00:09:04.143 11:52:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.143 11:52:38 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:04.143 11:52:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.143 11:52:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:04.143 6 00:09:04.143 11:52:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.143 11:52:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:04.143 11:52:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.143 11:52:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:04.143 7 00:09:04.143 11:52:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.143 11:52:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:04.143 11:52:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.143 11:52:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:04.143 8 00:09:04.143 11:52:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.143 11:52:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:04.143 11:52:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.143 11:52:38 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:04.143 9 00:09:04.143 11:52:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.143 11:52:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:04.143 11:52:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.143 11:52:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:04.143 10 00:09:04.143 11:52:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.143 11:52:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:09:04.143 11:52:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.143 11:52:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:04.143 11:52:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.143 11:52:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:04.143 11:52:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:04.143 11:52:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.143 11:52:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:05.077 11:52:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.077 11:52:39 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:05.077 11:52:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.077 11:52:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:06.452 11:52:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.452 11:52:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:06.452 11:52:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:06.452 11:52:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.452 11:52:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:07.384 11:52:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.384 00:09:07.384 real 0m3.380s 00:09:07.384 user 0m0.024s 00:09:07.384 sys 0m0.006s 00:09:07.384 11:52:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.384 11:52:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:07.384 ************************************ 00:09:07.384 END TEST scheduler_create_thread 00:09:07.384 ************************************ 00:09:07.384 11:52:41 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:07.384 11:52:41 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 4092858 00:09:07.384 11:52:41 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 4092858 ']' 00:09:07.384 11:52:41 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 4092858 00:09:07.384 11:52:41 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:09:07.384 11:52:41 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:07.384 11:52:41 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4092858 00:09:07.384 11:52:41 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:09:07.384 11:52:41 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:09:07.384 11:52:41 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4092858' 00:09:07.384 killing process with pid 4092858 00:09:07.384 11:52:41 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 4092858 00:09:07.384 11:52:41 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 4092858 00:09:07.641 [2024-12-05 11:52:41.829231] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:09:07.899 00:09:07.899 real 0m4.467s 00:09:07.899 user 0m7.788s 00:09:07.899 sys 0m0.386s 00:09:07.899 11:52:42 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.899 11:52:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:07.899 ************************************ 00:09:07.899 END TEST event_scheduler 00:09:07.899 ************************************ 00:09:07.899 11:52:42 event -- event/event.sh@51 -- # modprobe -n nbd 00:09:07.899 11:52:42 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:07.899 11:52:42 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:07.899 11:52:42 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:07.899 11:52:42 event -- common/autotest_common.sh@10 -- # set +x 00:09:08.157 ************************************ 00:09:08.157 START TEST app_repeat 00:09:08.157 ************************************ 00:09:08.157 11:52:42 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:09:08.157 11:52:42 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:08.157 11:52:42 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:08.158 11:52:42 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:09:08.158 11:52:42 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:08.158 11:52:42 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:09:08.158 11:52:42 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:09:08.158 11:52:42 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:09:08.158 11:52:42 event.app_repeat -- event/event.sh@19 -- # repeat_pid=4093634 00:09:08.158 11:52:42 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:08.158 11:52:42 event.app_repeat -- event/event.sh@20 
-- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:08.158 11:52:42 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 4093634' 00:09:08.158 Process app_repeat pid: 4093634 00:09:08.158 11:52:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:08.158 11:52:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:08.158 spdk_app_start Round 0 00:09:08.158 11:52:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4093634 /var/tmp/spdk-nbd.sock 00:09:08.158 11:52:42 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 4093634 ']' 00:09:08.158 11:52:42 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:08.158 11:52:42 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:08.158 11:52:42 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:08.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:08.158 11:52:42 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:08.158 11:52:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:08.158 [2024-12-05 11:52:42.147445] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:09:08.158 [2024-12-05 11:52:42.147492] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4093634 ] 00:09:08.158 [2024-12-05 11:52:42.222686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:08.158 [2024-12-05 11:52:42.266490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:08.158 [2024-12-05 11:52:42.266492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.158 11:52:42 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.158 11:52:42 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:08.158 11:52:42 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:08.417 Malloc0 00:09:08.417 11:52:42 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:08.676 Malloc1 00:09:08.676 11:52:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:08.676 11:52:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:08.676 11:52:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:08.676 11:52:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:08.676 11:52:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:08.676 11:52:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:08.676 11:52:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:08.676 
11:52:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:08.677 11:52:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:08.677 11:52:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:08.677 11:52:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:08.677 11:52:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:08.677 11:52:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:08.677 11:52:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:08.677 11:52:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:08.677 11:52:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:08.935 /dev/nbd0 00:09:08.935 11:52:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:08.935 11:52:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:08.935 11:52:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:08.935 11:52:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:08.935 11:52:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:08.935 11:52:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:08.935 11:52:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:08.935 11:52:42 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:08.935 11:52:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:08.935 11:52:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:08.935 11:52:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:09:08.935 1+0 records in 00:09:08.935 1+0 records out 00:09:08.935 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000179428 s, 22.8 MB/s 00:09:08.935 11:52:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:08.935 11:52:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:08.935 11:52:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:08.935 11:52:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:08.935 11:52:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:08.935 11:52:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:08.935 11:52:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:08.935 11:52:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:09.193 /dev/nbd1 00:09:09.193 11:52:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:09.193 11:52:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:09.193 11:52:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:09.193 11:52:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:09.193 11:52:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:09.193 11:52:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:09.193 11:52:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:09.193 11:52:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:09.193 11:52:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:09.193 11:52:43 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:09.193 11:52:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:09.193 1+0 records in 00:09:09.193 1+0 records out 00:09:09.193 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244325 s, 16.8 MB/s 00:09:09.193 11:52:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:09.193 11:52:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:09.193 11:52:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:09.193 11:52:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:09.193 11:52:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:09.193 11:52:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:09.193 11:52:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:09.193 11:52:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:09.193 11:52:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:09.194 11:52:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:09.452 11:52:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:09.452 { 00:09:09.452 "nbd_device": "/dev/nbd0", 00:09:09.452 "bdev_name": "Malloc0" 00:09:09.452 }, 00:09:09.452 { 00:09:09.452 "nbd_device": "/dev/nbd1", 00:09:09.452 "bdev_name": "Malloc1" 00:09:09.452 } 00:09:09.452 ]' 00:09:09.452 11:52:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:09.452 { 00:09:09.452 "nbd_device": "/dev/nbd0", 00:09:09.452 "bdev_name": "Malloc0" 00:09:09.452 
}, 00:09:09.452 { 00:09:09.452 "nbd_device": "/dev/nbd1", 00:09:09.452 "bdev_name": "Malloc1" 00:09:09.452 } 00:09:09.452 ]' 00:09:09.452 11:52:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:09.452 11:52:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:09.452 /dev/nbd1' 00:09:09.452 11:52:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:09.452 /dev/nbd1' 00:09:09.452 11:52:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:09.452 11:52:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:09.452 11:52:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:09.452 11:52:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:09.452 11:52:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:09.452 11:52:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:09.452 11:52:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:09.452 11:52:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:09.452 11:52:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:09.452 11:52:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:09.452 11:52:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:09.452 11:52:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:09.452 256+0 records in 00:09:09.452 256+0 records out 00:09:09.452 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106524 s, 98.4 MB/s 00:09:09.452 11:52:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:09.452 11:52:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:09:09.452 256+0 records in
00:09:09.452 256+0 records out
00:09:09.452 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0133792 s, 78.4 MB/s
00:09:09.452 11:52:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:09.452 11:52:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:09:09.452 256+0 records in
00:09:09.452 256+0 records out
00:09:09.452 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149076 s, 70.3 MB/s
00:09:09.452 11:52:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:09:09.452 11:52:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:09.452 11:52:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:09.452 11:52:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:09:09.452 11:52:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:09:09.452 11:52:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:09:09.452 11:52:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:09:09.452 11:52:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:09.452 11:52:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:09:09.452 11:52:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:09.452 11:52:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:09:09.453 11:52:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:09:09.453 11:52:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:09:09.453 11:52:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:09.453 11:52:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:09.453 11:52:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:09:09.453 11:52:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:09:09.453 11:52:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:09.453 11:52:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:09:09.711 11:52:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:09:09.711 11:52:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:09:09.711 11:52:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:09:09.711 11:52:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:09.711 11:52:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:09.711 11:52:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:09:09.711 11:52:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:09:09.711 11:52:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:09:09.711 11:52:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:09.711 11:52:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:09:09.970 11:52:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:09:09.970 11:52:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:09:09.970 11:52:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:09:09.970 11:52:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:09.970 11:52:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:09.970 11:52:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:09:09.970 11:52:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:09:09.970 11:52:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:09:09.970 11:52:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:09.971 11:52:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:09.971 11:52:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:10.230 11:52:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:09:10.230 11:52:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:09:10.230 11:52:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:10.230 11:52:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:09:10.230 11:52:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:09:10.230 11:52:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:10.230 11:52:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:09:10.230 11:52:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:09:10.230 11:52:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:09:10.230 11:52:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:09:10.230 11:52:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:09:10.230 11:52:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:09:10.230 11:52:44 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:09:10.488 11:52:44 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:09:10.488 [2024-12-05 11:52:44.619199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:10.488 [2024-12-05 11:52:44.662466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:10.488 [2024-12-05 11:52:44.662467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:10.745 [2024-12-05 11:52:44.703211] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:09:10.745 [2024-12-05 11:52:44.703249] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:09:14.027 11:52:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:09:14.027 11:52:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:09:14.027 spdk_app_start Round 1
00:09:14.027 11:52:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4093634 /var/tmp/spdk-nbd.sock
00:09:14.027 11:52:47 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 4093634 ']'
00:09:14.027 11:52:47 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:09:14.027 11:52:47 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:14.027 11:52:47 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:09:14.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:09:14.027 11:52:47 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:14.027 11:52:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:09:14.027 11:52:47 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:14.027 11:52:47 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:09:14.027 11:52:47 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:09:14.027 Malloc0
00:09:14.027 11:52:47 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:09:14.027 Malloc1
00:09:14.027 11:52:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:09:14.027 11:52:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:14.027 11:52:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:14.027 11:52:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:09:14.027 11:52:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:14.027 11:52:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:09:14.027 11:52:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:09:14.027 11:52:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:14.027 11:52:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:14.027 11:52:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:09:14.027 11:52:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:14.027 11:52:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:09:14.027 11:52:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:09:14.027 11:52:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:09:14.027 11:52:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:14.027 11:52:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:09:14.286 /dev/nbd0
00:09:14.286 11:52:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:09:14.286 11:52:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:09:14.286 11:52:48 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:09:14.286 11:52:48 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:09:14.286 11:52:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:14.286 11:52:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:14.286 11:52:48 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:09:14.286 11:52:48 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:09:14.286 11:52:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:14.286 11:52:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:14.286 11:52:48 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:09:14.286 1+0 records in
00:09:14.286 1+0 records out
00:09:14.286 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000174666 s, 23.5 MB/s
00:09:14.286 11:52:48 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:09:14.286 11:52:48 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:09:14.286 11:52:48 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:09:14.286 11:52:48 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:14.286 11:52:48 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:09:14.286 11:52:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:14.286 11:52:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:14.286 11:52:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:09:14.545 /dev/nbd1
00:09:14.545 11:52:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:09:14.545 11:52:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:09:14.545 11:52:48 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:09:14.545 11:52:48 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:09:14.545 11:52:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:14.545 11:52:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:14.545 11:52:48 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:09:14.545 11:52:48 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:09:14.545 11:52:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:14.545 11:52:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:14.545 11:52:48 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:09:14.545 1+0 records in
00:09:14.545 1+0 records out
00:09:14.545 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00020545 s, 19.9 MB/s
00:09:14.545 11:52:48 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:09:14.545 11:52:48 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:09:14.545 11:52:48 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:09:14.545 11:52:48 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:14.545 11:52:48 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:09:14.545 11:52:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:14.545 11:52:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:14.545 11:52:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:14.545 11:52:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:14.545 11:52:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:14.805 11:52:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:09:14.805 {
00:09:14.805 "nbd_device": "/dev/nbd0",
00:09:14.805 "bdev_name": "Malloc0"
00:09:14.805 },
00:09:14.805 {
00:09:14.805 "nbd_device": "/dev/nbd1",
00:09:14.805 "bdev_name": "Malloc1"
00:09:14.805 }
00:09:14.805 ]'
00:09:14.805 11:52:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:09:14.805 {
00:09:14.805 "nbd_device": "/dev/nbd0",
00:09:14.805 "bdev_name": "Malloc0"
00:09:14.805 },
00:09:14.805 {
00:09:14.805 "nbd_device": "/dev/nbd1",
00:09:14.805 "bdev_name": "Malloc1"
00:09:14.805 }
00:09:14.805 ]'
00:09:14.805 11:52:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:14.805 11:52:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:09:14.805 /dev/nbd1'
00:09:14.805 11:52:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:09:14.805 /dev/nbd1'
00:09:14.805 11:52:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:14.805 11:52:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:09:14.805 11:52:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:09:14.805 11:52:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:09:14.805 11:52:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:09:14.805 11:52:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:09:14.805 11:52:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:14.805 11:52:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:14.805 11:52:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:09:14.805 11:52:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:09:14.805 11:52:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:09:14.805 11:52:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:09:14.805 256+0 records in
00:09:14.805 256+0 records out
00:09:14.805 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105907 s, 99.0 MB/s
00:09:14.805 11:52:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:14.805 11:52:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:09:14.805 256+0 records in
00:09:14.805 256+0 records out
00:09:14.805 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144311 s, 72.7 MB/s
00:09:14.805 11:52:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:14.805 11:52:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:09:14.805 256+0 records in
00:09:14.805 256+0 records out
00:09:14.805 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01492 s, 70.3 MB/s
00:09:14.805 11:52:48 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:09:14.805 11:52:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:14.805 11:52:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:14.805 11:52:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:09:14.805 11:52:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:09:14.805 11:52:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:09:14.805 11:52:48 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:09:14.805 11:52:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:14.805 11:52:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:09:14.805 11:52:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:14.806 11:52:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:09:14.806 11:52:48 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:09:14.806 11:52:48 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:09:14.806 11:52:48 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:14.806 11:52:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:14.806 11:52:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:09:14.806 11:52:48 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:09:14.806 11:52:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:14.806 11:52:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:09:15.064 11:52:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:09:15.064 11:52:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:09:15.064 11:52:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:09:15.064 11:52:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:15.064 11:52:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:15.064 11:52:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:09:15.064 11:52:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:09:15.064 11:52:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:09:15.064 11:52:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:15.064 11:52:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:09:15.323 11:52:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:09:15.323 11:52:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:09:15.323 11:52:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:09:15.323 11:52:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:15.323 11:52:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:15.323 11:52:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:09:15.323 11:52:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:09:15.323 11:52:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:09:15.323 11:52:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:15.323 11:52:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:15.323 11:52:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:15.323 11:52:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:09:15.323 11:52:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:09:15.323 11:52:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:15.581 11:52:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:09:15.581 11:52:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:09:15.581 11:52:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:15.581 11:52:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:09:15.581 11:52:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:09:15.581 11:52:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:09:15.581 11:52:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:09:15.581 11:52:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:09:15.581 11:52:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:09:15.581 11:52:49 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:09:15.581 11:52:49 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:09:15.840 [2024-12-05 11:52:49.901432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:15.840 [2024-12-05 11:52:49.938844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:15.840 [2024-12-05 11:52:49.938845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:15.840 [2024-12-05 11:52:49.980150] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:09:15.840 [2024-12-05 11:52:49.980188] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:09:19.121 11:52:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:09:19.121 11:52:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:09:19.121 spdk_app_start Round 2
00:09:19.121 11:52:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4093634 /var/tmp/spdk-nbd.sock
00:09:19.121 11:52:52 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 4093634 ']'
00:09:19.121 11:52:52 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:09:19.121 11:52:52 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:19.121 11:52:52 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:09:19.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:09:19.121 11:52:52 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:19.121 11:52:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:09:19.121 11:52:52 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:19.121 11:52:52 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:09:19.121 11:52:52 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:09:19.121 Malloc0
00:09:19.121 11:52:53 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:09:19.379 Malloc1
00:09:19.379 11:52:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:09:19.379 11:52:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:19.379 11:52:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:19.379 11:52:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:09:19.379 11:52:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:19.379 11:52:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:09:19.379 11:52:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:09:19.379 11:52:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:19.379 11:52:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:19.379 11:52:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:09:19.379 11:52:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:19.379 11:52:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:09:19.379 11:52:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:09:19.379 11:52:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:09:19.379 11:52:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:19.379 11:52:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:09:19.379 /dev/nbd0
00:09:19.638 11:52:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:09:19.638 11:52:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:09:19.638 11:52:53 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:09:19.638 11:52:53 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:09:19.638 11:52:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:19.638 11:52:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:19.638 11:52:53 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:09:19.638 11:52:53 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:09:19.638 11:52:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:19.638 11:52:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:19.638 11:52:53 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:09:19.638 1+0 records in
00:09:19.638 1+0 records out
00:09:19.638 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000205677 s, 19.9 MB/s
00:09:19.638 11:52:53 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:09:19.638 11:52:53 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:09:19.638 11:52:53 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:09:19.638 11:52:53 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:19.638 11:52:53 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:09:19.638 11:52:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:19.638 11:52:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:19.638 11:52:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:09:19.638 /dev/nbd1
00:09:19.638 11:52:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:09:19.638 11:52:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:09:19.638 11:52:53 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:09:19.638 11:52:53 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:09:19.638 11:52:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:19.638 11:52:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:19.638 11:52:53 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:09:19.638 11:52:53 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:09:19.638 11:52:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:19.638 11:52:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:19.638 11:52:53 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:09:19.638 1+0 records in
00:09:19.638 1+0 records out
00:09:19.639 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023116 s, 17.7 MB/s
00:09:19.899 11:52:53 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:09:19.899 11:52:53 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:09:19.899 11:52:53 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:09:19.899 11:52:53 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:19.899 11:52:53 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:09:19.899 11:52:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:19.899 11:52:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:19.899 11:52:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:19.899 11:52:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:19.899 11:52:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:19.899 11:52:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:09:19.899 {
00:09:19.899 "nbd_device": "/dev/nbd0",
00:09:19.899 "bdev_name": "Malloc0"
00:09:19.899 },
00:09:19.899 {
00:09:19.899 "nbd_device": "/dev/nbd1",
00:09:19.899 "bdev_name": "Malloc1"
00:09:19.899 }
00:09:19.899 ]'
00:09:19.899 11:52:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:09:19.899 {
00:09:19.899 "nbd_device": "/dev/nbd0",
00:09:19.899 "bdev_name": "Malloc0"
00:09:19.899 },
00:09:19.899 {
00:09:19.899 "nbd_device": "/dev/nbd1",
00:09:19.899 "bdev_name": "Malloc1"
00:09:19.899 }
00:09:19.899 ]'
00:09:19.899 11:52:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:19.899 11:52:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:09:19.899 /dev/nbd1'
00:09:19.899 11:52:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:09:19.899 /dev/nbd1'
00:09:19.899 11:52:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:19.899 11:52:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:09:19.899 11:52:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:09:19.899 11:52:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:09:19.899 11:52:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:09:19.899 11:52:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:09:20.158 11:52:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:20.158 11:52:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:20.158 11:52:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:09:20.158 11:52:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:09:20.158 11:52:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:09:20.158 11:52:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:09:20.158 256+0 records in
00:09:20.158 256+0 records out
00:09:20.158 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101632 s, 103 MB/s
00:09:20.158 11:52:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:20.158 11:52:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:09:20.158 256+0 records in
00:09:20.158 256+0 records out
00:09:20.158 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135793 s, 77.2 MB/s
00:09:20.158 11:52:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:20.158 11:52:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:09:20.158 256+0 records in
00:09:20.158 256+0 records out
00:09:20.158 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145548 s, 72.0 MB/s
00:09:20.158 11:52:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:09:20.158 11:52:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:20.158 11:52:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:20.158 11:52:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:09:20.158 11:52:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:09:20.158 11:52:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:09:20.158 11:52:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:09:20.158 11:52:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:20.158 11:52:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:09:20.158 11:52:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:20.158 11:52:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:09:20.158 11:52:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:09:20.158 11:52:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:09:20.158 11:52:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:20.158 11:52:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:20.158 11:52:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:09:20.158 11:52:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:09:20.158 11:52:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:20.158 11:52:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:09:20.417 11:52:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:09:20.417 11:52:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:09:20.417 11:52:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:09:20.417 11:52:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:20.417 11:52:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:20.417 11:52:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:09:20.417 11:52:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:09:20.417 11:52:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:09:20.417 11:52:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:20.417 11:52:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:09:20.417 11:52:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:09:20.417 11:52:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:09:20.417 11:52:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:09:20.417 11:52:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:20.417 11:52:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:20.417 11:52:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:09:20.417 11:52:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:09:20.417 11:52:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:09:20.417 11:52:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:20.417 11:52:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:20.417 11:52:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:20.675 11:52:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:09:20.675 11:52:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:09:20.675 11:52:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:20.675 11:52:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:09:20.675 11:52:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:20.675 11:52:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:09:20.675 11:52:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:09:20.675 11:52:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:09:20.675 11:52:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:09:20.675 11:52:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:09:20.675 11:52:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:09:20.675 11:52:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:09:20.675 11:52:54 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:09:20.935 11:52:55 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:09:21.193 [2024-12-05 11:52:55.194983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:21.193 [2024-12-05 11:52:55.231507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:21.193 [2024-12-05 11:52:55.231508]
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.193 [2024-12-05 11:52:55.272106] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:21.193 [2024-12-05 11:52:55.272144] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:24.481 11:52:58 event.app_repeat -- event/event.sh@38 -- # waitforlisten 4093634 /var/tmp/spdk-nbd.sock 00:09:24.481 11:52:58 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 4093634 ']' 00:09:24.481 11:52:58 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:24.481 11:52:58 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:24.481 11:52:58 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:24.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
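The nbd_dd_data_verify trace above writes 1 MiB of random data (256 × 4 KiB blocks) through dd to each /dev/nbd* device, then verifies every device byte-for-byte with cmp against the same source file. A minimal standalone sketch of that write-then-verify pattern, with plain temp files standing in for the nbd devices so it runs without an SPDK target:

```shell
#!/usr/bin/env bash
# Sketch of the write-then-verify pattern from nbd_dd_common.sh's
# nbd_dd_data_verify: fill a temp file with random data, copy it to
# each target, then compare byte-for-byte with cmp. Plain files stand
# in here for /dev/nbd0 and /dev/nbd1.
tmp_file=$(mktemp)
targets=("$(mktemp)" "$(mktemp)")   # stand-ins for /dev/nbd0 /dev/nbd1

# 256 blocks of 4 KiB random data, as in the trace (1 MiB total)
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 status=none

for dev in "${targets[@]}"; do
    # a real nbd device would use oflag=direct; plain files do not need it
    dd if="$tmp_file" of="$dev" bs=4096 count=256 status=none
done

for dev in "${targets[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"   # non-zero exit on first mismatch
done

rm -f "$tmp_file" "${targets[@]}"
echo "verify OK"
```

As in the trace, the source file is deleted once every target has been compared; any mismatch makes cmp exit non-zero before the cleanup runs.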
00:09:24.481 11:52:58 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:24.481 11:52:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:24.481 11:52:58 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:24.481 11:52:58 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:24.481 11:52:58 event.app_repeat -- event/event.sh@39 -- # killprocess 4093634 00:09:24.481 11:52:58 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 4093634 ']' 00:09:24.481 11:52:58 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 4093634 00:09:24.481 11:52:58 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:09:24.481 11:52:58 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:24.481 11:52:58 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4093634 00:09:24.481 11:52:58 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:24.481 11:52:58 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:24.481 11:52:58 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4093634' 00:09:24.481 killing process with pid 4093634 00:09:24.481 11:52:58 event.app_repeat -- common/autotest_common.sh@973 -- # kill 4093634 00:09:24.481 11:52:58 event.app_repeat -- common/autotest_common.sh@978 -- # wait 4093634 00:09:24.481 spdk_app_start is called in Round 0. 00:09:24.481 Shutdown signal received, stop current app iteration 00:09:24.481 Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 reinitialization... 00:09:24.481 spdk_app_start is called in Round 1. 00:09:24.481 Shutdown signal received, stop current app iteration 00:09:24.481 Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 reinitialization... 00:09:24.481 spdk_app_start is called in Round 2. 
00:09:24.481 Shutdown signal received, stop current app iteration 00:09:24.481 Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 reinitialization... 00:09:24.481 spdk_app_start is called in Round 3. 00:09:24.481 Shutdown signal received, stop current app iteration 00:09:24.481 11:52:58 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:24.481 11:52:58 event.app_repeat -- event/event.sh@42 -- # return 0 00:09:24.481 00:09:24.481 real 0m16.342s 00:09:24.481 user 0m35.877s 00:09:24.481 sys 0m2.509s 00:09:24.481 11:52:58 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.481 11:52:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:24.481 ************************************ 00:09:24.481 END TEST app_repeat 00:09:24.481 ************************************ 00:09:24.481 11:52:58 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:24.481 11:52:58 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:09:24.481 11:52:58 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:24.481 11:52:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.482 11:52:58 event -- common/autotest_common.sh@10 -- # set +x 00:09:24.482 ************************************ 00:09:24.482 START TEST cpu_locks 00:09:24.482 ************************************ 00:09:24.482 11:52:58 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:09:24.482 * Looking for test storage... 
00:09:24.482 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:09:24.482 11:52:58 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:24.482 11:52:58 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:09:24.482 11:52:58 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:24.742 11:52:58 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:24.742 11:52:58 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:24.742 11:52:58 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:24.742 11:52:58 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:24.742 11:52:58 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.742 11:52:58 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:09:24.742 11:52:58 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:09:24.742 11:52:58 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:09:24.742 11:52:58 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:09:24.742 11:52:58 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:09:24.742 11:52:58 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:09:24.742 11:52:58 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:24.742 11:52:58 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:09:24.742 11:52:58 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:09:24.742 11:52:58 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:24.742 11:52:58 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:24.742 11:52:58 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:09:24.742 11:52:58 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:09:24.742 11:52:58 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.742 11:52:58 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:09:24.742 11:52:58 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:09:24.742 11:52:58 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:09:24.742 11:52:58 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:09:24.742 11:52:58 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.742 11:52:58 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:09:24.742 11:52:58 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:09:24.742 11:52:58 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:24.742 11:52:58 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:24.742 11:52:58 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:09:24.742 11:52:58 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.742 11:52:58 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:24.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.743 --rc genhtml_branch_coverage=1 00:09:24.743 --rc genhtml_function_coverage=1 00:09:24.743 --rc genhtml_legend=1 00:09:24.743 --rc geninfo_all_blocks=1 00:09:24.743 --rc geninfo_unexecuted_blocks=1 00:09:24.743 00:09:24.743 ' 00:09:24.743 11:52:58 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:24.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.743 --rc genhtml_branch_coverage=1 00:09:24.743 --rc genhtml_function_coverage=1 00:09:24.743 --rc genhtml_legend=1 00:09:24.743 --rc geninfo_all_blocks=1 00:09:24.743 --rc geninfo_unexecuted_blocks=1 
00:09:24.743 00:09:24.743 ' 00:09:24.743 11:52:58 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:24.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.743 --rc genhtml_branch_coverage=1 00:09:24.743 --rc genhtml_function_coverage=1 00:09:24.743 --rc genhtml_legend=1 00:09:24.743 --rc geninfo_all_blocks=1 00:09:24.743 --rc geninfo_unexecuted_blocks=1 00:09:24.743 00:09:24.743 ' 00:09:24.743 11:52:58 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:24.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.743 --rc genhtml_branch_coverage=1 00:09:24.743 --rc genhtml_function_coverage=1 00:09:24.743 --rc genhtml_legend=1 00:09:24.743 --rc geninfo_all_blocks=1 00:09:24.743 --rc geninfo_unexecuted_blocks=1 00:09:24.743 00:09:24.743 ' 00:09:24.743 11:52:58 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:24.743 11:52:58 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:24.743 11:52:58 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:24.743 11:52:58 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:24.743 11:52:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:24.743 11:52:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.743 11:52:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:24.743 ************************************ 00:09:24.743 START TEST default_locks 00:09:24.743 ************************************ 00:09:24.743 11:52:58 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:09:24.743 11:52:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=4096634 00:09:24.743 11:52:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 4096634 00:09:24.743 11:52:58 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:24.743 11:52:58 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 4096634 ']' 00:09:24.743 11:52:58 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.743 11:52:58 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:24.743 11:52:58 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.743 11:52:58 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:24.743 11:52:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:24.743 [2024-12-05 11:52:58.788702] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
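The scripts/common.sh trace earlier in this section (`lt 1.15 2` via cmp_versions) splits dotted version strings into arrays and compares them field by field to decide which lcov options apply. A standalone sketch of that comparison; `ver_lt` is a hypothetical helper name, not the script's own:

```shell
# Sketch of the dotted-version comparison seen in the scripts/common.sh
# trace: split each version on '.' or '-', pad the shorter one with
# zeros, and compare numerically field by field.
ver_lt() {   # hypothetical helper; returns 0 (true) iff $1 < $2
    local -a v1 v2
    IFS='.-' read -ra v1 <<< "$1"
    IFS='.-' read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # missing fields count as 0
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1   # equal is not less-than
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

This matches the trace's outcome: lcov 1.15 sorts below 2, so the older option spelling is selected.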
00:09:24.743 [2024-12-05 11:52:58.788746] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4096634 ] 00:09:24.743 [2024-12-05 11:52:58.860685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.743 [2024-12-05 11:52:58.900243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.003 11:52:59 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:25.003 11:52:59 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:09:25.003 11:52:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 4096634 00:09:25.003 11:52:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 4096634 00:09:25.003 11:52:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:25.571 lslocks: write error 00:09:25.571 11:52:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 4096634 00:09:25.571 11:52:59 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 4096634 ']' 00:09:25.571 11:52:59 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 4096634 00:09:25.571 11:52:59 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:09:25.571 11:52:59 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:25.571 11:52:59 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4096634 00:09:25.571 11:52:59 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:25.571 11:52:59 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:25.571 11:52:59 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 4096634' 00:09:25.571 killing process with pid 4096634 00:09:25.571 11:52:59 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 4096634 00:09:25.571 11:52:59 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 4096634 00:09:25.830 11:52:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 4096634 00:09:25.830 11:52:59 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:09:25.830 11:52:59 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 4096634 00:09:25.830 11:52:59 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:25.830 11:52:59 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:25.830 11:52:59 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:25.830 11:52:59 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:25.830 11:52:59 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 4096634 00:09:25.830 11:52:59 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 4096634 ']' 00:09:25.830 11:52:59 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.830 11:52:59 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:25.830 11:52:59 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
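The killprocess path above resolves the target's command name with ps, guards against killing sudo, sends the signal, and waits; the NOT-waitforlisten case that follows then confirms the pid is really gone ("No such process"). A minimal sketch of that kill-and-probe pattern, with a background sleep standing in for the spdk_tgt reactor process:

```shell
# Sketch of the killprocess / liveness-probe pattern exercised above:
# kill a process by pid, reap it, then confirm kill -0 (an existence
# probe that sends no signal) now fails. The sleep is a stand-in for
# the spdk_tgt process the real test tears down.
sleep 60 &
pid=$!

# refuse to kill anything named sudo, mirroring the trace's guard
pname=$(ps --no-headers -o comm= "$pid")
[ "$pname" != "sudo" ] || exit 1

kill "$pid"
wait "$pid" 2>/dev/null || true   # reap; exit status reflects the signal

if kill -0 "$pid" 2>/dev/null; then
    echo "process $pid still running"
else
    echo "process $pid is gone"
fi
```

The wait is what makes the follow-up probe deterministic: without reaping, kill -0 could still succeed against a zombie.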
00:09:25.830 11:52:59 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:25.830 11:52:59 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:25.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (4096634) - No such process 00:09:25.830 ERROR: process (pid: 4096634) is no longer running 00:09:25.830 11:52:59 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:25.830 11:52:59 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:09:25.830 11:52:59 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:09:25.831 11:52:59 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:25.831 11:52:59 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:25.831 11:52:59 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:25.831 11:52:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:09:25.831 11:52:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:25.831 11:52:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:09:25.831 11:52:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:25.831 00:09:25.831 real 0m1.195s 00:09:25.831 user 0m1.162s 00:09:25.831 sys 0m0.544s 00:09:25.831 11:52:59 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.831 11:52:59 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:25.831 ************************************ 00:09:25.831 END TEST default_locks 00:09:25.831 ************************************ 00:09:25.831 11:52:59 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:25.831 11:52:59 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:25.831 11:52:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.831 11:52:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:25.831 ************************************ 00:09:25.831 START TEST default_locks_via_rpc 00:09:25.831 ************************************ 00:09:25.831 11:52:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:09:25.831 11:52:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=4096893 00:09:25.831 11:52:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 4096893 00:09:25.831 11:52:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:25.831 11:53:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 4096893 ']' 00:09:25.831 11:53:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.831 11:53:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:25.831 11:53:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.831 11:53:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:25.831 11:53:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:26.090 [2024-12-05 11:53:00.054534] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:09:26.090 [2024-12-05 11:53:00.054587] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4096893 ] 00:09:26.090 [2024-12-05 11:53:00.130779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.090 [2024-12-05 11:53:00.172327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.026 11:53:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:27.026 11:53:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:27.026 11:53:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:27.026 11:53:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.026 11:53:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.026 11:53:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.026 11:53:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:09:27.026 11:53:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:27.026 11:53:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:09:27.026 11:53:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:27.026 11:53:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:27.026 11:53:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.026 11:53:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.026 11:53:00 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.026 11:53:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 4096893 00:09:27.026 11:53:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 4096893 00:09:27.026 11:53:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:27.285 11:53:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 4096893 00:09:27.285 11:53:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 4096893 ']' 00:09:27.285 11:53:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 4096893 00:09:27.285 11:53:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:09:27.285 11:53:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:27.285 11:53:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4096893 00:09:27.285 11:53:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:27.285 11:53:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:27.286 11:53:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4096893' 00:09:27.286 killing process with pid 4096893 00:09:27.286 11:53:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 4096893 00:09:27.286 11:53:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 4096893 00:09:27.544 00:09:27.544 real 0m1.682s 00:09:27.544 user 0m1.778s 00:09:27.544 sys 0m0.559s 00:09:27.544 11:53:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.544 11:53:01 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.544 ************************************ 00:09:27.544 END TEST default_locks_via_rpc 00:09:27.544 ************************************ 00:09:27.544 11:53:01 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:27.544 11:53:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:27.544 11:53:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.544 11:53:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:27.803 ************************************ 00:09:27.803 START TEST non_locking_app_on_locked_coremask 00:09:27.803 ************************************ 00:09:27.803 11:53:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:09:27.803 11:53:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=4097155 00:09:27.803 11:53:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 4097155 /var/tmp/spdk.sock 00:09:27.803 11:53:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:27.803 11:53:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 4097155 ']' 00:09:27.803 11:53:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.803 11:53:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:27.803 11:53:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:09:27.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.803 11:53:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:27.803 11:53:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:27.803 [2024-12-05 11:53:01.802867] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:09:27.803 [2024-12-05 11:53:01.802908] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4097155 ] 00:09:27.803 [2024-12-05 11:53:01.876316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.803 [2024-12-05 11:53:01.917963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.063 11:53:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:28.063 11:53:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:28.063 11:53:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=4097166 00:09:28.063 11:53:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 4097166 /var/tmp/spdk2.sock 00:09:28.063 11:53:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:28.063 11:53:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 4097166 ']' 00:09:28.063 11:53:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:09:28.063 11:53:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:28.063 11:53:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:28.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:28.063 11:53:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:28.063 11:53:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:28.063 [2024-12-05 11:53:02.180453] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:09:28.063 [2024-12-05 11:53:02.180502] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4097166 ] 00:09:28.322 [2024-12-05 11:53:02.272471] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
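The second spdk_tgt above is launched with --disable-cpumask-locks, and the surrounding locks_exist helper decides pass/fail by grepping lslocks output for the per-core spdk_cpu_lock files. A sketch of that probe, using flock on a temp file (hypothetical path) in place of SPDK's real lock files:

```shell
# Sketch of the locks_exist check from cpu_locks.sh: list the file
# locks held by a pid with lslocks and grep for the lock-file name.
# flock on a temp file stands in for SPDK's per-core lock files.
lockfile=$(mktemp /tmp/spdk_cpu_lock.XXXXXX)   # hypothetical path

# hold the lock in a background subshell for a few seconds
(
    exec 9>"$lockfile"
    flock 9
    sleep 5    # inherits fd 9, so the lock is held while it runs
) &
holder=$!
sleep 1        # give the holder time to acquire the lock

if lslocks -p "$holder" | grep -q spdk_cpu_lock; then
    echo "lock held by pid $holder"
fi

kill "$holder" 2>/dev/null
wait "$holder" 2>/dev/null || true
rm -f "$lockfile"
```

With --disable-cpumask-locks the grep finds nothing, which is exactly the outcome the non_locking_app_on_locked_coremask test asserts for the second instance.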
00:09:28.322 [2024-12-05 11:53:02.272501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:28.322 [2024-12-05 11:53:02.360548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:28.889 11:53:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:28.889 11:53:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:09:28.889 11:53:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 4097155
00:09:28.889 11:53:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4097155
00:09:28.889 11:53:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:09:29.148 lslocks: write error
00:09:29.148 11:53:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 4097155
00:09:29.148 11:53:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 4097155 ']'
00:09:29.148 11:53:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 4097155
00:09:29.148 11:53:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:09:29.148 11:53:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:29.148 11:53:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4097155
00:09:29.148 11:53:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:29.148 11:53:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:29.149 11:53:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4097155'
00:09:29.149 killing process with pid 4097155
00:09:29.149 11:53:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 4097155
00:09:29.149 11:53:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 4097155
00:09:29.715 11:53:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 4097166
00:09:29.715 11:53:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 4097166 ']'
00:09:29.715 11:53:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 4097166
00:09:29.715 11:53:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:09:29.715 11:53:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:29.715 11:53:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4097166
00:09:29.973 11:53:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:29.973 11:53:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:29.973 11:53:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4097166'
00:09:29.973 killing process with pid 4097166
00:09:29.973 11:53:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 4097166
00:09:29.973 11:53:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 4097166
00:09:30.232 
00:09:30.232 real 0m2.509s
00:09:30.232 user 0m2.631s
00:09:30.232 sys 0m0.801s
00:09:30.232 11:53:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:30.232 11:53:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:30.232 ************************************
00:09:30.232 END TEST non_locking_app_on_locked_coremask
00:09:30.232 ************************************
00:09:30.232 11:53:04 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:09:30.232 11:53:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:30.232 11:53:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:30.232 11:53:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:30.232 ************************************
00:09:30.232 START TEST locking_app_on_unlocked_coremask
00:09:30.232 ************************************
00:09:30.232 11:53:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:09:30.232 11:53:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=4097654
00:09:30.232 11:53:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 4097654 /var/tmp/spdk.sock
00:09:30.232 11:53:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:09:30.232 11:53:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 4097654 ']'
00:09:30.232 11:53:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:30.232 11:53:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:30.232 11:53:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:30.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:30.232 11:53:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:30.232 11:53:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:30.232 [2024-12-05 11:53:04.380664] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization...
00:09:30.232 [2024-12-05 11:53:04.380702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4097654 ]
00:09:30.490 [2024-12-05 11:53:04.454831] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:09:30.490 [2024-12-05 11:53:04.454858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:30.490 [2024-12-05 11:53:04.496606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:30.750 11:53:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:30.750 11:53:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:09:30.750 11:53:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=4097663
00:09:30.750 11:53:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 4097663 /var/tmp/spdk2.sock
00:09:30.750 11:53:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:09:30.750 11:53:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 4097663 ']'
00:09:30.750 11:53:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:30.750 11:53:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:30.750 11:53:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:09:30.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:30.750 11:53:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:30.750 11:53:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:30.750 [2024-12-05 11:53:04.763294] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization...
00:09:30.750 [2024-12-05 11:53:04.763347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4097663 ]
00:09:30.750 [2024-12-05 11:53:04.847629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:30.750 [2024-12-05 11:53:04.928178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:31.682 11:53:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:31.682 11:53:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:09:31.682 11:53:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 4097663
00:09:31.682 11:53:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4097663
00:09:31.682 11:53:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:09:32.248 lslocks: write error
00:09:32.248 11:53:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 4097654
00:09:32.248 11:53:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 4097654 ']'
00:09:32.248 11:53:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 4097654
00:09:32.248 11:53:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:09:32.248 11:53:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:32.248 11:53:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4097654
00:09:32.248 11:53:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:32.248 11:53:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:32.248 11:53:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4097654'
00:09:32.248 killing process with pid 4097654
00:09:32.248 11:53:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 4097654
00:09:32.248 11:53:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 4097654
00:09:32.815 11:53:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 4097663
00:09:32.815 11:53:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 4097663 ']'
00:09:32.815 11:53:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 4097663
00:09:32.815 11:53:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:09:32.815 11:53:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:32.815 11:53:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4097663
00:09:32.815 11:53:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:32.815 11:53:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:32.815 11:53:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4097663'
00:09:32.815 killing process with pid 4097663
00:09:32.815 11:53:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 4097663
00:09:32.815 11:53:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 4097663
00:09:33.074 
00:09:33.074 real 0m2.844s
00:09:33.074 user 0m2.990s
00:09:33.074 sys 0m0.942s
00:09:33.074 11:53:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:33.074 11:53:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:33.074 ************************************
00:09:33.074 END TEST locking_app_on_unlocked_coremask
00:09:33.074 ************************************
00:09:33.074 11:53:07 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:09:33.074 11:53:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:33.074 11:53:07 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:33.074 11:53:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:33.074 ************************************
00:09:33.074 START TEST locking_app_on_locked_coremask
00:09:33.074 ************************************
00:09:33.074 11:53:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:09:33.074 11:53:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=4098151
00:09:33.074 11:53:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 4098151 /var/tmp/spdk.sock
00:09:33.074 11:53:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:09:33.074 11:53:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 4098151 ']'
00:09:33.074 11:53:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:33.074 11:53:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:33.074 11:53:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:33.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:33.074 11:53:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:33.074 11:53:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:33.332 [2024-12-05 11:53:07.294181] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization...
00:09:33.332 [2024-12-05 11:53:07.294222] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4098151 ]
00:09:33.332 [2024-12-05 11:53:07.370433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:33.332 [2024-12-05 11:53:07.412111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:33.591 11:53:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:33.591 11:53:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:09:33.591 11:53:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=4098157
00:09:33.591 11:53:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 4098157 /var/tmp/spdk2.sock
00:09:33.591 11:53:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:09:33.591 11:53:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:09:33.591 11:53:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 4098157 /var/tmp/spdk2.sock
00:09:33.591 11:53:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:09:33.591 11:53:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:33.591 11:53:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:09:33.591 11:53:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:33.591 11:53:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 4098157 /var/tmp/spdk2.sock
00:09:33.591 11:53:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 4098157 ']'
00:09:33.591 11:53:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:33.591 11:53:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:33.591 11:53:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:09:33.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:33.591 11:53:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:33.591 11:53:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:33.591 [2024-12-05 11:53:07.677388] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization...
00:09:33.591 [2024-12-05 11:53:07.677430] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4098157 ]
00:09:33.591 [2024-12-05 11:53:07.769631] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 4098151 has claimed it.
00:09:33.591 [2024-12-05 11:53:07.769674] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:09:34.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (4098157) - No such process
00:09:34.156 ERROR: process (pid: 4098157) is no longer running
00:09:34.156 11:53:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:34.156 11:53:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:09:34.156 11:53:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:09:34.156 11:53:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:34.156 11:53:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:34.156 11:53:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:34.156 11:53:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 4098151
00:09:34.156 11:53:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4098151
11:53:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:09:34.722 lslocks: write error
00:09:34.722 11:53:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 4098151
00:09:34.722 11:53:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 4098151 ']'
00:09:34.722 11:53:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 4098151
00:09:34.722 11:53:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:09:34.722 11:53:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:34.722 11:53:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4098151
00:09:34.982 11:53:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:34.982 11:53:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:34.982 11:53:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4098151'
00:09:34.982 killing process with pid 4098151
00:09:34.982 11:53:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 4098151
00:09:34.982 11:53:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 4098151
00:09:35.241 
00:09:35.241 real 0m2.004s
00:09:35.241 user 0m2.144s
00:09:35.241 sys 0m0.659s
00:09:35.241 11:53:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:35.241 11:53:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:35.241 ************************************
00:09:35.241 END TEST locking_app_on_locked_coremask
00:09:35.241 ************************************
00:09:35.241 11:53:09 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:09:35.241 11:53:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:35.241 11:53:09 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:35.241 11:53:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:35.241 ************************************
00:09:35.241 START TEST locking_overlapped_coremask
00:09:35.241 ************************************
00:09:35.241 11:53:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:09:35.241 11:53:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=4098539
00:09:35.241 11:53:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 4098539 /var/tmp/spdk.sock
00:09:35.241 11:53:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:09:35.241 11:53:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 4098539 ']'
00:09:35.241 11:53:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:35.241 11:53:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:35.241 11:53:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:35.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:35.241 11:53:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:35.241 11:53:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:35.241 [2024-12-05 11:53:09.370575] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization...
00:09:35.241 [2024-12-05 11:53:09.370620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4098539 ]
00:09:35.500 [2024-12-05 11:53:09.445023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:09:35.500 [2024-12-05 11:53:09.489453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:35.500 [2024-12-05 11:53:09.489563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:35.500 [2024-12-05 11:53:09.489563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:09:36.066 11:53:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:36.066 11:53:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:09:36.066 11:53:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=4098658
00:09:36.066 11:53:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 4098658 /var/tmp/spdk2.sock
00:09:36.066 11:53:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:09:36.066 11:53:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:09:36.066 11:53:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 4098658 /var/tmp/spdk2.sock
00:09:36.066 11:53:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:09:36.066 11:53:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:36.066 11:53:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:09:36.066 11:53:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:36.066 11:53:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 4098658 /var/tmp/spdk2.sock
00:09:36.066 11:53:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 4098658 ']'
00:09:36.066 11:53:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:36.066 11:53:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:36.066 11:53:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:09:36.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:36.066 11:53:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:36.066 11:53:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:36.066 [2024-12-05 11:53:10.253716] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization...
00:09:36.066 [2024-12-05 11:53:10.253764] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4098658 ]
00:09:36.325 [2024-12-05 11:53:10.344899] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4098539 has claimed it.
00:09:36.325 [2024-12-05 11:53:10.344939] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:09:36.894 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (4098658) - No such process
00:09:36.894 ERROR: process (pid: 4098658) is no longer running
00:09:36.894 11:53:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:36.894 11:53:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
00:09:36.894 11:53:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
00:09:36.894 11:53:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:36.894 11:53:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:36.894 11:53:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:36.894 11:53:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:09:36.894 11:53:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:09:36.894 11:53:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:09:36.894 11:53:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:09:36.894 11:53:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 4098539
00:09:36.894 11:53:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 4098539 ']'
00:09:36.894 11:53:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 4098539
00:09:36.894 11:53:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
00:09:36.894 11:53:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:36.894 11:53:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4098539
00:09:36.894 11:53:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:36.894 11:53:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:36.894 11:53:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4098539'
00:09:36.894 killing process with pid 4098539
00:09:36.894 11:53:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 4098539
00:09:36.894 11:53:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 4098539
00:09:37.153 
00:09:37.153 real 0m1.936s
00:09:37.153 user 0m5.584s
00:09:37.153 sys 0m0.432s
00:09:37.153 11:53:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:37.153 11:53:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:37.153 ************************************
00:09:37.153 END TEST locking_overlapped_coremask
00:09:37.153 ************************************
00:09:37.153 11:53:11 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:09:37.154 11:53:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:37.154 11:53:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:37.154 11:53:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:37.154 ************************************
00:09:37.154 START TEST locking_overlapped_coremask_via_rpc
00:09:37.154 ************************************
00:09:37.154 11:53:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
00:09:37.154 11:53:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=4098914
00:09:37.154 11:53:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:09:37.154 11:53:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 4098914 /var/tmp/spdk.sock
00:09:37.154 11:53:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 4098914 ']'
00:09:37.154 11:53:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:37.154 11:53:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:37.154 11:53:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:37.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.154 11:53:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:37.154 11:53:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.413 [2024-12-05 11:53:11.363334] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:09:37.413 [2024-12-05 11:53:11.363378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4098914 ] 00:09:37.413 [2024-12-05 11:53:11.436778] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:37.413 [2024-12-05 11:53:11.436803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:37.413 [2024-12-05 11:53:11.481059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:37.413 [2024-12-05 11:53:11.481167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.413 [2024-12-05 11:53:11.481168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:37.672 11:53:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:37.672 11:53:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:37.672 11:53:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=4098921 00:09:37.672 11:53:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 4098921 /var/tmp/spdk2.sock 00:09:37.672 11:53:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:09:37.672 11:53:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 4098921 ']' 00:09:37.672 11:53:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:37.672 11:53:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:37.672 11:53:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:37.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:37.673 11:53:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:37.673 11:53:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.673 [2024-12-05 11:53:11.743503] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:09:37.673 [2024-12-05 11:53:11.743552] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4098921 ] 00:09:37.673 [2024-12-05 11:53:11.832648] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:37.673 [2024-12-05 11:53:11.832677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:37.931 [2024-12-05 11:53:11.927775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:37.931 [2024-12-05 11:53:11.931417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:37.931 [2024-12-05 11:53:11.931418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:38.498 11:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:38.498 11:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:38.498 11:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:38.498 11:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.498 11:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:38.498 11:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.498 11:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:38.498 11:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:09:38.498 11:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:38.498 11:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:38.498 11:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:38.498 11:53:12 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:38.498 11:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:38.498 11:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:38.498 11:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.498 11:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:38.498 [2024-12-05 11:53:12.592439] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4098914 has claimed it. 00:09:38.498 request: 00:09:38.498 { 00:09:38.498 "method": "framework_enable_cpumask_locks", 00:09:38.498 "req_id": 1 00:09:38.498 } 00:09:38.498 Got JSON-RPC error response 00:09:38.498 response: 00:09:38.498 { 00:09:38.498 "code": -32603, 00:09:38.498 "message": "Failed to claim CPU core: 2" 00:09:38.498 } 00:09:38.498 11:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:38.498 11:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:09:38.498 11:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:38.498 11:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:38.498 11:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:38.498 11:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 4098914 /var/tmp/spdk.sock 00:09:38.498 11:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 4098914 ']' 00:09:38.498 11:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.498 11:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:38.498 11:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.498 11:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:38.498 11:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:38.757 11:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:38.757 11:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:38.757 11:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 4098921 /var/tmp/spdk2.sock 00:09:38.757 11:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 4098921 ']' 00:09:38.757 11:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:38.757 11:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:38.757 11:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:38.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:09:38.757 11:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:38.757 11:53:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.016 11:53:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:39.016 11:53:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:39.016 11:53:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:39.016 11:53:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:39.016 11:53:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:39.016 11:53:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:39.016 00:09:39.016 real 0m1.684s 00:09:39.016 user 0m0.826s 00:09:39.016 sys 0m0.126s 00:09:39.016 11:53:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.016 11:53:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.016 ************************************ 00:09:39.016 END TEST locking_overlapped_coremask_via_rpc 00:09:39.016 ************************************ 00:09:39.016 11:53:13 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:09:39.016 11:53:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 4098914 ]] 00:09:39.016 11:53:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 4098914 00:09:39.016 11:53:13 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 4098914 ']' 00:09:39.016 11:53:13 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 4098914 00:09:39.016 11:53:13 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:39.016 11:53:13 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:39.016 11:53:13 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4098914 00:09:39.016 11:53:13 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:39.016 11:53:13 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:39.016 11:53:13 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4098914' 00:09:39.016 killing process with pid 4098914 00:09:39.016 11:53:13 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 4098914 00:09:39.016 11:53:13 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 4098914 00:09:39.275 11:53:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 4098921 ]] 00:09:39.275 11:53:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 4098921 00:09:39.275 11:53:13 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 4098921 ']' 00:09:39.275 11:53:13 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 4098921 00:09:39.275 11:53:13 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:39.275 11:53:13 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:39.275 11:53:13 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4098921 00:09:39.275 11:53:13 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:09:39.275 11:53:13 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:09:39.275 11:53:13 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
4098921' 00:09:39.275 killing process with pid 4098921 00:09:39.275 11:53:13 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 4098921 00:09:39.275 11:53:13 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 4098921 00:09:39.843 11:53:13 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:39.843 11:53:13 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:09:39.843 11:53:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 4098914 ]] 00:09:39.843 11:53:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 4098914 00:09:39.843 11:53:13 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 4098914 ']' 00:09:39.843 11:53:13 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 4098914 00:09:39.843 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (4098914) - No such process 00:09:39.843 11:53:13 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 4098914 is not found' 00:09:39.843 Process with pid 4098914 is not found 00:09:39.843 11:53:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 4098921 ]] 00:09:39.843 11:53:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 4098921 00:09:39.843 11:53:13 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 4098921 ']' 00:09:39.843 11:53:13 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 4098921 00:09:39.843 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (4098921) - No such process 00:09:39.843 11:53:13 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 4098921 is not found' 00:09:39.843 Process with pid 4098921 is not found 00:09:39.843 11:53:13 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:39.843 00:09:39.843 real 0m15.248s 00:09:39.843 user 0m26.775s 00:09:39.844 sys 0m5.027s 00:09:39.844 11:53:13 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.844 
11:53:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:39.844 ************************************ 00:09:39.844 END TEST cpu_locks 00:09:39.844 ************************************ 00:09:39.844 00:09:39.844 real 0m40.222s 00:09:39.844 user 1m16.997s 00:09:39.844 sys 0m8.560s 00:09:39.844 11:53:13 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.844 11:53:13 event -- common/autotest_common.sh@10 -- # set +x 00:09:39.844 ************************************ 00:09:39.844 END TEST event 00:09:39.844 ************************************ 00:09:39.844 11:53:13 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:09:39.844 11:53:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:39.844 11:53:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.844 11:53:13 -- common/autotest_common.sh@10 -- # set +x 00:09:39.844 ************************************ 00:09:39.844 START TEST thread 00:09:39.844 ************************************ 00:09:39.844 11:53:13 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:09:39.844 * Looking for test storage... 
00:09:39.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:09:39.844 11:53:13 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:39.844 11:53:13 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:09:39.844 11:53:13 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:39.844 11:53:14 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:39.844 11:53:14 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:39.844 11:53:14 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:39.844 11:53:14 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:39.844 11:53:14 thread -- scripts/common.sh@336 -- # IFS=.-: 00:09:39.844 11:53:14 thread -- scripts/common.sh@336 -- # read -ra ver1 00:09:39.844 11:53:14 thread -- scripts/common.sh@337 -- # IFS=.-: 00:09:39.844 11:53:14 thread -- scripts/common.sh@337 -- # read -ra ver2 00:09:39.844 11:53:14 thread -- scripts/common.sh@338 -- # local 'op=<' 00:09:39.844 11:53:14 thread -- scripts/common.sh@340 -- # ver1_l=2 00:09:39.844 11:53:14 thread -- scripts/common.sh@341 -- # ver2_l=1 00:09:39.844 11:53:14 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:39.844 11:53:14 thread -- scripts/common.sh@344 -- # case "$op" in 00:09:39.844 11:53:14 thread -- scripts/common.sh@345 -- # : 1 00:09:39.844 11:53:14 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:39.844 11:53:14 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:39.844 11:53:14 thread -- scripts/common.sh@365 -- # decimal 1 00:09:39.844 11:53:14 thread -- scripts/common.sh@353 -- # local d=1 00:09:39.844 11:53:14 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:39.844 11:53:14 thread -- scripts/common.sh@355 -- # echo 1 00:09:40.104 11:53:14 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:09:40.104 11:53:14 thread -- scripts/common.sh@366 -- # decimal 2 00:09:40.104 11:53:14 thread -- scripts/common.sh@353 -- # local d=2 00:09:40.104 11:53:14 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:40.104 11:53:14 thread -- scripts/common.sh@355 -- # echo 2 00:09:40.104 11:53:14 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:09:40.104 11:53:14 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:40.104 11:53:14 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:40.104 11:53:14 thread -- scripts/common.sh@368 -- # return 0 00:09:40.104 11:53:14 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:40.104 11:53:14 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:40.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.104 --rc genhtml_branch_coverage=1 00:09:40.104 --rc genhtml_function_coverage=1 00:09:40.104 --rc genhtml_legend=1 00:09:40.104 --rc geninfo_all_blocks=1 00:09:40.104 --rc geninfo_unexecuted_blocks=1 00:09:40.104 00:09:40.104 ' 00:09:40.104 11:53:14 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:40.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.104 --rc genhtml_branch_coverage=1 00:09:40.104 --rc genhtml_function_coverage=1 00:09:40.104 --rc genhtml_legend=1 00:09:40.104 --rc geninfo_all_blocks=1 00:09:40.104 --rc geninfo_unexecuted_blocks=1 00:09:40.104 00:09:40.104 ' 00:09:40.104 11:53:14 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:40.104 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.104 --rc genhtml_branch_coverage=1 00:09:40.104 --rc genhtml_function_coverage=1 00:09:40.104 --rc genhtml_legend=1 00:09:40.104 --rc geninfo_all_blocks=1 00:09:40.104 --rc geninfo_unexecuted_blocks=1 00:09:40.104 00:09:40.104 ' 00:09:40.104 11:53:14 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:40.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.104 --rc genhtml_branch_coverage=1 00:09:40.104 --rc genhtml_function_coverage=1 00:09:40.104 --rc genhtml_legend=1 00:09:40.104 --rc geninfo_all_blocks=1 00:09:40.104 --rc geninfo_unexecuted_blocks=1 00:09:40.104 00:09:40.104 ' 00:09:40.104 11:53:14 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:40.104 11:53:14 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:09:40.104 11:53:14 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.104 11:53:14 thread -- common/autotest_common.sh@10 -- # set +x 00:09:40.104 ************************************ 00:09:40.104 START TEST thread_poller_perf 00:09:40.104 ************************************ 00:09:40.104 11:53:14 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:40.104 [2024-12-05 11:53:14.106768] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:09:40.104 [2024-12-05 11:53:14.106851] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4099487 ] 00:09:40.104 [2024-12-05 11:53:14.188280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.104 [2024-12-05 11:53:14.227511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.104 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:09:41.507 [2024-12-05T10:53:15.703Z] ====================================== 00:09:41.507 [2024-12-05T10:53:15.703Z] busy:2106839132 (cyc) 00:09:41.507 [2024-12-05T10:53:15.703Z] total_run_count: 414000 00:09:41.507 [2024-12-05T10:53:15.703Z] tsc_hz: 2100000000 (cyc) 00:09:41.507 [2024-12-05T10:53:15.703Z] ====================================== 00:09:41.507 [2024-12-05T10:53:15.703Z] poller_cost: 5088 (cyc), 2422 (nsec) 00:09:41.507 00:09:41.507 real 0m1.187s 00:09:41.507 user 0m1.107s 00:09:41.507 sys 0m0.077s 00:09:41.507 11:53:15 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.507 11:53:15 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:41.507 ************************************ 00:09:41.507 END TEST thread_poller_perf 00:09:41.507 ************************************ 00:09:41.507 11:53:15 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:41.507 11:53:15 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:09:41.507 11:53:15 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:41.507 11:53:15 thread -- common/autotest_common.sh@10 -- # set +x 00:09:41.507 ************************************ 00:09:41.507 START TEST thread_poller_perf 00:09:41.507 
************************************ 00:09:41.507 11:53:15 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:41.507 [2024-12-05 11:53:15.366684] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:09:41.507 [2024-12-05 11:53:15.366764] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4099736 ] 00:09:41.507 [2024-12-05 11:53:15.445016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.507 [2024-12-05 11:53:15.485592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.507 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:09:42.444 [2024-12-05T10:53:16.640Z] ====================================== 00:09:42.444 [2024-12-05T10:53:16.640Z] busy:2101365972 (cyc) 00:09:42.444 [2024-12-05T10:53:16.640Z] total_run_count: 5533000 00:09:42.444 [2024-12-05T10:53:16.640Z] tsc_hz: 2100000000 (cyc) 00:09:42.444 [2024-12-05T10:53:16.640Z] ====================================== 00:09:42.444 [2024-12-05T10:53:16.640Z] poller_cost: 379 (cyc), 180 (nsec) 00:09:42.444 00:09:42.444 real 0m1.179s 00:09:42.444 user 0m1.095s 00:09:42.444 sys 0m0.079s 00:09:42.444 11:53:16 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.444 11:53:16 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:42.444 ************************************ 00:09:42.444 END TEST thread_poller_perf 00:09:42.444 ************************************ 00:09:42.444 11:53:16 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:09:42.444 00:09:42.444 real 0m2.683s 00:09:42.444 user 0m2.357s 00:09:42.444 sys 0m0.339s 00:09:42.444 11:53:16 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.444 11:53:16 thread -- common/autotest_common.sh@10 -- # set +x 00:09:42.445 ************************************ 00:09:42.445 END TEST thread 00:09:42.445 ************************************ 00:09:42.445 11:53:16 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:09:42.445 11:53:16 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:09:42.445 11:53:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:42.445 11:53:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.445 11:53:16 -- common/autotest_common.sh@10 -- # set +x 00:09:42.445 ************************************ 00:09:42.445 START TEST app_cmdline 00:09:42.445 ************************************ 00:09:42.445 11:53:16 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:09:42.704 * Looking for test storage... 00:09:42.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:42.704 11:53:16 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:42.704 11:53:16 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:09:42.704 11:53:16 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:42.704 11:53:16 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:42.704 11:53:16 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:42.704 11:53:16 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:42.704 11:53:16 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:42.704 11:53:16 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:09:42.704 11:53:16 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:09:42.704 11:53:16 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:09:42.704 11:53:16 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:09:42.704 11:53:16 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:09:42.704 11:53:16 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:09:42.704 11:53:16 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:09:42.704 11:53:16 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:42.704 11:53:16 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:09:42.704 11:53:16 app_cmdline -- scripts/common.sh@345 -- # : 1 00:09:42.704 11:53:16 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:42.704 11:53:16 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:42.704 11:53:16 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:09:42.704 11:53:16 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:09:42.704 11:53:16 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:42.704 11:53:16 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:09:42.704 11:53:16 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:09:42.704 11:53:16 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:09:42.704 11:53:16 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:09:42.704 11:53:16 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:42.704 11:53:16 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:09:42.704 11:53:16 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:09:42.704 11:53:16 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:42.704 11:53:16 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:42.704 11:53:16 app_cmdline -- scripts/common.sh@368 -- # return 0 00:09:42.704 11:53:16 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:42.704 11:53:16 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:42.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.704 --rc genhtml_branch_coverage=1 
00:09:42.704 --rc genhtml_function_coverage=1 00:09:42.704 --rc genhtml_legend=1 00:09:42.704 --rc geninfo_all_blocks=1 00:09:42.704 --rc geninfo_unexecuted_blocks=1 00:09:42.704 00:09:42.704 ' 00:09:42.704 11:53:16 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:42.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.704 --rc genhtml_branch_coverage=1 00:09:42.704 --rc genhtml_function_coverage=1 00:09:42.704 --rc genhtml_legend=1 00:09:42.704 --rc geninfo_all_blocks=1 00:09:42.704 --rc geninfo_unexecuted_blocks=1 00:09:42.704 00:09:42.704 ' 00:09:42.704 11:53:16 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:42.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.704 --rc genhtml_branch_coverage=1 00:09:42.704 --rc genhtml_function_coverage=1 00:09:42.704 --rc genhtml_legend=1 00:09:42.704 --rc geninfo_all_blocks=1 00:09:42.704 --rc geninfo_unexecuted_blocks=1 00:09:42.704 00:09:42.704 ' 00:09:42.704 11:53:16 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:42.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.704 --rc genhtml_branch_coverage=1 00:09:42.704 --rc genhtml_function_coverage=1 00:09:42.704 --rc genhtml_legend=1 00:09:42.704 --rc geninfo_all_blocks=1 00:09:42.704 --rc geninfo_unexecuted_blocks=1 00:09:42.704 00:09:42.704 ' 00:09:42.704 11:53:16 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:42.704 11:53:16 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=4100029 00:09:42.704 11:53:16 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 4100029 00:09:42.704 11:53:16 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:42.704 11:53:16 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 4100029 ']' 00:09:42.704 11:53:16 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:09:42.704 11:53:16 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:42.704 11:53:16 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.704 11:53:16 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:42.704 11:53:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:42.704 [2024-12-05 11:53:16.858676] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:09:42.704 [2024-12-05 11:53:16.858722] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4100029 ] 00:09:42.963 [2024-12-05 11:53:16.930242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.963 [2024-12-05 11:53:16.972910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.222 11:53:17 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.222 11:53:17 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:09:43.222 11:53:17 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:09:43.222 { 00:09:43.222 "version": "SPDK v25.01-pre git sha1 b7fa4c06b", 00:09:43.222 "fields": { 00:09:43.222 "major": 25, 00:09:43.222 "minor": 1, 00:09:43.222 "patch": 0, 00:09:43.222 "suffix": "-pre", 00:09:43.222 "commit": "b7fa4c06b" 00:09:43.222 } 00:09:43.222 } 00:09:43.222 11:53:17 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:43.222 11:53:17 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:43.222 11:53:17 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:09:43.222 11:53:17 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:43.222 11:53:17 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:43.222 11:53:17 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:43.222 11:53:17 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:43.222 11:53:17 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.222 11:53:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:43.222 11:53:17 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.481 11:53:17 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:43.481 11:53:17 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:43.481 11:53:17 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:43.481 11:53:17 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:09:43.481 11:53:17 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:43.481 11:53:17 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:43.481 11:53:17 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:43.481 11:53:17 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:43.481 11:53:17 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:43.481 11:53:17 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:43.481 11:53:17 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:09:43.481 11:53:17 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:43.481 11:53:17 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:43.481 11:53:17 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:43.481 request: 00:09:43.481 { 00:09:43.481 "method": "env_dpdk_get_mem_stats", 00:09:43.481 "req_id": 1 00:09:43.481 } 00:09:43.481 Got JSON-RPC error response 00:09:43.481 response: 00:09:43.481 { 00:09:43.481 "code": -32601, 00:09:43.481 "message": "Method not found" 00:09:43.481 } 00:09:43.481 11:53:17 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:09:43.481 11:53:17 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:43.481 11:53:17 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:43.481 11:53:17 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:43.481 11:53:17 app_cmdline -- app/cmdline.sh@1 -- # killprocess 4100029 00:09:43.481 11:53:17 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 4100029 ']' 00:09:43.481 11:53:17 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 4100029 00:09:43.481 11:53:17 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:09:43.481 11:53:17 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:43.481 11:53:17 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4100029 00:09:43.741 11:53:17 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:43.741 11:53:17 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:43.741 11:53:17 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4100029' 00:09:43.741 killing process with pid 4100029 00:09:43.741 
11:53:17 app_cmdline -- common/autotest_common.sh@973 -- # kill 4100029 00:09:43.741 11:53:17 app_cmdline -- common/autotest_common.sh@978 -- # wait 4100029 00:09:44.000 00:09:44.000 real 0m1.353s 00:09:44.000 user 0m1.571s 00:09:44.000 sys 0m0.455s 00:09:44.000 11:53:17 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.000 11:53:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:44.000 ************************************ 00:09:44.000 END TEST app_cmdline 00:09:44.000 ************************************ 00:09:44.000 11:53:18 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:09:44.000 11:53:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:44.000 11:53:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.000 11:53:18 -- common/autotest_common.sh@10 -- # set +x 00:09:44.000 ************************************ 00:09:44.000 START TEST version 00:09:44.000 ************************************ 00:09:44.000 11:53:18 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:09:44.000 * Looking for test storage... 
00:09:44.000 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:44.000 11:53:18 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:44.000 11:53:18 version -- common/autotest_common.sh@1711 -- # lcov --version 00:09:44.000 11:53:18 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:44.261 11:53:18 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:44.261 11:53:18 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:44.261 11:53:18 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:44.261 11:53:18 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:44.261 11:53:18 version -- scripts/common.sh@336 -- # IFS=.-: 00:09:44.261 11:53:18 version -- scripts/common.sh@336 -- # read -ra ver1 00:09:44.261 11:53:18 version -- scripts/common.sh@337 -- # IFS=.-: 00:09:44.261 11:53:18 version -- scripts/common.sh@337 -- # read -ra ver2 00:09:44.261 11:53:18 version -- scripts/common.sh@338 -- # local 'op=<' 00:09:44.261 11:53:18 version -- scripts/common.sh@340 -- # ver1_l=2 00:09:44.261 11:53:18 version -- scripts/common.sh@341 -- # ver2_l=1 00:09:44.261 11:53:18 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:44.261 11:53:18 version -- scripts/common.sh@344 -- # case "$op" in 00:09:44.261 11:53:18 version -- scripts/common.sh@345 -- # : 1 00:09:44.261 11:53:18 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:44.261 11:53:18 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:44.261 11:53:18 version -- scripts/common.sh@365 -- # decimal 1 00:09:44.261 11:53:18 version -- scripts/common.sh@353 -- # local d=1 00:09:44.261 11:53:18 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:44.261 11:53:18 version -- scripts/common.sh@355 -- # echo 1 00:09:44.261 11:53:18 version -- scripts/common.sh@365 -- # ver1[v]=1 00:09:44.261 11:53:18 version -- scripts/common.sh@366 -- # decimal 2 00:09:44.261 11:53:18 version -- scripts/common.sh@353 -- # local d=2 00:09:44.261 11:53:18 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:44.261 11:53:18 version -- scripts/common.sh@355 -- # echo 2 00:09:44.261 11:53:18 version -- scripts/common.sh@366 -- # ver2[v]=2 00:09:44.261 11:53:18 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:44.261 11:53:18 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:44.261 11:53:18 version -- scripts/common.sh@368 -- # return 0 00:09:44.261 11:53:18 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:44.261 11:53:18 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:44.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.261 --rc genhtml_branch_coverage=1 00:09:44.261 --rc genhtml_function_coverage=1 00:09:44.261 --rc genhtml_legend=1 00:09:44.261 --rc geninfo_all_blocks=1 00:09:44.261 --rc geninfo_unexecuted_blocks=1 00:09:44.261 00:09:44.261 ' 00:09:44.261 11:53:18 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:44.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.261 --rc genhtml_branch_coverage=1 00:09:44.261 --rc genhtml_function_coverage=1 00:09:44.261 --rc genhtml_legend=1 00:09:44.261 --rc geninfo_all_blocks=1 00:09:44.261 --rc geninfo_unexecuted_blocks=1 00:09:44.261 00:09:44.261 ' 00:09:44.261 11:53:18 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:44.261 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.261 --rc genhtml_branch_coverage=1 00:09:44.261 --rc genhtml_function_coverage=1 00:09:44.261 --rc genhtml_legend=1 00:09:44.261 --rc geninfo_all_blocks=1 00:09:44.261 --rc geninfo_unexecuted_blocks=1 00:09:44.261 00:09:44.261 ' 00:09:44.261 11:53:18 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:44.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.261 --rc genhtml_branch_coverage=1 00:09:44.261 --rc genhtml_function_coverage=1 00:09:44.261 --rc genhtml_legend=1 00:09:44.261 --rc geninfo_all_blocks=1 00:09:44.261 --rc geninfo_unexecuted_blocks=1 00:09:44.261 00:09:44.261 ' 00:09:44.261 11:53:18 version -- app/version.sh@17 -- # get_header_version major 00:09:44.261 11:53:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:44.261 11:53:18 version -- app/version.sh@14 -- # cut -f2 00:09:44.261 11:53:18 version -- app/version.sh@14 -- # tr -d '"' 00:09:44.261 11:53:18 version -- app/version.sh@17 -- # major=25 00:09:44.261 11:53:18 version -- app/version.sh@18 -- # get_header_version minor 00:09:44.261 11:53:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:44.261 11:53:18 version -- app/version.sh@14 -- # cut -f2 00:09:44.261 11:53:18 version -- app/version.sh@14 -- # tr -d '"' 00:09:44.261 11:53:18 version -- app/version.sh@18 -- # minor=1 00:09:44.261 11:53:18 version -- app/version.sh@19 -- # get_header_version patch 00:09:44.261 11:53:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:44.261 11:53:18 version -- app/version.sh@14 -- # cut -f2 00:09:44.261 11:53:18 version -- app/version.sh@14 -- # tr -d '"' 00:09:44.261 
11:53:18 version -- app/version.sh@19 -- # patch=0 00:09:44.261 11:53:18 version -- app/version.sh@20 -- # get_header_version suffix 00:09:44.261 11:53:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:44.261 11:53:18 version -- app/version.sh@14 -- # cut -f2 00:09:44.261 11:53:18 version -- app/version.sh@14 -- # tr -d '"' 00:09:44.261 11:53:18 version -- app/version.sh@20 -- # suffix=-pre 00:09:44.261 11:53:18 version -- app/version.sh@22 -- # version=25.1 00:09:44.261 11:53:18 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:44.261 11:53:18 version -- app/version.sh@28 -- # version=25.1rc0 00:09:44.261 11:53:18 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:44.261 11:53:18 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:44.261 11:53:18 version -- app/version.sh@30 -- # py_version=25.1rc0 00:09:44.261 11:53:18 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:09:44.261 00:09:44.261 real 0m0.239s 00:09:44.261 user 0m0.145s 00:09:44.261 sys 0m0.137s 00:09:44.261 11:53:18 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.261 11:53:18 version -- common/autotest_common.sh@10 -- # set +x 00:09:44.261 ************************************ 00:09:44.261 END TEST version 00:09:44.261 ************************************ 00:09:44.261 11:53:18 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:09:44.261 11:53:18 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:09:44.261 11:53:18 -- spdk/autotest.sh@194 -- # uname -s 00:09:44.261 11:53:18 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:09:44.261 11:53:18 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:44.261 11:53:18 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:44.261 11:53:18 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:09:44.261 11:53:18 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:09:44.261 11:53:18 -- spdk/autotest.sh@260 -- # timing_exit lib 00:09:44.261 11:53:18 -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:44.261 11:53:18 -- common/autotest_common.sh@10 -- # set +x 00:09:44.261 11:53:18 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:09:44.262 11:53:18 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:09:44.262 11:53:18 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:09:44.262 11:53:18 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:09:44.262 11:53:18 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:09:44.262 11:53:18 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:09:44.262 11:53:18 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:44.262 11:53:18 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:44.262 11:53:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.262 11:53:18 -- common/autotest_common.sh@10 -- # set +x 00:09:44.262 ************************************ 00:09:44.262 START TEST nvmf_tcp 00:09:44.262 ************************************ 00:09:44.262 11:53:18 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:44.522 * Looking for test storage... 
00:09:44.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:44.522 11:53:18 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:44.522 11:53:18 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:09:44.522 11:53:18 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:44.522 11:53:18 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:44.522 11:53:18 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:44.522 11:53:18 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:44.522 11:53:18 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:44.522 11:53:18 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:44.522 11:53:18 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:44.522 11:53:18 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:44.522 11:53:18 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:44.522 11:53:18 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:44.522 11:53:18 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:44.522 11:53:18 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:44.522 11:53:18 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:44.522 11:53:18 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:44.522 11:53:18 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:09:44.522 11:53:18 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:44.522 11:53:18 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:44.522 11:53:18 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:44.522 11:53:18 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:09:44.522 11:53:18 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:44.522 11:53:18 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:09:44.522 11:53:18 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:44.522 11:53:18 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:44.522 11:53:18 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:09:44.522 11:53:18 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:44.522 11:53:18 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:09:44.522 11:53:18 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:44.522 11:53:18 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:44.522 11:53:18 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:44.522 11:53:18 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:09:44.522 11:53:18 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:44.522 11:53:18 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:44.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.522 --rc genhtml_branch_coverage=1 00:09:44.522 --rc genhtml_function_coverage=1 00:09:44.522 --rc genhtml_legend=1 00:09:44.522 --rc geninfo_all_blocks=1 00:09:44.522 --rc geninfo_unexecuted_blocks=1 00:09:44.522 00:09:44.522 ' 00:09:44.522 11:53:18 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:44.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.522 --rc genhtml_branch_coverage=1 00:09:44.522 --rc genhtml_function_coverage=1 00:09:44.522 --rc genhtml_legend=1 00:09:44.522 --rc geninfo_all_blocks=1 00:09:44.522 --rc geninfo_unexecuted_blocks=1 00:09:44.522 00:09:44.522 ' 00:09:44.522 11:53:18 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:09:44.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.522 --rc genhtml_branch_coverage=1 00:09:44.522 --rc genhtml_function_coverage=1 00:09:44.522 --rc genhtml_legend=1 00:09:44.522 --rc geninfo_all_blocks=1 00:09:44.522 --rc geninfo_unexecuted_blocks=1 00:09:44.522 00:09:44.522 ' 00:09:44.522 11:53:18 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:44.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.522 --rc genhtml_branch_coverage=1 00:09:44.522 --rc genhtml_function_coverage=1 00:09:44.522 --rc genhtml_legend=1 00:09:44.522 --rc geninfo_all_blocks=1 00:09:44.522 --rc geninfo_unexecuted_blocks=1 00:09:44.522 00:09:44.522 ' 00:09:44.522 11:53:18 nvmf_tcp -- nvmf/nvmf.sh@10 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:44.522 11:53:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:44.522 11:53:18 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.522 11:53:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:44.522 ************************************ 00:09:44.522 START TEST nvmf_target_core 00:09:44.522 ************************************ 00:09:44.522 11:53:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:44.522 * Looking for test storage... 
00:09:44.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:44.522 11:53:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:44.522 11:53:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:09:44.522 11:53:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:44.782 11:53:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:44.782 11:53:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:44.782 11:53:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:44.782 11:53:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:44.782 11:53:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:09:44.782 11:53:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:09:44.782 11:53:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:09:44.782 11:53:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:09:44.782 11:53:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:09:44.782 11:53:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:09:44.782 11:53:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:09:44.782 11:53:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:44.782 11:53:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:09:44.782 11:53:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:09:44.782 11:53:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:44.782 11:53:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:44.782 11:53:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:09:44.782 11:53:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:09:44.782 11:53:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:44.782 11:53:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:09:44.782 11:53:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:09:44.782 11:53:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:09:44.782 11:53:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:09:44.782 11:53:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:44.782 11:53:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:09:44.782 11:53:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:09:44.782 11:53:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:44.782 11:53:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:44.782 11:53:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:09:44.782 11:53:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:44.782 11:53:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:44.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.782 --rc genhtml_branch_coverage=1 00:09:44.782 --rc genhtml_function_coverage=1 00:09:44.782 --rc genhtml_legend=1 00:09:44.782 --rc geninfo_all_blocks=1 00:09:44.782 --rc geninfo_unexecuted_blocks=1 00:09:44.782 00:09:44.782 ' 00:09:44.782 11:53:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:44.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.782 --rc genhtml_branch_coverage=1 
00:09:44.782 --rc genhtml_function_coverage=1 00:09:44.782 --rc genhtml_legend=1 00:09:44.782 --rc geninfo_all_blocks=1 00:09:44.782 --rc geninfo_unexecuted_blocks=1 00:09:44.782 00:09:44.782 ' 00:09:44.782 11:53:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:44.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.782 --rc genhtml_branch_coverage=1 00:09:44.782 --rc genhtml_function_coverage=1 00:09:44.782 --rc genhtml_legend=1 00:09:44.782 --rc geninfo_all_blocks=1 00:09:44.782 --rc geninfo_unexecuted_blocks=1 00:09:44.782 00:09:44.782 ' 00:09:44.782 11:53:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:44.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.782 --rc genhtml_branch_coverage=1 00:09:44.782 --rc genhtml_function_coverage=1 00:09:44.782 --rc genhtml_legend=1 00:09:44.782 --rc geninfo_all_blocks=1 00:09:44.782 --rc geninfo_unexecuted_blocks=1 00:09:44.782 00:09:44.782 ' 00:09:44.782 11:53:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:44.782 11:53:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:09:44.782 11:53:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:44.782 11:53:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:44.782 11:53:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:44.782 11:53:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:44.782 11:53:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:44.782 11:53:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:09:44.783 11:53:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:44.783 11:53:18 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@15 -- # nvme gen-hostnqn 00:09:44.783 11:53:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:44.783 11:53:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:44.783 11:53:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:44.783 11:53:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:09:44.783 11:53:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:09:44.783 11:53:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:44.783 11:53:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:44.783 11:53:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:09:44.783 11:53:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:44.783 11:53:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:44.783 11:53:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:44.783 11:53:18 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.783 11:53:18 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.783 11:53:18 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.783 11:53:18 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:09:44.783 11:53:18 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.783 11:53:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:09:44.783 11:53:18 nvmf_tcp.nvmf_target_core -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:09:44.783 11:53:18 nvmf_tcp.nvmf_target_core -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 
00:09:44.783 11:53:18 nvmf_tcp.nvmf_target_core -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:09:44.783 11:53:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@50 -- # : 0 00:09:44.783 11:53:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:09:44.783 11:53:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:09:44.783 11:53:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:09:44.783 11:53:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:44.783 11:53:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:44.783 11:53:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:09:44.783 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:09:44.783 11:53:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:09:44.783 11:53:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:09:44.783 11:53:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@54 -- # have_pci_nics=0 00:09:44.783 11:53:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:44.783 11:53:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@13 -- # TEST_ARGS=("$@") 00:09:44.783 11:53:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@15 -- # [[ 0 -eq 0 ]] 00:09:44.783 11:53:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:44.783 11:53:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:44.783 11:53:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.783 11:53:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:44.783 
************************************ 00:09:44.783 START TEST nvmf_abort 00:09:44.783 ************************************ 00:09:44.783 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:44.783 * Looking for test storage... 00:09:44.783 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:44.783 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:44.783 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:09:44.783 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:45.055 
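Editor's note: the trace above shows `scripts/common.sh` running `cmp_versions 1.15 '<' 2` — splitting each version on `.-:` into an array and comparing fields numerically, left to right. A hedged, self-contained sketch of that style of dotted-version comparison (`ver_lt` is an illustrative name, not SPDK's helper):

```shell
# ver_lt A B: succeed (status 0) if version A sorts strictly before B,
# comparing dot-separated fields numerically; missing fields count as 0.
ver_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1  # equal versions are not less-than
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

This mirrors the decision the log records: lcov 1.15 is below 2, so the older `--rc lcov_*` option spelling is selected.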
11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:45.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.055 --rc genhtml_branch_coverage=1 00:09:45.055 --rc genhtml_function_coverage=1 00:09:45.055 --rc genhtml_legend=1 00:09:45.055 --rc geninfo_all_blocks=1 00:09:45.055 --rc geninfo_unexecuted_blocks=1 00:09:45.055 00:09:45.055 ' 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:45.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.055 --rc genhtml_branch_coverage=1 00:09:45.055 --rc genhtml_function_coverage=1 00:09:45.055 --rc genhtml_legend=1 00:09:45.055 --rc geninfo_all_blocks=1 00:09:45.055 --rc geninfo_unexecuted_blocks=1 00:09:45.055 00:09:45.055 ' 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:45.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.055 --rc genhtml_branch_coverage=1 00:09:45.055 --rc genhtml_function_coverage=1 00:09:45.055 --rc genhtml_legend=1 00:09:45.055 --rc geninfo_all_blocks=1 00:09:45.055 --rc geninfo_unexecuted_blocks=1 00:09:45.055 00:09:45.055 ' 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:45.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.055 --rc genhtml_branch_coverage=1 00:09:45.055 --rc genhtml_function_coverage=1 00:09:45.055 --rc genhtml_legend=1 00:09:45.055 --rc geninfo_all_blocks=1 00:09:45.055 --rc geninfo_unexecuted_blocks=1 00:09:45.055 00:09:45.055 ' 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.055 11:53:19 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@50 -- # : 0 00:09:45.055 11:53:19 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:09:45.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:09:45.055 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@54 -- # have_pci_nics=0 00:09:45.056 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:45.056 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:45.056 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:45.056 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:09:45.056 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:45.056 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # prepare_net_devs 00:09:45.056 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # local -g is_hw=no 00:09:45.056 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # remove_target_ns 00:09:45.056 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@323 -- # 
xtrace_disable_per_cmd _remove_target_ns 00:09:45.056 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:09:45.056 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_target_ns 00:09:45.056 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:09:45.056 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:09:45.056 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # xtrace_disable 00:09:45.056 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@131 -- # pci_devs=() 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@131 -- # local -a pci_devs 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@132 -- # pci_net_devs=() 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@133 -- # pci_drivers=() 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@133 -- # local -A pci_drivers 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@135 -- # net_devs=() 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@135 -- # local -ga net_devs 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@136 -- # e810=() 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@136 -- # local -ga e810 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@137 -- # x722=() 00:09:51.624 11:53:24 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@137 -- # local -ga x722 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@138 -- # mlx=() 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@138 -- # local -ga mlx 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:51.624 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:51.624 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # [[ up == up ]] 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:51.624 Found net devices under 0000:86:00.0: cvl_0_0 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # [[ up == up ]] 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:51.624 Found net devices under 0000:86:00.1: cvl_0_1 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # is_hw=yes 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@257 -- # create_target_ns 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:51.624 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@27 -- # local -gA dev_map 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@28 -- # local -g _dev 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@44 -- # ips=() 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@45 -- # 
local initiator=initiator0 target=target0 _ns= 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:09:51.625 
11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772161 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:09:51.625 10.0.0.1 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772162 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 
dev cvl_0_1' 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:09:51.625 10.0.0.2 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:09:51.625 
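Editor's note: the trace shows `val_to_ip` turning the pool values 167772161/167772162 (0x0A000001/0x0A000002) into 10.0.0.1/10.0.0.2 via `printf '%u.%u.%u.%u\n'`. A sketch of how such a helper plausibly unpacks the 32-bit value — the shift/mask arithmetic here is an assumption; only the printf format and results appear in the log:

```shell
# Unpack a 32-bit integer into dotted-quad IPv4 notation,
# most significant octet first.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
        $(( (val >>  8) & 0xff )) $((  val         & 0xff ))
}

val_to_ip 167772161   # 0x0A000001 -> 10.0.0.1
```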
11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@38 -- # ping_ips 1 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:09:51.625 11:53:24 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=initiator0 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:51.625 11:53:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:09:51.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:51.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.420 ms 00:09:51.625 00:09:51.625 --- 10.0.0.1 ping statistics --- 00:09:51.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.625 rtt min/avg/max/mdev = 0.420/0.420/0.420/0.000 ms 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev target0 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=target0 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 
-- # ip=10.0.0.2 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:09:51.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:51.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:09:51.625 00:09:51.625 --- 10.0.0.2 ping statistics --- 00:09:51.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.625 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # (( pair++ )) 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # return 0 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=initiator0 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 
00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=initiator1 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # return 1 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # dev= 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@169 -- # return 0 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev target0 00:09:51.625 11:53:25 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=target0 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/setup.sh@168 -- # get_net_dev target1 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=target1 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # return 1 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # dev= 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@169 -- # return 0 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:09:51.625 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:09:51.626 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:51.626 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:09:51.626 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:09:51.626 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:51.626 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:09:51.626 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:51.626 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:51.626 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # nvmfpid=4103730 00:09:51.626 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:51.626 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # waitforlisten 4103730 00:09:51.626 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 4103730 ']' 00:09:51.626 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.626 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:51.626 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.626 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:51.626 11:53:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:51.626 [2024-12-05 11:53:25.173260] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:09:51.626 [2024-12-05 11:53:25.173309] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:51.626 [2024-12-05 11:53:25.253535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:51.626 [2024-12-05 11:53:25.295036] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:51.626 [2024-12-05 11:53:25.295072] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:51.626 [2024-12-05 11:53:25.295080] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:51.626 [2024-12-05 11:53:25.295086] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:51.626 [2024-12-05 11:53:25.295091] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:51.626 [2024-12-05 11:53:25.296487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:51.626 [2024-12-05 11:53:25.296594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:51.626 [2024-12-05 11:53:25.296595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:51.884 11:53:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:51.884 11:53:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:09:51.884 11:53:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:09:51.884 11:53:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:51.884 11:53:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:51.884 11:53:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:51.884 11:53:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:51.884 11:53:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.884 11:53:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:51.884 [2024-12-05 11:53:26.062650] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:51.884 11:53:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:09:51.884 11:53:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:51.884 11:53:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.884 11:53:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:52.142 Malloc0 00:09:52.142 11:53:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.142 11:53:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:52.142 11:53:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.142 11:53:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:52.142 Delay0 00:09:52.142 11:53:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.142 11:53:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:52.142 11:53:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.142 11:53:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:52.142 11:53:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.142 11:53:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:52.142 11:53:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.142 11:53:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:52.142 11:53:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.142 11:53:26 
nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:52.142 11:53:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.142 11:53:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:52.142 [2024-12-05 11:53:26.136979] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:52.142 11:53:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.142 11:53:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:52.142 11:53:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.142 11:53:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:52.142 11:53:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.143 11:53:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:52.143 [2024-12-05 11:53:26.274057] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:54.678 Initializing NVMe Controllers 00:09:54.678 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:54.678 controller IO queue size 128 less than required 00:09:54.678 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:54.678 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:54.678 Initialization complete. Launching workers. 
00:09:54.678 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 38026 00:09:54.678 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 38091, failed to submit 62 00:09:54.678 success 38030, unsuccessful 61, failed 0 00:09:54.678 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:54.678 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.678 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:54.678 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.678 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:54.678 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:54.678 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # nvmfcleanup 00:09:54.678 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@99 -- # sync 00:09:54.678 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:09:54.678 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # set +e 00:09:54.678 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # for i in {1..20} 00:09:54.678 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:09:54.678 rmmod nvme_tcp 00:09:54.678 rmmod nvme_fabrics 00:09:54.678 rmmod nvme_keyring 00:09:54.678 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:09:54.678 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # set -e 00:09:54.679 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # return 0 00:09:54.679 11:53:28 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # '[' -n 4103730 ']' 00:09:54.679 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@337 -- # killprocess 4103730 00:09:54.679 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 4103730 ']' 00:09:54.679 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 4103730 00:09:54.679 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:09:54.679 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:54.679 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4103730 00:09:54.679 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:54.679 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:54.679 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4103730' 00:09:54.679 killing process with pid 4103730 00:09:54.679 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 4103730 00:09:54.679 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 4103730 00:09:54.679 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:09:54.679 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # nvmf_fini 00:09:54.679 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@264 -- # local dev 00:09:54.679 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@267 -- # remove_target_ns 00:09:54.679 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:09:54.679 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:09:54.679 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_target_ns 00:09:56.584 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@268 -- # delete_main_bridge 00:09:56.584 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:09:56.584 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@130 -- # return 0 00:09:56.584 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:09:56.584 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:09:56.584 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:09:56.584 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:09:56.584 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:09:56.584 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:09:56.584 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:09:56.584 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:09:56.584 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:09:56.584 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:09:56.584 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:09:56.584 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:09:56.584 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:09:56.584 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/setup.sh@222 -- # [[ -n '' ]] 00:09:56.584 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:09:56.584 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:09:56.584 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:09:56.584 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@41 -- # _dev=0 00:09:56.584 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@41 -- # dev_map=() 00:09:56.584 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@284 -- # iptr 00:09:56.584 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@542 -- # iptables-save 00:09:56.584 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:09:56.584 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@542 -- # iptables-restore 00:09:56.584 00:09:56.584 real 0m11.859s 00:09:56.584 user 0m13.577s 00:09:56.584 sys 0m5.488s 00:09:56.584 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.584 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:56.584 ************************************ 00:09:56.584 END TEST nvmf_abort 00:09:56.584 ************************************ 00:09:56.584 11:53:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@17 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:56.584 11:53:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:56.584 11:53:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.584 11:53:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:56.584 ************************************ 00:09:56.584 START TEST 
nvmf_ns_hotplug_stress 00:09:56.584 ************************************ 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:56.845 * Looking for test storage... 00:09:56.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:09:56.845 11:53:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:56.845 11:53:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:56.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.845 --rc genhtml_branch_coverage=1 00:09:56.845 --rc genhtml_function_coverage=1 00:09:56.845 --rc genhtml_legend=1 00:09:56.845 --rc geninfo_all_blocks=1 00:09:56.845 --rc geninfo_unexecuted_blocks=1 00:09:56.845 00:09:56.845 ' 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:56.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.845 --rc genhtml_branch_coverage=1 00:09:56.845 --rc genhtml_function_coverage=1 00:09:56.845 --rc genhtml_legend=1 00:09:56.845 --rc geninfo_all_blocks=1 00:09:56.845 --rc geninfo_unexecuted_blocks=1 00:09:56.845 00:09:56.845 ' 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:56.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.845 --rc genhtml_branch_coverage=1 00:09:56.845 --rc genhtml_function_coverage=1 00:09:56.845 --rc genhtml_legend=1 00:09:56.845 --rc geninfo_all_blocks=1 00:09:56.845 --rc geninfo_unexecuted_blocks=1 00:09:56.845 00:09:56.845 ' 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:56.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.845 --rc genhtml_branch_coverage=1 00:09:56.845 --rc genhtml_function_coverage=1 00:09:56.845 
--rc genhtml_legend=1 00:09:56.845 --rc geninfo_all_blocks=1 00:09:56.845 --rc geninfo_unexecuted_blocks=1 00:09:56.845 00:09:56.845 ' 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 
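The trace earlier in this section shows `scripts/common.sh` deciding whether the installed `lcov` is older than 2 (`lt 1.15 2` via `cmp_versions 1.15 '<' 2`, splitting each version on `.-:` and comparing component by component). A minimal sketch of that check, simplified to use `sort -V` instead of the component loop the trace walks through (the function name `lt` is taken from the trace; the implementation here is an assumed equivalent, not the script's actual code):

```shell
# Hedged re-sketch of the lt helper traced above: succeed when the first
# version string is strictly older than the second. sort -V orders version
# strings numerically, so the smaller version sorts first.
lt() {
  [ "$1" = "$2" ] && return 1          # equal versions are not "less than"
  [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

lt 1.15 2 && echo "1.15 < 2"           # matches the trace: lt 1.15 2 returns 0
```

The trace then uses that result to pick lcov 2.x-style options (`--rc lcov_branch_coverage=1` and friends) for `LCOV_OPTS`.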
00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@50 -- # : 0 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:09:56.845 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # have_pci_nics=0 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # prepare_net_devs 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # local -g is_hw=no 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # remove_target_ns 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # 
gather_supported_nvmf_pci_devs 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # xtrace_disable 00:09:56.845 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:03.414 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:03.414 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@131 -- # pci_devs=() 00:10:03.414 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@131 -- # local -a pci_devs 00:10:03.414 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@132 -- # pci_net_devs=() 00:10:03.414 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:10:03.414 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@133 -- # pci_drivers=() 00:10:03.414 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@133 -- # local -A pci_drivers 00:10:03.414 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@135 -- # net_devs=() 00:10:03.414 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@135 -- # local -ga net_devs 00:10:03.414 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@136 -- # e810=() 00:10:03.414 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@136 -- # local -ga e810 00:10:03.414 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@137 -- # x722=() 00:10:03.414 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@137 -- # local -ga x722 00:10:03.414 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@138 -- # mlx=() 00:10:03.414 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@138 -- # local -ga mlx 00:10:03.414 11:53:36 
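The `/var/jenkins/workspace/.../common.sh: line 31: [: : integer expression expected` message captured above comes from the traced `'[' '' -eq 1 ']'`: `test`'s `-eq` needs integer operands, and the variable expanded to an empty string. A small sketch of the failure and a defensive form (the variable name `FLAG` is a hypothetical stand-in for the empty variable in `nvmf/common.sh`):

```shell
# Reproduce the error seen in the log: -eq on an empty string makes test(1)
# complain "integer expression expected" and return a failure status.
FLAG=""   # hypothetical stand-in for the unset flag

if [ "$FLAG" -eq 1 ] 2>/dev/null; then
  echo "flag set"
else
  echo "flag unset or non-numeric"   # this branch runs; stderr is suppressed
fi

# Defensive form: default the empty value to 0 before the numeric comparison,
# so test(1) always sees an integer.
if [ "${FLAG:-0}" -eq 1 ]; then echo on; else echo off; fi
```

As the log shows, the harness tolerates the error and continues, but the guarded form avoids the noise entirely.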
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:03.414 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:03.414 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:03.414 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:03.415 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:03.415 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:03.415 Found net devices under 0000:86:00.0: cvl_0_0 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:10:03.415 11:53:36 
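The device-discovery trace above buckets PCI IDs into NIC family arrays (`e810`, `x722`, `mlx`) keyed by vendor:device pairs from `pci_bus_cache`, then reports `Found 0000:86:00.0 (0x8086 - 0x159b)`. A simplified case-statement sketch of that classification, using only the vendor/device IDs visible in the trace (the real script builds bash arrays from a PCI bus cache rather than calling a function like this):

```shell
# Minimal sketch (assumed shape) of the NIC-family bucketing traced above:
# Intel (0x8086) E810 devices 0x1592/0x159b, Intel X722 0x37d2, and
# Mellanox (0x15b3) devices all land in their respective arrays.
classify_nic() {
  case "$1:$2" in
    0x8086:0x1592|0x8086:0x159b) echo e810 ;;
    0x8086:0x37d2)               echo x722 ;;
    0x15b3:*)                    echo mlx ;;
    *)                           echo unknown ;;
  esac
}

classify_nic 0x8086 0x159b   # prints e810, matching "Found 0000:86:00.0 (0x8086 - 0x159b)"
```

After classification the trace resolves each PCI address to its net device via `/sys/bus/pci/devices/$pci/net/`, yielding `cvl_0_0` and `cvl_0_1`.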
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:03.415 Found net devices under 0000:86:00.1: cvl_0_1 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # is_hw=yes 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@257 -- # 
create_target_ns 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@27 -- # local -gA dev_map 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@28 -- # local -g _dev 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 
00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # ips=() 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:10:03.415 11:53:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772161 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:10:03.415 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:10:03.416 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:10:03.416 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:10:03.416 
10.0.0.1 00:10:03.416 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:10:03.416 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:10:03.416 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:03.416 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:03.416 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:10:03.416 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772162 00:10:03.416 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:10:03.416 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:10:03.416 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:10:03.416 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:10:03.416 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:10:03.416 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:10:03.416 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:10:03.416 10.0.0.2 00:10:03.416 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:10:03.416 11:53:36 
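The interface setup traced above assigns addresses by converting the integer IP pool values (`167772161`, `167772162`) to dotted-quad strings with `val_to_ip`, which the trace shows ending in `printf '%u.%u.%u.%u\n' 10 0 0 1`. A self-contained sketch of that conversion; the byte-shift arithmetic is assumed, since the trace only shows the final `printf`:

```shell
# Hedged sketch of the val_to_ip helper traced above: unpack a 32-bit
# integer into dotted-quad notation, one byte per octet, high byte first.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 0x0A000001 -> 10.0.0.1, as in the trace
val_to_ip 167772162   # 0x0A000002 -> 10.0.0.2
```

This explains the `ip_pool=0x0a000001` seen earlier: each initiator/target pair consumes two consecutive integers from the pool (`10.0.0.1` for `cvl_0_0`, `10.0.0.2` for `cvl_0_1` inside the `nvmf_ns_spdk` namespace).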
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:10:03.416 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:10:03.416 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:10:03.416 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:10:03.416 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:10:03.416 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:10:03.416 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:03.416 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:03.416 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:10:03.416 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:10:03.416 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:10:03.416 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:10:03.416 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:10:03.416 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:10:03.416 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp 
--dport 4420 -j ACCEPT' 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@38 -- # ping_ips 1 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=initiator0 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:10:03.416 11:53:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:10:03.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:03.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.371 ms 00:10:03.416 00:10:03.416 --- 10.0.0.1 ping statistics --- 00:10:03.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.416 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev target0 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=target0 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:10:03.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:03.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:10:03.416 00:10:03.416 --- 10.0.0.2 ping statistics --- 00:10:03.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.416 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # (( pair++ )) 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # return 0 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:10:03.416 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:10:03.417 11:53:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=initiator0 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # 
get_net_dev initiator1 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=initiator1 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # return 1 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev= 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@169 -- # return 0 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev target0 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=target0 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:10:03.417 11:53:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # 
get_net_dev target1 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=target1 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # return 1 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev= 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@169 -- # return 0 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@10 -- # set +x 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # nvmfpid=4107858 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # waitforlisten 4107858 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 4107858 ']' 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:03.417 [2024-12-05 11:53:37.219946] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:10:03.417 [2024-12-05 11:53:37.219991] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:03.417 [2024-12-05 11:53:37.298105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:03.417 [2024-12-05 11:53:37.339201] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:03.417 [2024-12-05 11:53:37.339238] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:03.417 [2024-12-05 11:53:37.339245] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:03.417 [2024-12-05 11:53:37.339250] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:03.417 [2024-12-05 11:53:37.339255] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:03.417 [2024-12-05 11:53:37.340609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:03.417 [2024-12-05 11:53:37.340714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:03.417 [2024-12-05 11:53:37.340714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:10:03.417 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:03.676 [2024-12-05 11:53:37.650909] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:03.676 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:03.935 11:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:03.935 [2024-12-05 11:53:38.032277] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:03.935 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:04.194 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:10:04.452 Malloc0 00:10:04.452 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:04.452 Delay0 00:10:04.711 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:04.711 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:04.970 NULL1 00:10:04.970 11:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:05.229 11:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:05.229 11:53:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=4108274 00:10:05.229 11:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4108274 00:10:05.229 11:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.488 Read completed with error (sct=0, sc=11) 00:10:05.488 11:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.488 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:05.488 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:05.488 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:05.488 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:05.488 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:05.488 11:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:05.488 11:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:05.747 true 00:10:05.747 11:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4108274 00:10:05.747 11:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.685 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:06.685 11:53:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:06.685 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:06.685 11:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:06.944 true 00:10:06.944 11:53:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4108274 00:10:06.944 11:53:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.203 11:53:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:07.461 11:53:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:07.461 11:53:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:07.461 true 00:10:07.721 11:53:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4108274 00:10:07.721 11:53:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.659 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:08.659 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.917 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:08.917 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:08.917 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:08.917 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:08.917 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:08.917 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:08.917 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:08.917 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:09.176 true 00:10:09.176 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4108274 00:10:09.176 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.113 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.113 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:10.113 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:10.113 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 
1005 00:10:10.371 true 00:10:10.371 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4108274 00:10:10.371 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.630 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.889 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:10.889 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:10.889 true 00:10:10.889 11:53:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4108274 00:10:10.889 11:53:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.264 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:12.264 11:53:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:12.264 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:12.264 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:12.264 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:12.264 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:12.264 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:10:12.264 11:53:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:10:12.264 11:53:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:12.522 true 00:10:12.522 11:53:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4108274 00:10:12.522 11:53:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.458 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.458 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:13.458 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:13.717 true 00:10:13.717 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4108274 00:10:13.717 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.975 11:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.233 11:53:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:14.233 11:53:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:14.233 true 00:10:14.233 11:53:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4108274 00:10:14.233 11:53:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.649 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:15.649 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.649 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:15.649 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:15.649 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:15.649 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:15.649 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:15.649 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:15.649 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:15.939 true 00:10:15.939 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4108274 00:10:15.939 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.588 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.868 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:16.868 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:16.868 true 00:10:17.148 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4108274 00:10:17.148 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.148 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.406 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:17.406 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:17.665 true 00:10:17.665 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4108274 00:10:17.665 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.600 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:18.600 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.600 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:18.600 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:18.859 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:18.859 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:18.859 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:18.859 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:18.859 11:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:19.118 true 00:10:19.118 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4108274 00:10:19.118 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.053 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.053 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:20.053 11:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:20.053 11:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:20.312 true 00:10:20.312 11:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4108274 00:10:20.312 11:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.570 11:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.570 11:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:20.570 11:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:20.828 true 00:10:20.828 11:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4108274 00:10:20.828 11:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.203 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:22.203 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.203 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:22.204 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:22.204 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:22.204 Message suppressed 
999 times: Read completed with error (sct=0, sc=11) 00:10:22.204 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:22.204 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:22.204 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:22.463 true 00:10:22.463 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4108274 00:10:22.463 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.398 11:53:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.398 11:53:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:23.398 11:53:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:23.656 true 00:10:23.656 11:53:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4108274 00:10:23.656 11:53:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.656 11:53:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:10:23.915 11:53:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:23.915 11:53:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:24.172 true 00:10:24.173 11:53:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4108274 00:10:24.173 11:53:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.546 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:25.546 11:53:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.546 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:25.546 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:25.546 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:25.546 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:25.546 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:25.546 11:53:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:25.546 11:53:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:25.805 true 00:10:25.805 11:53:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4108274 00:10:25.805 11:53:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.738 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.738 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:26.738 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:26.996 true 00:10:26.996 11:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4108274 00:10:26.996 11:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.261 11:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.261 11:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:27.261 11:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:27.519 true 00:10:27.519 11:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4108274 00:10:27.519 11:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:10:28.454 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:28.455 11:54:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.713 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:28.713 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:28.713 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:28.713 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:28.713 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:28.713 11:54:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:28.713 11:54:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:28.972 true 00:10:28.972 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4108274 00:10:28.972 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.907 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.907 11:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:29.907 11:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1023 00:10:30.166 true 00:10:30.166 11:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4108274 00:10:30.166 11:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.425 11:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.703 11:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:30.703 11:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:30.703 true 00:10:30.703 11:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4108274 00:10:30.703 11:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.076 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:32.076 11:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.076 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:32.076 11:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:32.077 11:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:32.335 true 00:10:32.335 11:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4108274 00:10:32.335 11:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.335 11:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.593 11:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:32.593 11:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:32.851 true 00:10:32.851 11:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4108274 00:10:32.851 11:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.784 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:33.784 11:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.041 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:34.041 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:34.041 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:34.041 Message suppressed 
999 times: Read completed with error (sct=0, sc=11) 00:10:34.041 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:34.041 11:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:34.041 11:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:34.299 true 00:10:34.299 11:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4108274 00:10:34.299 11:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.233 11:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:35.233 11:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:35.233 11:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:10:35.491 Initializing NVMe Controllers
00:10:35.491 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:35.491 Controller IO queue size 128, less than required.
00:10:35.491 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:35.491 Controller IO queue size 128, less than required.
00:10:35.491 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:35.491 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:35.491 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:10:35.491 Initialization complete. Launching workers.
00:10:35.491 ========================================================
00:10:35.491                                                                                                      Latency(us)
00:10:35.491 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:10:35.491 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2069.35       1.01   42815.59    1829.04 1048280.26
00:10:35.491 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   18006.55       8.79    7108.10    1547.17  447188.93
00:10:35.491 ========================================================
00:10:35.491 Total                                                                    :   20075.89       9.80   10788.69    1547.17 1048280.26
00:10:35.491
00:10:35.491 true 00:10:35.491 11:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4108274 00:10:35.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (4108274) - No such process 00:10:35.491 11:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 4108274 00:10:35.492 11:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.750 11:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:36.009 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:10:36.009 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:10:36.009 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:10:36.009 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:36.009 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:10:36.009 null0 00:10:36.009 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:36.009 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:36.009 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:10:36.267 null1 00:10:36.267 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:36.267 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:36.267 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:10:36.527 null2 00:10:36.527 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:36.527 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:36.527 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:36.786 null3 00:10:36.786 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:36.786 11:54:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:36.786 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:36.786 null4 00:10:37.044 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:37.044 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:37.044 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:37.044 null5 00:10:37.044 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:37.044 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:37.044 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:37.303 null6 00:10:37.303 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:37.303 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:37.303 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:37.563 null7 00:10:37.563 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:37.563 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < 
nthreads )) 00:10:37.563 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:37.563 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:37.563 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:37.563 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:37.563 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:37.563 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:37.563 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:37.563 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:37.563 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.563 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:37.563 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:37.563 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:37.563 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:37.563 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:37.563 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:37.563 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:37.563 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
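The xtrace records above come from the `add_remove` function in SPDK's target/ns_hotplug_stress.sh: several background workers each repeatedly attach and detach one namespace on the same subsystem via rpc.py, their PIDs are collected in `pids`, and the driver later `wait`s on them. The following is a minimal stand-alone sketch of that pattern; `do_rpc` is a placeholder that only echoes the command (a real run would invoke scripts/rpc.py against a live nvmf target), and the thread count and iteration count are taken from the trace (`nthreads`, `i < 10`).

```shell
#!/usr/bin/env bash
# Hedged sketch of the namespace hot-plug stress pattern seen in the trace.
# do_rpc is a stand-in for /path/to/spdk/scripts/rpc.py, which needs a
# running SPDK nvmf target; here it just prints the RPC it would issue.
do_rpc() { echo "rpc.py $*"; }

# One worker: add then remove the same namespace, 10 times in a row.
add_remove() {
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        do_rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        do_rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}

nthreads=8
pids=()
for ((t = 0; t < nthreads; t++)); do
    add_remove "$((t + 1))" "null$t" &  # one concurrent worker per namespace
    pids+=($!)                          # remember each worker's PID
done
wait "${pids[@]}"                       # block until all workers finish
```

Because the eight workers run concurrently against one subsystem, their add/remove RPCs interleave arbitrarily, which is exactly the out-of-order sequence of `nvmf_subsystem_add_ns`/`nvmf_subsystem_remove_ns` calls visible in the log.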
00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 4114304 4114306 4114309 4114313 4114316 4114319 4114322 4114326 00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.564 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:37.823 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:37.823 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.823 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:37.823 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:37.823 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:37.823 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:37.823 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:37.823 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:38.082 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.082 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.082 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:38.082 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.082 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.082 
11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:38.082 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.082 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.082 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:38.082 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.082 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.082 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:38.082 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.082 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.082 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.082 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.082 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:38.082 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:38.082 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.082 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.082 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:38.082 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.082 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.082 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:38.082 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.082 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:38.082 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:38.082 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:38.082 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:38.082 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:38.082 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:38.082 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:38.341 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.341 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.341 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:38.341 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.341 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.341 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:38.341 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.341 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.342 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:38.342 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.342 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.342 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.342 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.342 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:38.342 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:38.342 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.342 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.342 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:38.342 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.342 11:54:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.342 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:38.342 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.342 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.342 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:38.600 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.600 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:38.600 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:38.600 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:38.600 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:38.600 11:54:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:38.600 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:38.600 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:38.859 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.859 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.859 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:38.859 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.859 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.859 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:38.859 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.859 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.859 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:38.859 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.859 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.859 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.859 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:38.859 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.859 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:38.859 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.859 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.859 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:38.859 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.859 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.859 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:38.859 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.859 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.859 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:38.859 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:39.118 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:39.118 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.118 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:39.118 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:39.118 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:39.118 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:39.118 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:39.118 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.118 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.118 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:39.118 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.118 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.118 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:39.118 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.118 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.118 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:39.118 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.118 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.118 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:39.118 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.118 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.118 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:39.118 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.118 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.118 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:39.118 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.118 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.118 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:39.118 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.118 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.118 11:54:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:39.377 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:39.377 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:39.377 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:39.377 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:39.377 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:39.377 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:39.377 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.377 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:39.636 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.636 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.636 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:39.636 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.636 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.637 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:39.637 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.637 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.637 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:39.637 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.637 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.637 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:39.637 11:54:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.637 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.637 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:39.637 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.637 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.637 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:39.637 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.637 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.637 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:39.637 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.637 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.637 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:39.896 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:39.896 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:39.896 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:39.896 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:39.896 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:39.896 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:39.896 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.896 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:39.896 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.896 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.896 11:54:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:39.896 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.896 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.896 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:39.896 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.896 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.896 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:39.896 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.896 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.896 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:39.896 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.896 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.896 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:40.155 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.155 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.155 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:40.155 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.155 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.155 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:40.156 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.156 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.156 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:40.156 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:40.156 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:10:40.156 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:40.156 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.156 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:40.156 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:40.156 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:40.156 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:40.414 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.414 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.414 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:40.414 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:10:40.414 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.414 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.414 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.414 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:40.415 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:40.415 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.415 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.415 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.415 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:40.415 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.415 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:40.415 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.415 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.415 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:40.415 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.415 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.415 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:40.415 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.415 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.415 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:40.673 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:40.673 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:40.673 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:40.673 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:40.673 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:40.673 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:40.673 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:40.673 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.933 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.933 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.933 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:40.933 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.933 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.933 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:10:40.933 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.933 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.933 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:40.933 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.933 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.933 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:40.933 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.933 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.933 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:40.933 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.933 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.933 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:40.933 11:54:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.933 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.933 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.933 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.933 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:40.933 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:40.933 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:40.933 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:40.933 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:40.933 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:40.933 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:41.190 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.190 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:41.190 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:41.190 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.190 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.190 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:41.190 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.190 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.190 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:41.190 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.190 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 
10 )) 00:10:41.190 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:41.190 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.190 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.190 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:41.190 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.190 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.190 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:41.190 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.190 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.190 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:41.190 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.190 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.190 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:41.190 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.190 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.190 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:41.448 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:41.448 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:41.448 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:41.448 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:41.448 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:41.448 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:10:41.448 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:41.448 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:41.707 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.707 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.707 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.707 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.707 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.707 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.707 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.707 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.707 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.707 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.707 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.707 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.707 11:54:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.707 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.707 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.707 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.707 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:41.707 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:41.707 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # nvmfcleanup 00:10:41.707 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@99 -- # sync 00:10:41.707 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:10:41.707 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # set +e 00:10:41.707 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # for i in {1..20} 00:10:41.707 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:10:41.707 rmmod nvme_tcp 00:10:41.707 rmmod nvme_fabrics 00:10:41.707 rmmod nvme_keyring 00:10:41.707 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:10:41.707 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # set -e 00:10:41.708 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # return 0 00:10:41.708 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # '[' -n 4107858 ']' 00:10:41.708 11:54:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@337 -- # killprocess 4107858 00:10:41.708 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 4107858 ']' 00:10:41.708 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 4107858 00:10:41.708 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:10:41.708 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.708 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4107858 00:10:41.708 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:41.708 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:41.708 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4107858' 00:10:41.708 killing process with pid 4107858 00:10:41.708 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 4107858 00:10:41.708 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 4107858 00:10:41.967 11:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:10:41.967 11:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # nvmf_fini 00:10:41.967 11:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@264 -- # local dev 00:10:41.967 11:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@267 -- # remove_target_ns 00:10:41.967 11:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@323 -- # 
xtrace_disable_per_cmd _remove_target_ns 00:10:41.967 11:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:10:41.967 11:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@268 -- # delete_main_bridge 00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@130 -- # return 0 00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@275 -- 
# (( 4 == 3 ))
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns=
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@222 -- # [[ -n '' ]]
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1'
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@283 -- # reset_setup_interfaces
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@41 -- # _dev=0
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@41 -- # dev_map=()
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@284 -- # iptr
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@542 -- # iptables-save
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@542 -- # iptables-restore
00:10:44.505
00:10:44.505 real	0m47.355s
00:10:44.505 user	3m12.578s
00:10:44.505 sys	0m15.651s
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:10:44.505 ************************************
00:10:44.505 END TEST nvmf_ns_hotplug_stress
00:10:44.505 ************************************
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:10:44.505 ************************************
00:10:44.505 START TEST nvmf_delete_subsystem
00:10:44.505 ************************************
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:10:44.505 * Looking for test storage...
00:10:44.505 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:44.505 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:10:44.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:44.506 --rc genhtml_branch_coverage=1
00:10:44.506 --rc genhtml_function_coverage=1
00:10:44.506 --rc genhtml_legend=1
00:10:44.506 --rc geninfo_all_blocks=1
00:10:44.506 --rc geninfo_unexecuted_blocks=1
00:10:44.506
00:10:44.506 '
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:10:44.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:44.506 --rc genhtml_branch_coverage=1
00:10:44.506 --rc genhtml_function_coverage=1
00:10:44.506 --rc genhtml_legend=1
00:10:44.506 --rc geninfo_all_blocks=1
00:10:44.506 --rc geninfo_unexecuted_blocks=1
00:10:44.506
00:10:44.506 '
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:10:44.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:44.506 --rc genhtml_branch_coverage=1
00:10:44.506 --rc genhtml_function_coverage=1
00:10:44.506 --rc genhtml_legend=1
00:10:44.506 --rc geninfo_all_blocks=1
00:10:44.506 --rc geninfo_unexecuted_blocks=1
00:10:44.506
00:10:44.506 '
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:10:44.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:44.506 --rc genhtml_branch_coverage=1
00:10:44.506 --rc genhtml_function_coverage=1
00:10:44.506 --rc genhtml_legend=1
00:10:44.506 --rc geninfo_all_blocks=1
00:10:44.506 --rc geninfo_unexecuted_blocks=1
00:10:44.506
00:10:44.506 '
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS=
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # nvme gen-hostnqn
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect'
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NET_TYPE=phy
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=()
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@50 -- # : 0
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # build_nvmf_app_args
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']'
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']'
00:10:44.506 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' -n '' ']'
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']'
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # have_pci_nics=0
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # '[' -z tcp ']'
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # prepare_net_devs
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # local -g is_hw=no
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # remove_target_ns
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null'
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_target_ns
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # [[ phy != virt ]]
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # xtrace_disable
00:10:44.506 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@131 -- # pci_devs=()
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@131 -- # local -a pci_devs
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@132 -- # pci_net_devs=()
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@132 -- # local -a pci_net_devs
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@133 -- # pci_drivers=()
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@133 -- # local -A pci_drivers
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@135 -- # net_devs=()
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@135 -- # local -ga net_devs
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@136 -- # e810=()
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@136 -- # local -ga e810
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@137 -- # x722=()
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@137 -- # local -ga x722
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@138 -- # mlx=()
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@138 -- # local -ga mlx
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}")
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # [[ tcp == rdma ]]
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]]
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # [[ e810 == e810 ]]
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}")
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # (( 2 == 0 ))
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}"
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:10:51.205 Found 0000:86:00.0 (0x8086 - 0x159b)
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]]
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]]
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@194 -- # [[ tcp == rdma ]]
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}"
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:10:51.205 Found 0000:86:00.1 (0x8086 - 0x159b)
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]]
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]]
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@194 -- # [[ tcp == rdma ]]
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # (( 0 > 0 ))
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # [[ e810 == e810 ]]
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # [[ tcp == rdma ]]
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}"
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]]
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # [[ up == up ]]
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # (( 1 == 0 ))
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:10:51.205 Found net devices under 0000:86:00.0: cvl_0_0
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}")
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}"
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]]
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # [[ up == up ]]
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # (( 1 == 0 ))
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:10:51.205 Found net devices under 0000:86:00.1: cvl_0_1
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}")
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # (( 2 == 0 ))
00:10:51.205 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # [[ tcp == rdma ]]
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # is_hw=yes
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # [[ yes == yes ]]
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # [[ tcp == tcp ]]
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # nvmf_tcp_init
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@257 -- # create_target_ns
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up'
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@27 -- # local -gA dev_map
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@28 -- # local -g _dev
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 ))
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev ))
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev < max + no ))
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # ips=()
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns=
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip)))
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]]
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]]
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@61 -- # [[ phy == phy ]]
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@64 -- # initiator=cvl_0_0
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@64 -- # target=cvl_0_1
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@67 -- # [[ phy == veth ]]
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@68 -- # [[ phy == veth ]]
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]]
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns=
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # val_to_ip 167772161
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772161
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip=10.0.0.1
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0'
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias'
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # echo 10.0.0.1
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias
00:10:51.206 10.0.0.1
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # val_to_ip 167772162
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772162
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip=10.0.0.2
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1'
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias'
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # echo 10.0.0.2
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias
00:10:51.206 10.0.0.2
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@75 -- # set_up cvl_0_0
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns=
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@215 -- # [[ -n '' ]]
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up'
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up'
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@78 -- # [[ phy == veth ]]
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@79 -- # [[ phy == veth ]]
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]]
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT'
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 ))
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev < max + no ))
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@38 -- # ping_ips 1
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@96 -- # local pairs=1 pair
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # (( pair = 0 ))
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # (( pair < pairs ))
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@183 -- # get_ip_address initiator0
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n '' ]]
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev initiator0
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=initiator0
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]]
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]]
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@110 -- # echo cvl_0_0
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev=cvl_0_0
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip=10.0.0.1
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]]
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@175 -- # echo 10.0.0.1
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD
00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1'
00:10:51.206 11:54:24
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:10:51.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:51.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.427 ms 00:10:51.206 00:10:51.206 --- 10.0.0.1 ping statistics --- 00:10:51.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.206 rtt min/avg/max/mdev = 0.427/0.427/0.427/0.000 ms 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev target0 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=target0 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:10:51.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:51.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:10:51.206 00:10:51.206 --- 10.0.0.2 ping statistics --- 00:10:51.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.206 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # (( pair++ )) 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # return 0 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:10:51.206 11:54:24 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=initiator0 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:51.206 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev initiator1 
00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=initiator1 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # return 1 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev= 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@169 -- # return 0 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev target0 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=target0 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev target1 00:10:51.207 11:54:24 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=target1 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # return 1 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev= 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@169 -- # return 0 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:51.207 11:54:24 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # nvmfpid=4118836 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # waitforlisten 4118836 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 4118836 ']' 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:51.207 [2024-12-05 11:54:24.653925] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:10:51.207 [2024-12-05 11:54:24.653968] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.207 [2024-12-05 11:54:24.728440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:51.207 [2024-12-05 11:54:24.769187] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:51.207 [2024-12-05 11:54:24.769220] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:51.207 [2024-12-05 11:54:24.769227] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:51.207 [2024-12-05 11:54:24.769233] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:51.207 [2024-12-05 11:54:24.769238] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:51.207 [2024-12-05 11:54:24.770405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:51.207 [2024-12-05 11:54:24.770408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:51.207 [2024-12-05 11:54:24.915148] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:51.207 [2024-12-05 11:54:24.935381] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:51.207 NULL1 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.207 11:54:24 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:51.207 Delay0 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=4118864 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:10:51.207 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:51.207 [2024-12-05 11:54:25.046269] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:10:53.115 11:54:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:53.115 11:54:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.115 11:54:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:53.115 Write completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Write completed with error (sct=0, sc=8) 00:10:53.115 starting I/O failed: -6 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 starting I/O failed: -6 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 starting I/O failed: -6 00:10:53.115 Write completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 starting I/O failed: -6 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Write completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 starting I/O failed: -6 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Write completed with error (sct=0, sc=8) 00:10:53.115 Write completed with error (sct=0, sc=8) 00:10:53.115 starting I/O failed: -6 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error 
(sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 starting I/O failed: -6 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 starting I/O failed: -6 00:10:53.115 Write completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Write completed with error (sct=0, sc=8) 00:10:53.115 Write completed with error (sct=0, sc=8) 00:10:53.115 starting I/O failed: -6 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Write completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Write completed with error (sct=0, sc=8) 00:10:53.115 starting I/O failed: -6 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 starting I/O failed: -6 00:10:53.115 [2024-12-05 11:54:27.162581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1381680 is same with the state(6) to be set 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Write completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Write completed with error (sct=0, sc=8) 00:10:53.115 Write completed with error (sct=0, sc=8) 00:10:53.115 Write completed with error (sct=0, sc=8) 00:10:53.115 Read completed with 
error (sct=0, sc=8) 00:10:53.115 Write completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Write completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Write completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Write completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Write completed with error (sct=0, sc=8) 00:10:53.115 Write completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Write completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Write completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Write completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 00:10:53.115 Read completed with error (sct=0, sc=8) 
00:10:53.115-00:10:54.055 [repeated "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" entries, interspersed with "starting I/O failed: -6"; elided]
00:10:53.116 [2024-12-05 11:54:27.165453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f116800d350 is same with the state(6) to be set
00:10:54.054 [2024-12-05 11:54:28.140928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13829b0 is same with the state(6) to be set
00:10:54.054 [2024-12-05 11:54:28.166573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13814a0 is same with the state(6) to be set
00:10:54.054 [2024-12-05 11:54:28.166775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1381860 is same with the state(6) to be set
00:10:54.054 [2024-12-05 11:54:28.168029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f116800d680 is same with the state(6) to be set
00:10:54.055 [2024-12-05 11:54:28.168560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f116800d020 is same with the state(6) to be set
00:10:54.055 Initializing NVMe Controllers
00:10:54.055 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:54.055 Controller IO queue size 128, less than required.
00:10:54.055 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:54.055 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:10:54.055 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:10:54.055 Initialization complete.
Launching workers.
00:10:54.055 ========================================================
00:10:54.055 Latency(us)
00:10:54.055 Device Information : IOPS MiB/s Average min max
00:10:54.055 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 167.18 0.08 899318.67 268.76 1007256.52
00:10:54.055 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 159.22 0.08 951100.66 218.67 2002004.50
00:10:54.055 ========================================================
00:10:54.055 Total : 326.40 0.16 924578.18 218.67 2002004.50
00:10:54.055
00:10:54.055 [2024-12-05 11:54:28.169077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13829b0 (9): Bad file descriptor
00:10:54.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:10:54.055 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:54.055 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:10:54.055 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4118864
00:10:54.055 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:10:54.623 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:10:54.623 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4118864
00:10:54.623 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (4118864) - No such process
00:10:54.624 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 4118864
00:10:54.624 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:10:54.624 11:54:28
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 4118864 00:10:54.624 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:10:54.624 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:54.624 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:10:54.624 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:54.624 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 4118864 00:10:54.624 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:10:54.624 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:54.624 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:54.624 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:54.624 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:54.624 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.624 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:54.624 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.624 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:54.624 
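The `NOT wait 4118864` xtrace above exercises a helper that validates its argument with `type -t` and then inverts the command's exit status (tracking it in `es`). A minimal standalone sketch of the inversion step (simplified; the real `autotest_common.sh` helper also distinguishes signal-range exit codes, so this is illustrative only):

```shell
#!/usr/bin/env bash
# Assert that a command fails: succeed iff "$@" exits non-zero.
# Simplified sketch of the NOT pattern seen in the xtrace output above.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded
    fi
    return 0        # command failed, as the test expects
}
```

Used as `NOT wait "$pid"`, it turns the expected failure of `wait` on a reaped PID into a passing assertion.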
11:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.624 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:54.624 [2024-12-05 11:54:28.699317] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:54.624 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.624 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:54.624 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.624 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:54.624 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.624 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=4119551 00:10:54.624 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:10:54.624 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:54.624 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4119551 00:10:54.624 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:54.624 [2024-12-05 11:54:28.789314] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to 
the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:10:55.192 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:55.192 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4119551 00:10:55.192 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:55.759 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:55.759 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4119551 00:10:55.759 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:56.327 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:56.327 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4119551 00:10:56.327 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:56.586 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:56.586 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4119551 00:10:56.586 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:57.154 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:57.154 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4119551 00:10:57.154 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:57.722 11:54:31 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:10:57.722 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4119551
00:10:57.722 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:10:57.981 Initializing NVMe Controllers
00:10:57.981 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:57.981 Controller IO queue size 128, less than required.
00:10:57.981 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:57.981 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:10:57.981 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:10:57.981 Initialization complete. Launching workers.
00:10:57.981 ========================================================
00:10:57.981 Latency(us)
00:10:57.981 Device Information : IOPS MiB/s Average min max
00:10:57.981 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002472.50 1000129.52 1040918.99
00:10:57.982 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003299.62 1000131.60 1011187.10
00:10:57.982 ========================================================
00:10:57.982 Total : 256.00 0.12 1002886.06 1000129.52 1040918.99
00:10:57.982
00:10:58.241 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:10:58.241 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4119551
00:10:58.241 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (4119551) - No such process
00:10:58.241 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- #
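The loop traced above probes the perf process with `kill -0` every 0.5 s until either it exits or the `(( delay++ > 20 ))` guard trips. That pattern can be sketched as a standalone bash helper; the function name and default poll budget are illustrative, not part of `delete_subsystem.sh`:

```shell
#!/usr/bin/env bash
# Poll until a PID exits, as the delay loop in the trace does while
# spdk_nvme_perf runs. kill -0 sends no signal; it only checks that the
# process still exists. Names and the default budget are illustrative.
wait_for_exit() {
    local pid=$1 max_polls=${2:-20} delay=0
    while kill -0 "$pid" 2>/dev/null; do
        if (( delay++ > max_polls )); then
            return 1    # process outlived the poll budget
        fi
        sleep 0.5
    done
    return 0            # process has exited (or never existed)
}
```

Called as `wait_for_exit "$perf_pid" 20`, it returns 0 once the process is gone, mirroring the bounded `kill -0` / `sleep 0.5` cycle in the log.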
wait 4119551 00:10:58.241 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:58.241 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:10:58.241 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # nvmfcleanup 00:10:58.241 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@99 -- # sync 00:10:58.241 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:10:58.241 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # set +e 00:10:58.241 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # for i in {1..20} 00:10:58.241 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:10:58.241 rmmod nvme_tcp 00:10:58.241 rmmod nvme_fabrics 00:10:58.241 rmmod nvme_keyring 00:10:58.241 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:10:58.241 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # set -e 00:10:58.241 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # return 0 00:10:58.241 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # '[' -n 4118836 ']' 00:10:58.241 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@337 -- # killprocess 4118836 00:10:58.241 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 4118836 ']' 00:10:58.241 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 4118836 00:10:58.241 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:10:58.241 11:54:32 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:58.241 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4118836 00:10:58.241 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:58.241 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:58.241 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4118836' 00:10:58.241 killing process with pid 4118836 00:10:58.241 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 4118836 00:10:58.242 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 4118836 00:10:58.501 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:10:58.501 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # nvmf_fini 00:10:58.501 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@264 -- # local dev 00:10:58.501 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@267 -- # remove_target_ns 00:10:58.501 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:10:58.501 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:10:58.501 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:00.408 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@268 -- # delete_main_bridge 00:11:00.408 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
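The `killprocess` trace above resolves the PID to a command name with `ps --no-headers -o comm=` before signalling it (and refuses to kill a process named `sudo`). That lookup step can be sketched as a tiny helper; the function name is illustrative:

```shell
#!/usr/bin/env bash
# Map a PID to its command name, the check killprocess performs before
# sending a signal. Uses GNU ps; prints nothing if the PID is gone.
proc_name() {
    ps --no-headers -o comm= "$1"
}
```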
nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:11:00.408 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@130 -- # return 0 00:11:00.408 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:11:00.408 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:11:00.408 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:11:00.408 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:11:00.408 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:11:00.408 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:11:00.408 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:11:00.408 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:11:00.408 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:11:00.408 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:11:00.408 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:11:00.408 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:11:00.408 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:11:00.408 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:11:00.408 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:11:00.408 11:54:34 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:11:00.408 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:11:00.408 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@41 -- # _dev=0 00:11:00.408 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@41 -- # dev_map=() 00:11:00.408 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@284 -- # iptr 00:11:00.408 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@542 -- # iptables-save 00:11:00.408 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:11:00.408 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@542 -- # iptables-restore 00:11:00.667 00:11:00.667 real 0m16.396s 00:11:00.667 user 0m29.317s 00:11:00.667 sys 0m5.588s 00:11:00.667 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:00.667 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:00.667 ************************************ 00:11:00.667 END TEST nvmf_delete_subsystem 00:11:00.667 ************************************ 00:11:00.667 11:54:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:00.667 11:54:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:00.667 11:54:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.667 11:54:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:00.667 ************************************ 00:11:00.667 START TEST nvmf_host_management 00:11:00.667 
************************************ 00:11:00.667 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:00.667 * Looking for test storage... 00:11:00.667 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:00.667 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:00.667 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:11:00.667 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:00.667 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:00.667 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:00.667 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:00.667 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:00.668 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:11:00.668 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:11:00.668 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:11:00.668 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:11:00.668 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:11:00.668 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:11:00.668 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # 
ver2_l=1 00:11:00.668 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:00.668 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:11:00.668 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:11:00.668 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:00.668 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:00.668 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:11:00.668 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:11:00.668 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:00.668 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:11:00.668 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:11:00.668 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:11:00.668 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:11:00.668 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:00.668 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:11:00.668 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:11:00.668 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:00.668 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:00.668 11:54:34 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:11:00.668 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:00.668 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:00.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.668 --rc genhtml_branch_coverage=1 00:11:00.668 --rc genhtml_function_coverage=1 00:11:00.668 --rc genhtml_legend=1 00:11:00.668 --rc geninfo_all_blocks=1 00:11:00.668 --rc geninfo_unexecuted_blocks=1 00:11:00.668 00:11:00.668 ' 00:11:00.668 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:00.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.668 --rc genhtml_branch_coverage=1 00:11:00.668 --rc genhtml_function_coverage=1 00:11:00.668 --rc genhtml_legend=1 00:11:00.668 --rc geninfo_all_blocks=1 00:11:00.668 --rc geninfo_unexecuted_blocks=1 00:11:00.668 00:11:00.668 ' 00:11:00.668 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:00.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.668 --rc genhtml_branch_coverage=1 00:11:00.668 --rc genhtml_function_coverage=1 00:11:00.668 --rc genhtml_legend=1 00:11:00.668 --rc geninfo_all_blocks=1 00:11:00.668 --rc geninfo_unexecuted_blocks=1 00:11:00.668 00:11:00.668 ' 00:11:00.668 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:00.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.668 --rc genhtml_branch_coverage=1 00:11:00.668 --rc genhtml_function_coverage=1 00:11:00.668 --rc genhtml_legend=1 00:11:00.668 --rc geninfo_all_blocks=1 00:11:00.668 --rc geninfo_unexecuted_blocks=1 00:11:00.668 00:11:00.668 ' 
00:11:00.668 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:00.668 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:11:00.668 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:00.668 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:00.668 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:00.668 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:00.668 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:00.668 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:11:00.668 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:00.668 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:11:00.927 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:00.927 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:00.927 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:00.927 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:11:00.927 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:11:00.927 11:54:34 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:00.927 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:00.927 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:11:00.927 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.927 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.927 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.927 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.927 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.927 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.927 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:11:00.927 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.927 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:11:00.927 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:11:00.927 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:00.927 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:11:00.927 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@50 -- # : 0 00:11:00.927 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:11:00.927 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:11:00.927 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:11:00.927 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:00.927 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:00.928 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:11:00.928 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:11:00.928 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:11:00.928 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:11:00.928 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@54 -- # have_pci_nics=0 00:11:00.928 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:00.928 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:00.928 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:11:00.928 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:11:00.928 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:00.928 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # prepare_net_devs 00:11:00.928 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # local -g is_hw=no 00:11:00.928 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # remove_target_ns 00:11:00.928 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:00.928 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:00.928 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:00.928 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:11:00.928 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:11:00.928 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # xtrace_disable 00:11:00.928 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@131 -- # pci_devs=() 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@131 -- # local -a pci_devs 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@132 -- # pci_net_devs=() 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@133 -- # pci_drivers=() 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@133 -- # local -A pci_drivers 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@135 -- # net_devs=() 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@135 -- # local -ga net_devs 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@136 -- # e810=() 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@136 -- # local -ga e810 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@137 -- # x722=() 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@137 -- # local -ga x722 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@138 -- # mlx=() 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@138 -- # local -ga mlx 00:11:07.551 11:54:40 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # [[ e810 == mlx5 
]] 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:07.551 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:07.551 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:07.551 11:54:40 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:11:07.551 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # [[ up == up ]] 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:07.552 Found net devices under 0000:86:00.0: cvl_0_0 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # [[ up == up ]] 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:07.552 Found net devices under 0000:86:00.1: cvl_0_1 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # is_hw=yes 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@257 -- # create_target_ns 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@142 -- # local 
ns=nvmf_ns_spdk 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@27 -- # local -gA dev_map 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@28 -- # local -g _dev 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:11:07.552 11:54:40 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@44 -- # ips=() 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 
00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772161 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:11:07.552 10.0.0.1 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- 
# local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772162 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:11:07.552 10.0.0.2 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # eval ' 
ip link set cvl_0_0 up' 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:11:07.552 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:11:07.553 11:54:40 
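The `val_to_ip` calls traced above (`nvmf/setup.sh@11`–`@13`) turn the 32-bit pool values 167772161 and 167772162 into `10.0.0.1` and `10.0.0.2`. The conversion is plain shift-and-mask arithmetic; a minimal re-implementation (same idea, not the exact SPDK source):

```shell
# val_to_ip: unpack a 32-bit unsigned integer into dotted-quad notation,
# most significant byte first — e.g. 167772161 (0x0A000001) → 10.0.0.1.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) \
        $((  val        & 0xff ))
}

val_to_ip 167772161   # → 10.0.0.1
val_to_ip 167772162   # → 10.0.0.2
```

This is why the setup loop in the trace can hand out initiator/target address pairs just by incrementing `ip_pool` by 2 per interface pair.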
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@38 -- # ping_ips 1 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=initiator0 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:11:07.553 11:54:40 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:11:07.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:07.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.420 ms 00:11:07.553 00:11:07.553 --- 10.0.0.1 ping statistics --- 00:11:07.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.553 rtt min/avg/max/mdev = 0.420/0.420/0.420/0.000 ms 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev target0 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=target0 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # ip 
netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:11:07.553 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:07.553 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:11:07.553 00:11:07.553 --- 10.0.0.2 ping statistics --- 00:11:07.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.553 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # (( pair++ )) 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # return 0 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/setup.sh@107 -- # local dev=initiator0 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management 
-- nvmf/setup.sh@107 -- # local dev=initiator1 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # return 1 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # dev= 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@169 -- # return 0 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev target0 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=target0 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:11:07.553 11:54:40 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:11:07.553 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:11:07.554 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:11:07.554 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:11:07.554 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:11:07.554 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:11:07.554 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:07.554 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:11:07.554 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:11:07.554 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:11:07.554 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:11:07.554 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:07.554 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:07.554 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev target1 00:11:07.554 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=target1 00:11:07.554 
11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:11:07.554 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:11:07.554 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # return 1 00:11:07.554 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # dev= 00:11:07.554 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@169 -- # return 0 00:11:07.554 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:11:07.554 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:07.554 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:11:07.554 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:11:07.554 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:07.554 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:11:07.554 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:11:07.554 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:11:07.554 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:11:07.554 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:07.554 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:11:07.554 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:07.554 11:54:40 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:07.554 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # nvmfpid=4123798 00:11:07.554 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # waitforlisten 4123798 00:11:07.554 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:07.554 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 4123798 ']' 00:11:07.554 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.554 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:07.554 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.554 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:07.554 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:07.554 [2024-12-05 11:54:41.040284] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
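The address discovery traced in setup.sh earlier in this section reduces to one pattern: resolve the logical device name (initiator0/target0) to its kernel interface (cvl_0_0/cvl_0_1) and read the IP that setup stored in the interface's `ifalias`, wrapping the read in `ip netns exec nvmf_ns_spdk` for target-side devices. A minimal sketch of that lookup, assuming the behavior shown in the trace; the function name and the extra sysfs-root parameter (added here only for testability) are illustrative, not the real `get_ip_address`/`get_net_dev` helpers:

```shell
# Illustrative re-creation of the setup.sh ifalias lookup traced above.
# dev:   resolved kernel interface, e.g. cvl_0_0
# netns: optional namespace, e.g. nvmf_ns_spdk (empty = host namespace)
# sysfs: sysfs root, parameterized here for testing (hypothetical knob)
get_ip_address() {
    local dev=$1 netns=${2:-} sysfs=${3:-/sys/class/net} cmd=()
    # Target devices live inside the nvmf_ns_spdk namespace in the log,
    # hence the optional "ip netns exec" prefix.
    [[ -n $netns ]] && cmd=(ip netns exec "$netns")
    local ip
    ip=$("${cmd[@]}" cat "$sysfs/$dev/ifalias" 2>/dev/null)
    # Only echo when an alias was actually set (mirrors the [[ -n $ip ]] check).
    [[ -n $ip ]] && echo "$ip"
}
```

With no netns argument this matches the initiator0 path above (`cat /sys/class/net/cvl_0_0/ifalias` → 10.0.0.1); passing `nvmf_ns_spdk` reproduces the `ip netns exec ... cat .../ifalias` form used for cvl_0_1.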
00:11:07.554 [2024-12-05 11:54:41.040328] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:07.554 [2024-12-05 11:54:41.119214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:07.554 [2024-12-05 11:54:41.162282] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:07.554 [2024-12-05 11:54:41.162316] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:07.554 [2024-12-05 11:54:41.162323] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:07.554 [2024-12-05 11:54:41.162329] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:07.554 [2024-12-05 11:54:41.162334] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
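The `waitforlisten 4123798` step above blocks until the freshly launched nvmf_tgt process is up and listening on its RPC UNIX socket (/var/tmp/spdk.sock). A hedged sketch of that kind of wait loop; the function name, retry budget, and poll interval are illustrative and not SPDK's actual autotest_common.sh implementation:

```shell
# Illustrative wait loop in the spirit of waitforlisten: poll until the
# target pid has created its RPC socket, bailing out if the process dies.
wait_for_rpc_sock() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} max_retries=${3:-100}
    local i=0
    while (( i < max_retries )); do
        # If the process exited before listening, give up immediately.
        kill -0 "$pid" 2>/dev/null || return 1
        # Success once the UNIX socket node exists.
        [[ -S $sock ]] && return 0
        sleep 0.1
        (( ++i ))
    done
    return 1
}
```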
00:11:07.554 [2024-12-05 11:54:41.163752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:07.554 [2024-12-05 11:54:41.163860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:07.554 [2024-12-05 11:54:41.163942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:07.554 [2024-12-05 11:54:41.163942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:07.813 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:07.813 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:11:07.813 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:11:07.813 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:07.813 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:07.813 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:07.814 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:07.814 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.814 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:07.814 [2024-12-05 11:54:41.907339] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:07.814 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.814 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:07.814 11:54:41 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:07.814 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:07.814 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:07.814 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:11:07.814 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:11:07.814 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.814 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:07.814 Malloc0 00:11:07.814 [2024-12-05 11:54:41.986177] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:07.814 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.814 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:07.814 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:07.814 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:08.073 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=4123871 00:11:08.073 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 4123871 /var/tmp/bdevperf.sock 00:11:08.073 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 4123871 ']' 00:11:08.073 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:08.073 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:08.073 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:08.073 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:08.073 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:08.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:08.073 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # config=() 00:11:08.073 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:08.073 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # local subsystem config 00:11:08.073 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:08.073 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:11:08.074 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:11:08.074 { 00:11:08.074 "params": { 00:11:08.074 "name": "Nvme$subsystem", 00:11:08.074 "trtype": "$TEST_TRANSPORT", 00:11:08.074 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:08.074 "adrfam": "ipv4", 00:11:08.074 "trsvcid": "$NVMF_PORT", 00:11:08.074 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:08.074 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:08.074 "hdgst": ${hdgst:-false}, 
00:11:08.074 "ddgst": ${ddgst:-false} 00:11:08.074 }, 00:11:08.074 "method": "bdev_nvme_attach_controller" 00:11:08.074 } 00:11:08.074 EOF 00:11:08.074 )") 00:11:08.074 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # cat 00:11:08.074 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # jq . 00:11:08.074 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=, 00:11:08.074 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:11:08.074 "params": { 00:11:08.074 "name": "Nvme0", 00:11:08.074 "trtype": "tcp", 00:11:08.074 "traddr": "10.0.0.2", 00:11:08.074 "adrfam": "ipv4", 00:11:08.074 "trsvcid": "4420", 00:11:08.074 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:08.074 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:08.074 "hdgst": false, 00:11:08.074 "ddgst": false 00:11:08.074 }, 00:11:08.074 "method": "bdev_nvme_attach_controller" 00:11:08.074 }' 00:11:08.074 [2024-12-05 11:54:42.081803] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:11:08.074 [2024-12-05 11:54:42.081853] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4123871 ] 00:11:08.074 [2024-12-05 11:54:42.161412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.074 [2024-12-05 11:54:42.202495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.332 Running I/O for 10 seconds... 
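The gen_nvmf_target_json fragment above builds bdevperf's `--json /dev/fd/63` input by emitting one heredoc per subsystem into a bash array (`config+=("$(cat <<-EOF ... EOF)")`), then joining the fragments with `IFS=,` and validating/pretty-printing with `jq .`. A condensed sketch of that same pattern, with hypothetical function name and the transport fields fixed to the values the log resolves (`tcp`, 10.0.0.2, 4420):

```shell
# Illustrative version of the heredoc-per-subsystem JSON generation above.
# Each positional argument is a subsystem index; defaults to one subsystem 0.
gen_target_json() {
    local subsystem config=()
    for subsystem in "${@:-0}"; do
        # One JSON fragment per subsystem; $subsystem expands inside the heredoc.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Join the fragments with commas and let jq validate the result.
    local IFS=,
    printf '[%s]\n' "${config[*]}" | jq .
}
```

The comma join plus `jq .` at the end mirrors the `IFS=,` / `printf '%s\n'` / `jq .` sequence in the trace; wrapping the join in `[...]` here keeps the sketch's output valid standalone JSON.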
00:11:08.332 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:08.332 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:11:08.332 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:08.332 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.332 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:08.332 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.332 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:08.332 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:08.332 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:08.332 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:08.332 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:11:08.333 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:11:08.333 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:08.333 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:08.333 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:11:08.333 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:08.333 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.333 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:08.333 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.592 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=78 00:11:08.592 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 78 -ge 100 ']' 00:11:08.592 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:11:08.853 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:11:08.853 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:08.853 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:08.853 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:08.853 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.853 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:08.853 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.853 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:11:08.853 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:11:08.853 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:11:08.853 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:11:08.853 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:11:08.853 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:08.853 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.853 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:08.853 [2024-12-05 11:54:42.849771] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e8090 is same with the state(6) to be set
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e8090 is same with the state(6) to be set 00:11:08.854 [2024-12-05 11:54:42.850185] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e8090 is same with the state(6) to be set 00:11:08.854 [2024-12-05 11:54:42.850191] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e8090 is same with the state(6) to be set 00:11:08.854 [2024-12-05 11:54:42.850197] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e8090 is same with the state(6) to be set 00:11:08.854 [2024-12-05 11:54:42.850203] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e8090 is same with the state(6) to be set 00:11:08.854 [2024-12-05 11:54:42.850209] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e8090 is same with the state(6) to be set 00:11:08.854 [2024-12-05 11:54:42.850351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.854 [2024-12-05 11:54:42.850390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.854 [2024-12-05 11:54:42.850408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.854 [2024-12-05 11:54:42.850416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.854 [2024-12-05 11:54:42.850424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.854 [2024-12-05 11:54:42.850431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:11:08.854 [2024-12-05 11:54:42.850440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.854 [2024-12-05 11:54:42.850446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.854 [2024-12-05 11:54:42.850454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.854 [2024-12-05 11:54:42.850461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.854 [2024-12-05 11:54:42.850469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.854 [2024-12-05 11:54:42.850475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.854 [2024-12-05 11:54:42.850484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.854 [2024-12-05 11:54:42.850490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.854 [2024-12-05 11:54:42.850499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.854 [2024-12-05 11:54:42.850505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.854 [2024-12-05 11:54:42.850513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.854 [2024-12-05 11:54:42.850519] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.854 [2024-12-05 11:54:42.850527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.854 [2024-12-05 11:54:42.850534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.854 [2024-12-05 11:54:42.850542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.854 [2024-12-05 11:54:42.850548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.854 [2024-12-05 11:54:42.850556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.854 [2024-12-05 11:54:42.850568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.854 [2024-12-05 11:54:42.850576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.855 [2024-12-05 11:54:42.850582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.855 [2024-12-05 11:54:42.850590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.855 [2024-12-05 11:54:42.850597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.855 [2024-12-05 11:54:42.850605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.855 [2024-12-05 11:54:42.850611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.855 [2024-12-05 11:54:42.850619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.855 [2024-12-05 11:54:42.850626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.855 [2024-12-05 11:54:42.850634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.855 [2024-12-05 11:54:42.850641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.855 [2024-12-05 11:54:42.850650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.855 [2024-12-05 11:54:42.850657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.855 [2024-12-05 11:54:42.850665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.855 [2024-12-05 11:54:42.850672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.855 [2024-12-05 11:54:42.850680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.855 [2024-12-05 11:54:42.850686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:11:08.855 [2024-12-05 11:54:42.850695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.855 [2024-12-05 11:54:42.850701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.855 [2024-12-05 11:54:42.850709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.855 [2024-12-05 11:54:42.850716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.855 [2024-12-05 11:54:42.850723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.855 [2024-12-05 11:54:42.850730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.855 [2024-12-05 11:54:42.850738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.855 [2024-12-05 11:54:42.850744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.855 [2024-12-05 11:54:42.850754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.855 [2024-12-05 11:54:42.850760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.855 [2024-12-05 11:54:42.850768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.855 [2024-12-05 
11:54:42.850774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.855 [2024-12-05 11:54:42.850782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.855 [2024-12-05 11:54:42.850788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.855 [2024-12-05 11:54:42.850796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.855 [2024-12-05 11:54:42.850803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.855 [2024-12-05 11:54:42.850811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.855 [2024-12-05 11:54:42.850817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.855 [2024-12-05 11:54:42.850825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.855 [2024-12-05 11:54:42.850831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.855 [2024-12-05 11:54:42.850839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.855 [2024-12-05 11:54:42.850845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.855 [2024-12-05 11:54:42.850853] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.855 [2024-12-05 11:54:42.850860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.855 [2024-12-05 11:54:42.850868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.855 [2024-12-05 11:54:42.850875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.855 [2024-12-05 11:54:42.850883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.855 [2024-12-05 11:54:42.850889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.855 [2024-12-05 11:54:42.850897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.855 [2024-12-05 11:54:42.850904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.855 [2024-12-05 11:54:42.850911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.855 [2024-12-05 11:54:42.850919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.855 [2024-12-05 11:54:42.850927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.855 [2024-12-05 11:54:42.850935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.855 [2024-12-05 11:54:42.850943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.855 [2024-12-05 11:54:42.850950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.855 [2024-12-05 11:54:42.850958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.855 [2024-12-05 11:54:42.850964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.855 [2024-12-05 11:54:42.850972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.855 [2024-12-05 11:54:42.850979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.855 [2024-12-05 11:54:42.850987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.855 [2024-12-05 11:54:42.850993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.855 [2024-12-05 11:54:42.851001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.855 [2024-12-05 11:54:42.851008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.855 [2024-12-05 11:54:42.851016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.855 [2024-12-05 11:54:42.851022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.855 [2024-12-05 11:54:42.851030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.855 [2024-12-05 11:54:42.851037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.855 [2024-12-05 11:54:42.851044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.855 [2024-12-05 11:54:42.851051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.855 [2024-12-05 11:54:42.851059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.855 [2024-12-05 11:54:42.851065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.855 [2024-12-05 11:54:42.851073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.855 [2024-12-05 11:54:42.851080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.855 [2024-12-05 11:54:42.851088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.855 [2024-12-05 11:54:42.851094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.856 
[2024-12-05 11:54:42.851102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.856 [2024-12-05 11:54:42.851108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.856 [2024-12-05 11:54:42.851121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.856 [2024-12-05 11:54:42.851128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.856 [2024-12-05 11:54:42.851137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.856 [2024-12-05 11:54:42.851143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.856 [2024-12-05 11:54:42.851152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.856 [2024-12-05 11:54:42.851158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.856 [2024-12-05 11:54:42.851166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.856 [2024-12-05 11:54:42.851173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.856 [2024-12-05 11:54:42.851181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.856 [2024-12-05 11:54:42.851187] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.856 [2024-12-05 11:54:42.851195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.856 [2024-12-05 11:54:42.851201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.856 [2024-12-05 11:54:42.851209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.856 [2024-12-05 11:54:42.851215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.856 [2024-12-05 11:54:42.851223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.856 [2024-12-05 11:54:42.851230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.856 [2024-12-05 11:54:42.851238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.856 [2024-12-05 11:54:42.851244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.856 [2024-12-05 11:54:42.851252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.856 [2024-12-05 11:54:42.851258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.856 [2024-12-05 11:54:42.851266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.856 [2024-12-05 11:54:42.851273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.856 [2024-12-05 11:54:42.851281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.856 [2024-12-05 11:54:42.851287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.856 [2024-12-05 11:54:42.851295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.856 [2024-12-05 11:54:42.851303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.856 [2024-12-05 11:54:42.851311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.856 [2024-12-05 11:54:42.851317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.856 [2024-12-05 11:54:42.851326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:08.856 [2024-12-05 11:54:42.851332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.856 [2024-12-05 11:54:42.851340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1275430 is same with the state(6) to be set 00:11:08.856 [2024-12-05 11:54:42.852301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:11:08.856 task offset: 
98304 on job bdev=Nvme0n1 fails 00:11:08.856 00:11:08.856 Latency(us) 00:11:08.856 [2024-12-05T10:54:43.052Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:08.856 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:08.856 Job: Nvme0n1 ended in about 0.41 seconds with error 00:11:08.856 Verification LBA range: start 0x0 length 0x400 00:11:08.856 Nvme0n1 : 0.41 1882.69 117.67 156.89 0.00 30555.21 3635.69 26963.38 00:11:08.856 [2024-12-05T10:54:43.052Z] =================================================================================================================== 00:11:08.856 [2024-12-05T10:54:43.052Z] Total : 1882.69 117.67 156.89 0.00 30555.21 3635.69 26963.38 00:11:08.856 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.856 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:08.856 [2024-12-05 11:54:42.854666] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:08.856 [2024-12-05 11:54:42.854686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x105c510 (9): Bad file descriptor 00:11:08.856 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.856 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:08.856 [2024-12-05 11:54:42.858823] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:11:08.856 [2024-12-05 11:54:42.858893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:11:08.856 [2024-12-05 11:54:42.858915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.856 [2024-12-05 11:54:42.858927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:11:08.856 [2024-12-05 11:54:42.858934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:11:08.856 [2024-12-05 11:54:42.858941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:11:08.856 [2024-12-05 11:54:42.858948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x105c510 00:11:08.856 [2024-12-05 11:54:42.858967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x105c510 (9): Bad file descriptor 00:11:08.856 [2024-12-05 11:54:42.858978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:11:08.856 [2024-12-05 11:54:42.858984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:11:08.856 [2024-12-05 11:54:42.858996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:11:08.856 [2024-12-05 11:54:42.859005] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:11:08.856 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.856 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:11:09.794 11:54:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 4123871 00:11:09.794 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (4123871) - No such process 00:11:09.794 11:54:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:11:09.794 11:54:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:11:09.794 11:54:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:11:09.794 11:54:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:11:09.794 11:54:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # config=() 00:11:09.794 11:54:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # local subsystem config 00:11:09.794 11:54:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:11:09.794 11:54:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:11:09.794 { 00:11:09.794 "params": { 00:11:09.794 "name": "Nvme$subsystem", 00:11:09.794 "trtype": "$TEST_TRANSPORT", 00:11:09.794 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:09.794 "adrfam": "ipv4", 00:11:09.794 "trsvcid": "$NVMF_PORT", 00:11:09.794 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:09.794 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:11:09.794 "hdgst": ${hdgst:-false}, 00:11:09.794 "ddgst": ${ddgst:-false} 00:11:09.794 }, 00:11:09.794 "method": "bdev_nvme_attach_controller" 00:11:09.794 } 00:11:09.794 EOF 00:11:09.794 )") 00:11:09.794 11:54:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # cat 00:11:09.794 11:54:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # jq . 00:11:09.794 11:54:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=, 00:11:09.794 11:54:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:11:09.794 "params": { 00:11:09.794 "name": "Nvme0", 00:11:09.794 "trtype": "tcp", 00:11:09.794 "traddr": "10.0.0.2", 00:11:09.794 "adrfam": "ipv4", 00:11:09.794 "trsvcid": "4420", 00:11:09.794 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:09.794 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:09.794 "hdgst": false, 00:11:09.794 "ddgst": false 00:11:09.794 }, 00:11:09.794 "method": "bdev_nvme_attach_controller" 00:11:09.794 }' 00:11:09.794 [2024-12-05 11:54:43.922574] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:11:09.794 [2024-12-05 11:54:43.922622] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4124327 ] 00:11:10.053 [2024-12-05 11:54:43.999406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.053 [2024-12-05 11:54:44.038230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.312 Running I/O for 1 seconds... 
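The `gen_nvmf_target_json` trace above shows the pattern: one JSON fragment per subsystem is appended to a bash array via a heredoc, then the fragments are joined with `IFS=,` and merged through `jq` before being fed to bdevperf on `/dev/fd/62`. A simplified, self-contained sketch of that assembly (not SPDK's actual helper — the fragment fields are trimmed for brevity):

```shell
#!/usr/bin/env bash
# Simplified illustration of the config-assembly pattern traced above:
# accumulate one JSON fragment per subsystem in a bash array, then join
# the fragments with IFS=',' into a single JSON document.
# (Hypothetical sketch, not SPDK's gen_nvmf_target_json.)
gen_config() {
    local config=() subsystem
    for subsystem in "${@:-0}"; do
        config+=("{\"name\":\"Nvme$subsystem\",\"subnqn\":\"nqn.2016-06.io.spdk:cnode$subsystem\"}")
    done
    local IFS=,                     # "${config[*]}" joins on the first IFS char
    printf '{ "subsystems": [%s] }\n' "${config[*]}"
}

gen_config 0 1
```

In the real script the joined text is piped through `jq .` to validate it, which is why the log shows both the templated fragment (with `$TEST_TRANSPORT`, `$NVMF_FIRST_TARGET_IP` unexpanded) and the final resolved config.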
00:11:11.252 1999.00 IOPS, 124.94 MiB/s 00:11:11.252 Latency(us) 00:11:11.252 [2024-12-05T10:54:45.448Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:11.252 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:11.252 Verification LBA range: start 0x0 length 0x400 00:11:11.252 Nvme0n1 : 1.01 2043.25 127.70 0.00 0.00 30725.30 2059.70 26713.72 00:11:11.252 [2024-12-05T10:54:45.448Z] =================================================================================================================== 00:11:11.252 [2024-12-05T10:54:45.448Z] Total : 2043.25 127.70 0.00 0.00 30725.30 2059.70 26713.72 00:11:11.510 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:11:11.510 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:11:11.510 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:11:11.510 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:11.510 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:11:11.510 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # nvmfcleanup 00:11:11.510 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@99 -- # sync 00:11:11.510 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:11:11.510 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # set +e 00:11:11.511 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # for i in {1..20} 00:11:11.511 11:54:45 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:11:11.511 rmmod nvme_tcp 00:11:11.511 rmmod nvme_fabrics 00:11:11.511 rmmod nvme_keyring 00:11:11.511 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:11:11.511 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # set -e 00:11:11.511 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # return 0 00:11:11.511 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # '[' -n 4123798 ']' 00:11:11.511 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@337 -- # killprocess 4123798 00:11:11.511 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 4123798 ']' 00:11:11.511 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 4123798 00:11:11.511 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:11:11.511 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:11.511 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4123798 00:11:11.511 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:11.511 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:11.511 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4123798' 00:11:11.511 killing process with pid 4123798 00:11:11.511 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 4123798 00:11:11.511 11:54:45 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 4123798 00:11:11.769 [2024-12-05 11:54:45.782661] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:11:11.769 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:11:11.769 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # nvmf_fini 00:11:11.769 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@264 -- # local dev 00:11:11.769 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@267 -- # remove_target_ns 00:11:11.769 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:11.769 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:11.769 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:13.674 11:54:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@268 -- # delete_main_bridge 00:11:13.674 11:54:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:11:13.674 11:54:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@130 -- # return 0 00:11:13.674 11:54:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:11:13.674 11:54:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:11:13.674 11:54:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:11:13.674 11:54:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:11:13.674 11:54:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@221 -- # local 
dev=cvl_0_0 in_ns= 00:11:13.674 11:54:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:11:13.674 11:54:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:11:13.674 11:54:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:11:13.674 11:54:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:11:13.674 11:54:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:11:13.674 11:54:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:11:13.674 11:54:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:11:13.674 11:54:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:11:13.674 11:54:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:11:13.933 11:54:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:11:13.933 11:54:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:11:13.933 11:54:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:11:13.933 11:54:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@41 -- # _dev=0 00:11:13.933 11:54:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@41 -- # dev_map=() 00:11:13.933 11:54:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@284 -- # iptr 00:11:13.933 11:54:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@542 -- # iptables-save 00:11:13.933 11:54:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:11:13.933 11:54:47 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@542 -- # iptables-restore 00:11:13.933 11:54:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:13.933 00:11:13.933 real 0m13.205s 00:11:13.933 user 0m22.687s 00:11:13.933 sys 0m5.719s 00:11:13.933 11:54:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:13.933 11:54:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:13.933 ************************************ 00:11:13.933 END TEST nvmf_host_management 00:11:13.933 ************************************ 00:11:13.933 11:54:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:13.933 11:54:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:13.933 11:54:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:13.933 11:54:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:13.933 ************************************ 00:11:13.933 START TEST nvmf_lvol 00:11:13.933 ************************************ 00:11:13.933 11:54:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:13.933 * Looking for test storage... 
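The `run_test` wrapper invoked above produces the `START TEST` / `END TEST` banners and the `real`/`user`/`sys` timing summary seen in the log. A minimal sketch of that wrapper, simplified from what the trace implies (the real one in autotest_common.sh also times the suite with `time` and manages xtrace state):

```shell
#!/usr/bin/env bash
# Sketch of the run_test wrapper pattern visible in the log: print a
# START banner, run the suite, print an END banner, and propagate the
# suite's exit status. Simplified illustration, not SPDK's exact code.
run_test() {
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    "$@"                    # the real wrapper runs this under `time`
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

run_test demo true
```

Because the exit status is propagated, a failing suite (like the induced bdevperf failure earlier) still surfaces through the banners to the top-level `run_test nvmf_lvol ...` call.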
00:11:13.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:13.933 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:13.933 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:11:13.933 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:13.933 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:13.933 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:13.933 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:13.933 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:13.933 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:11:13.933 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:11:13.933 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:11:13.933 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:11:13.933 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:11:13.933 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:11:13.933 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:11:13.933 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:13.933 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:11:13.933 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:11:13.933 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:13.933 11:54:48 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:14.193 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:11:14.193 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:11:14.193 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:14.193 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:11:14.193 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:11:14.193 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:11:14.193 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:11:14.193 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:14.193 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:11:14.193 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:11:14.193 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:14.193 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:14.193 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:11:14.193 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:14.193 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:14.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.193 --rc genhtml_branch_coverage=1 00:11:14.193 --rc genhtml_function_coverage=1 00:11:14.193 --rc genhtml_legend=1 00:11:14.193 --rc geninfo_all_blocks=1 00:11:14.193 --rc geninfo_unexecuted_blocks=1 
00:11:14.193 00:11:14.193 ' 00:11:14.193 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:14.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.193 --rc genhtml_branch_coverage=1 00:11:14.193 --rc genhtml_function_coverage=1 00:11:14.193 --rc genhtml_legend=1 00:11:14.193 --rc geninfo_all_blocks=1 00:11:14.193 --rc geninfo_unexecuted_blocks=1 00:11:14.193 00:11:14.193 ' 00:11:14.193 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:14.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.193 --rc genhtml_branch_coverage=1 00:11:14.193 --rc genhtml_function_coverage=1 00:11:14.193 --rc genhtml_legend=1 00:11:14.193 --rc geninfo_all_blocks=1 00:11:14.193 --rc geninfo_unexecuted_blocks=1 00:11:14.193 00:11:14.193 ' 00:11:14.193 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:14.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.193 --rc genhtml_branch_coverage=1 00:11:14.193 --rc genhtml_function_coverage=1 00:11:14.193 --rc genhtml_legend=1 00:11:14.193 --rc geninfo_all_blocks=1 00:11:14.193 --rc geninfo_unexecuted_blocks=1 00:11:14.193 00:11:14.193 ' 00:11:14.193 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:14.193 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:11:14.193 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:14.193 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:14.193 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:14.193 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:14.193 11:54:48 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:14.193 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:11:14.193 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:14.193 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:11:14.193 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:14.193 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:14.193 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:14.193 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:11:14.193 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:11:14.193 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:14.193 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:14.194 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:11:14.194 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:14.194 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:14.194 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:14.194 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.194 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.194 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.194 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
paths/export.sh@5 -- # export PATH 00:11:14.194 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.194 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:11:14.194 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:11:14.194 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:14.194 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:11:14.194 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@50 -- # : 0 00:11:14.194 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:11:14.194 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:11:14.194 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:11:14.194 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:14.194 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:14.194 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:11:14.194 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:11:14.194 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:11:14.194 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:11:14.194 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@54 -- # have_pci_nics=0 00:11:14.194 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:14.194 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:14.194 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:11:14.194 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:14.194 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:14.194 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:14.194 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:11:14.194 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:14.194 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # prepare_net_devs 00:11:14.194 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # local -g is_hw=no 00:11:14.194 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # remove_target_ns 00:11:14.194 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:14.194 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:14.194 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 
-- # _remove_target_ns 00:11:14.194 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:11:14.194 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:11:14.194 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # xtrace_disable 00:11:14.194 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@131 -- # pci_devs=() 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@131 -- # local -a pci_devs 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@132 -- # pci_net_devs=() 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@133 -- # pci_drivers=() 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@133 -- # local -A pci_drivers 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@135 -- # net_devs=() 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@135 -- # local -ga net_devs 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@136 -- # e810=() 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@136 -- # local -ga e810 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@137 -- # x722=() 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@137 -- # local -ga x722 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@138 -- # mlx=() 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@138 -- # local -ga mlx 00:11:20.784 11:54:53 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:20.784 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:20.784 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- 
# [[ e810 == e810 ]] 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # [[ up == up ]] 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:11:20.784 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:20.785 Found net devices under 0000:86:00.0: cvl_0_0 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # [[ up == up ]] 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:20.785 Found net devices under 0000:86:00.1: cvl_0_1 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # is_hw=yes 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@257 -- # create_target_ns 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD 
]] 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@27 -- # local -gA dev_map 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@28 -- # local -g _dev 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@44 -- # ips=() 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 
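The interface setup traced in this log derives dotted-quad addresses from a 32-bit pool value (167772161 becomes 10.0.0.1, 167772162 becomes 10.0.0.2) before handing them to `ip addr add`. A minimal sketch of that conversion, mirroring the `printf '%u.%u.%u.%u\n'`-based `val_to_ip` helper visible in the nvmf/setup.sh trace; the bit-shift body here is an assumption reconstructed from the traced commands, not the actual setup.sh source:

```shell
#!/usr/bin/env bash
# Convert a 32-bit integer into dotted-quad notation, as nvmf/setup.sh's
# val_to_ip does when assigning IPs from the 0x0a000001 pool.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) \
        $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1 (initiator side, cvl_0_0)
val_to_ip 167772162   # 10.0.0.2 (target side, cvl_0_1, inside nvmf_ns_spdk)
```

This matches the `setup.sh@13` trace lines above, where 167772161 expands to `printf '%u.%u.%u.%u\n' 10 0 0 1`.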
00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:11:20.785 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:11:20.785 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:11:20.785 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:11:20.785 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:20.785 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:11:20.785 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772161 00:11:20.785 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:11:20.785 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:11:20.785 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@208 -- # eval ' ip addr add 
10.0.0.1/24 dev cvl_0_0' 00:11:20.785 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:11:20.785 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:11:20.785 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:11:20.785 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:11:20.785 10.0.0.1 00:11:20.785 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:11:20.785 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:11:20.785 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:20.785 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:20.785 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:11:20.785 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772162 00:11:20.785 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:11:20.785 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:11:20.785 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:11:20.785 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:11:20.785 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:11:20.785 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@210 -- # echo 
10.0.0.2 00:11:20.785 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:11:20.785 10.0.0.2 00:11:20.785 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:11:20.785 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:11:20.785 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:11:20.785 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:11:20.785 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:11:20.785 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:11:20.785 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i 
cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@38 -- # ping_ips 1 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=initiator0 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/setup.sh@110 -- # echo cvl_0_0 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:11:20.786 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:20.786 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.461 ms 00:11:20.786 00:11:20.786 --- 10.0.0.1 ping statistics --- 00:11:20.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.786 rtt min/avg/max/mdev = 0.461/0.461/0.461/0.000 ms 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev target0 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=target0 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # 
ip=10.0.0.2 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:11:20.786 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:20.786 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:11:20.786 00:11:20.786 --- 10.0.0.2 ping statistics --- 00:11:20.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.786 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # (( pair++ )) 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # return 0 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@334 -- 
# get_tcp_initiator_ip_address 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=initiator0 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:11:20.786 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:11:20.787 11:54:54 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=initiator1 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # return 1 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # dev= 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@169 -- # return 0 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev target0 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@107 
-- # local dev=target0 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev target1 00:11:20.787 11:54:54 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=target1 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # return 1 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # dev= 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@169 -- # return 0 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # nvmfpid=4128130 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # waitforlisten 4128130 00:11:20.787 11:54:54 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 4128130 ']' 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:20.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:20.787 [2024-12-05 11:54:54.374470] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:11:20.787 [2024-12-05 11:54:54.374516] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:20.787 [2024-12-05 11:54:54.451344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:20.787 [2024-12-05 11:54:54.494912] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:20.787 [2024-12-05 11:54:54.494941] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:20.787 [2024-12-05 11:54:54.494948] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:20.787 [2024-12-05 11:54:54.494954] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:20.787 [2024-12-05 11:54:54.494960] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:20.787 [2024-12-05 11:54:54.496102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:20.787 [2024-12-05 11:54:54.496215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.787 [2024-12-05 11:54:54.496217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:20.787 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:11:20.788 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:11:20.788 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:20.788 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:20.788 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:20.788 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:20.788 [2024-12-05 11:54:54.808959] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:20.788 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:21.050 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # 
base_bdevs='Malloc0 ' 00:11:21.050 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:21.310 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:21.310 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:21.310 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:21.568 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=1fd49033-039d-45ce-8efd-23718d5c979a 00:11:21.568 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1fd49033-039d-45ce-8efd-23718d5c979a lvol 20 00:11:21.826 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=7a4093a7-6a41-4df5-9d7c-1564f7bddd73 00:11:21.826 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:22.084 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7a4093a7-6a41-4df5-9d7c-1564f7bddd73 00:11:22.341 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:22.341 [2024-12-05 11:54:56.467406] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:11:22.341 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:22.599 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=4128621 00:11:22.599 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:22.599 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:23.536 11:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 7a4093a7-6a41-4df5-9d7c-1564f7bddd73 MY_SNAPSHOT 00:11:23.794 11:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=47280aef-206d-466c-babf-cab0a5c6dcfe 00:11:23.794 11:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 7a4093a7-6a41-4df5-9d7c-1564f7bddd73 30 00:11:24.053 11:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 47280aef-206d-466c-babf-cab0a5c6dcfe MY_CLONE 00:11:24.312 11:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=ebca3041-8ec2-4b54-ac5a-032c715b21af 00:11:24.312 11:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate ebca3041-8ec2-4b54-ac5a-032c715b21af 00:11:24.880 11:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 4128621 00:11:32.999 Initializing NVMe Controllers 00:11:32.999 
Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:32.999 Controller IO queue size 128, less than required. 00:11:32.999 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:32.999 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:32.999 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:32.999 Initialization complete. Launching workers. 00:11:32.999 ======================================================== 00:11:32.999 Latency(us) 00:11:32.999 Device Information : IOPS MiB/s Average min max 00:11:32.999 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12214.80 47.71 10481.96 1619.72 47046.04 00:11:32.999 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12282.30 47.98 10422.65 3550.18 50772.51 00:11:32.999 ======================================================== 00:11:32.999 Total : 24497.10 95.69 10452.22 1619.72 50772.51 00:11:32.999 00:11:32.999 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:33.258 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7a4093a7-6a41-4df5-9d7c-1564f7bddd73 00:11:33.517 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1fd49033-039d-45ce-8efd-23718d5c979a 00:11:33.777 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:33.777 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:33.777 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:33.777 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # nvmfcleanup 00:11:33.777 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@99 -- # sync 00:11:33.777 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:11:33.777 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # set +e 00:11:33.777 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # for i in {1..20} 00:11:33.777 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:11:33.777 rmmod nvme_tcp 00:11:33.777 rmmod nvme_fabrics 00:11:33.777 rmmod nvme_keyring 00:11:33.777 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:11:33.777 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # set -e 00:11:33.777 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # return 0 00:11:33.777 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # '[' -n 4128130 ']' 00:11:33.777 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@337 -- # killprocess 4128130 00:11:33.777 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 4128130 ']' 00:11:33.777 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 4128130 00:11:33.777 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:11:33.777 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:33.777 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4128130 00:11:33.777 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:33.777 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:33.777 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4128130' 00:11:33.777 killing process with pid 4128130 00:11:33.777 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 4128130 00:11:33.777 11:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 4128130 00:11:34.037 11:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:11:34.037 11:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # nvmf_fini 00:11:34.037 11:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@264 -- # local dev 00:11:34.037 11:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@267 -- # remove_target_ns 00:11:34.037 11:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:34.037 11:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:34.037 11:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:35.945 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@268 -- # delete_main_bridge 00:11:35.945 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:11:35.945 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@130 -- # return 0 00:11:35.945 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:11:35.945 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:11:35.945 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:11:35.945 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:11:35.945 11:55:10 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:11:35.945 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:11:35.945 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:11:35.945 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:11:35.945 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:11:35.945 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:11:35.945 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:11:35.945 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:11:35.945 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:11:35.945 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:11:35.945 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:11:35.945 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:11:35.945 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:11:35.945 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@41 -- # _dev=0 00:11:35.945 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@41 -- # dev_map=() 00:11:35.945 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@284 -- # iptr 00:11:35.945 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@542 -- # iptables-save 00:11:35.945 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:11:35.945 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@542 -- # iptables-restore 00:11:35.945 00:11:35.945 real 
0m22.167s 00:11:35.945 user 1m3.303s 00:11:35.945 sys 0m7.805s 00:11:35.945 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.945 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:35.945 ************************************ 00:11:35.945 END TEST nvmf_lvol 00:11:35.945 ************************************ 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:36.204 ************************************ 00:11:36.204 START TEST nvmf_lvs_grow 00:11:36.204 ************************************ 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:36.204 * Looking for test storage... 
00:11:36.204 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:36.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.204 --rc genhtml_branch_coverage=1 00:11:36.204 --rc 
genhtml_function_coverage=1 00:11:36.204 --rc genhtml_legend=1 00:11:36.204 --rc geninfo_all_blocks=1 00:11:36.204 --rc geninfo_unexecuted_blocks=1 00:11:36.204 00:11:36.204 ' 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:36.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.204 --rc genhtml_branch_coverage=1 00:11:36.204 --rc genhtml_function_coverage=1 00:11:36.204 --rc genhtml_legend=1 00:11:36.204 --rc geninfo_all_blocks=1 00:11:36.204 --rc geninfo_unexecuted_blocks=1 00:11:36.204 00:11:36.204 ' 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:36.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.204 --rc genhtml_branch_coverage=1 00:11:36.204 --rc genhtml_function_coverage=1 00:11:36.204 --rc genhtml_legend=1 00:11:36.204 --rc geninfo_all_blocks=1 00:11:36.204 --rc geninfo_unexecuted_blocks=1 00:11:36.204 00:11:36.204 ' 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:36.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.204 --rc genhtml_branch_coverage=1 00:11:36.204 --rc genhtml_function_coverage=1 00:11:36.204 --rc genhtml_legend=1 00:11:36.204 --rc geninfo_all_blocks=1 00:11:36.204 --rc geninfo_unexecuted_blocks=1 00:11:36.204 00:11:36.204 ' 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:11:36.204 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:36.463 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:36.463 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:36.463 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:11:36.463 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:11:36.463 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:36.463 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:36.463 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:11:36.463 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:36.463 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:36.463 11:55:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:36.463 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.463 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.463 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.463 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:11:36.463 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.463 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:11:36.463 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:11:36.463 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:36.463 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:11:36.463 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@50 -- # : 0 
00:11:36.463 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:11:36.464 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:11:36.464 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:11:36.464 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:36.464 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:36.464 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:11:36.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:11:36.464 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:11:36.464 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:11:36.464 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@54 -- # have_pci_nics=0 00:11:36.464 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:36.464 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:36.464 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:11:36.464 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:11:36.464 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:36.464 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # prepare_net_devs 00:11:36.464 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # local -g is_hw=no 00:11:36.464 11:55:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # remove_target_ns 00:11:36.464 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:36.464 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:36.464 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:36.464 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:11:36.464 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:11:36.464 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # xtrace_disable 00:11:36.464 11:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@131 -- # pci_devs=() 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@131 -- # local -a pci_devs 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@132 -- # pci_net_devs=() 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@133 -- # pci_drivers=() 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@133 -- # local -A pci_drivers 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@135 -- # net_devs=() 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@135 -- # local -ga net_devs 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@136 -- # e810=() 00:11:43.031 
11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@136 -- # local -ga e810 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@137 -- # x722=() 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@137 -- # local -ga x722 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@138 -- # mlx=() 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@138 -- # local -ga mlx 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:43.031 11:55:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:43.031 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:43.031 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # [[ up == up ]] 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:43.031 Found net devices under 0000:86:00.0: cvl_0_0 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.031 11:55:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.031 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # [[ up == up ]] 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:43.032 Found net devices under 0000:86:00.1: cvl_0_1 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # is_hw=yes 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@257 -- # create_target_ns 00:11:43.032 11:55:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@27 -- # local -gA dev_map 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@28 -- # local -g _dev 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:11:43.032 11:55:16 
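The `set_up lo NVMF_TARGET_NS_CMD` trace above shows the pattern setup.sh uses throughout this run: a bash nameref onto an optional namespace-command array lets one function run a command either on the host or prefixed with `ip netns exec`. A minimal sketch of that pattern (the `echo` stands in for the real `ip link set ... up`, so it runs without root or real interfaces):

```shell
# Sketch of the set_up helper pattern seen in the trace
# (nvmf/setup.sh@214-217): a nameref onto the caller's array decides
# whether the command is prefixed with "ip netns exec <ns>".
NVMF_TARGET_NS_CMD=(ip netns exec nvmf_ns_spdk)

set_up() {
  local dev=$1 in_ns=${2:-}
  local -a cmd=()
  if [[ -n $in_ns ]]; then
    local -n ns=$in_ns          # nameref onto the caller's array
    cmd+=("${ns[@]}")
  fi
  cmd+=(ip link set "$dev" up)
  echo "${cmd[*]}"              # real helper evals this instead
}

set_up lo NVMF_TARGET_NS_CMD    # inside the target netns
set_up cvl_0_0                  # host side: no prefix
```

This mirrors the `eval 'ip netns exec nvmf_ns_spdk ip link set lo up'` line in the trace; the echo-instead-of-eval simplification is this sketch's own.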
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # ips=() 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 
00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772161 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:11:43.032 10.0.0.1 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:43.032 11:55:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772162 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:11:43.032 10.0.0.2 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:11:43.032 11:55:16 
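The two `set_ip` traces above call `val_to_ip` to turn the pool integers 167772161/167772162 into 10.0.0.1/10.0.0.2 before `ip addr add`. The trace only shows the final `printf '%u.%u.%u.%u\n' 10 0 0 1`, so the byte-splitting below is an assumed equivalent of what the real helper does:

```shell
# Sketch of the val_to_ip helper invoked above (nvmf/setup.sh@11-13):
# unpack a 32-bit integer into dotted-quad notation.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( val >> 24 & 255 )) $(( val >> 16 & 255 )) \
    $(( val >> 8  & 255 )) $(( val & 255 ))
}

val_to_ip 167772161   # 10.0.0.1, assigned to cvl_0_0
val_to_ip 167772162   # 10.0.0.2, assigned to cvl_0_1 in the netns
```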
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:43.032 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@38 -- # ping_ips 1 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:11:43.033 
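The `(( _dev++, ip_pool += 2 ))` step above closes one iteration of the `setup_interfaces` loop: each initiator/target pair consumes two consecutive addresses from an integer pool starting at 0x0a000001. A sketch of that arithmetic, run for two pairs to make the progression visible (the run above uses a single pair):

```shell
# Sketch of the setup_interfaces address arithmetic traced above.
dq() { printf '%u.%u.%u.%u' $(( $1 >> 24 & 255 )) $(( $1 >> 16 & 255 )) \
                            $(( $1 >> 8  & 255 )) $(( $1 & 255 )); }

ip_pool=$(( 0x0a000001 ))   # 10.0.0.1
pairs=2
for (( _dev = 0; _dev < pairs; _dev++ )); do
  ip=$(( ip_pool + _dev * 2 ))
  echo "pair $_dev: initiator=$(dq "$ip") target=$(dq $(( ip + 1 )))"
done
# pair 0: initiator=10.0.0.1 target=10.0.0.2
# pair 1: initiator=10.0.0.3 target=10.0.0.4
```

The `(_dev + no) * 2 <= 255` guard in the trace keeps the pool inside the last octet of 10.0.0.0/24.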
11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=initiator0 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:11:43.033 11:55:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:11:43.033 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:43.033 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.421 ms 00:11:43.033 00:11:43.033 --- 10.0.0.1 ping statistics --- 00:11:43.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.033 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev target0 00:11:43.033 11:55:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=target0 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:11:43.033 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:43.033 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:11:43.033 00:11:43.033 --- 10.0.0.2 ping statistics --- 00:11:43.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.033 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # (( pair++ )) 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # return 0 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:11:43.033 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=initiator0 00:11:43.033 11:55:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=initiator1 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:11:43.034 11:55:16 
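The `NVMF_FIRST_INITIATOR_IP=10.0.0.1` assignment above is the end of a lookup chain: `get_ip_address` resolves a logical name through the `dev_map` associative array, then reads the address back from the interface's sysfs `ifalias` file. A simplified, assumption-labeled sketch of that chain (a temp directory stands in for `/sys/class/net`, so it runs without the real NICs):

```shell
# Sketch of the get_ip_address resolution traced above; the dev_map
# contents mirror the trace (initiator0 -> cvl_0_0, target0 -> cvl_0_1),
# the fake sysfs tree is this sketch's own simplification.
declare -A dev_map=([initiator0]=cvl_0_0 [target0]=cvl_0_1)

sysfs=$(mktemp -d)
mkdir -p "$sysfs/cvl_0_0" "$sysfs/cvl_0_1"
echo 10.0.0.1 > "$sysfs/cvl_0_0/ifalias"
echo 10.0.0.2 > "$sysfs/cvl_0_1/ifalias"

get_ip_address() {
  local dev=${dev_map[$1]:-}
  [[ -n $dev ]] || return 1   # e.g. initiator1/target1 are unset
  cat "$sysfs/$dev/ifalias"
}

get_ip_address initiator0   # 10.0.0.1
get_ip_address target0      # 10.0.0.2
```

The `return 1` branch is why the trace immediately afterwards leaves `NVMF_SECOND_INITIATOR_IP` empty: `initiator1` has no entry in `dev_map` on this single-pair setup.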
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # return 1 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev= 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@169 -- # return 0 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev target0 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=target0 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias' 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev target1 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=target1 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # return 1 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev= 00:11:43.034 11:55:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@169 -- # return 0 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # nvmfpid=4134031 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # waitforlisten 4134031 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 4134031 ']' 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:43.034 [2024-12-05 11:55:16.661428] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:11:43.034 [2024-12-05 11:55:16.661476] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:43.034 [2024-12-05 11:55:16.740541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.034 [2024-12-05 11:55:16.781483] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:43.034 [2024-12-05 11:55:16.781519] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:43.034 [2024-12-05 11:55:16.781526] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:43.034 [2024-12-05 11:55:16.781532] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:43.034 [2024-12-05 11:55:16.781537] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:43.034 [2024-12-05 11:55:16.782093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.034 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:43.035 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:11:43.035 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:11:43.035 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:43.035 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:43.035 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:43.035 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:43.035 [2024-12-05 11:55:17.090852] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:43.035 11:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:11:43.035 11:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:43.035 11:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.035 11:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:43.035 ************************************ 00:11:43.035 START TEST lvs_grow_clean 00:11:43.035 ************************************ 00:11:43.035 11:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:11:43.035 11:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:11:43.035 11:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:43.035 11:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:43.035 11:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:43.035 11:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:43.035 11:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:43.035 11:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:43.035 11:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:43.035 11:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:43.295 11:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:43.295 11:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:43.556 11:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=a0e36e8f-30e5-4e31-b681-7b701e90e61c 00:11:43.556 11:55:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a0e36e8f-30e5-4e31-b681-7b701e90e61c 00:11:43.556 11:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:43.556 11:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:43.556 11:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:43.556 11:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a0e36e8f-30e5-4e31-b681-7b701e90e61c lvol 150 00:11:43.815 11:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=ef12f921-0f68-4a59-8962-328d2d800b7a 00:11:43.815 11:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:43.815 11:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:44.074 [2024-12-05 11:55:18.105238] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:44.074 [2024-12-05 11:55:18.105289] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:44.074 true 00:11:44.074 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
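The `(( data_clusters == 49 ))` check above follows from the sizes set earlier in the test: a 200 MiB AIO file carved into 4 MiB (`--cluster-sz 4194304`) clusters. Back-of-envelope math, with the caveat that the exact metadata overhead is SPDK-lvstore-internal:

```shell
# Rough model of the total_data_clusters=49 figure reported above:
# 200 MiB / 4 MiB = 50 raw clusters; the lvstore reports 49 data
# clusters, the difference being metadata overhead (assumed here to
# be one cluster's worth; the real layout is SPDK-internal).
aio_bytes=$(( 200 * 1024 * 1024 ))
cluster_bytes=4194304
raw_clusters=$(( aio_bytes / cluster_bytes ))
echo "$raw_clusters"                # 50
echo "$(( raw_clusters - 1 ))"      # 49, matching the trace
```

The later `truncate -s 400M` plus `bdev_aio_rescan` step doubles the raw cluster count, which is what the lvs_grow test is exercising.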
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a0e36e8f-30e5-4e31-b681-7b701e90e61c 00:11:44.074 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:44.332 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:44.332 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:44.332 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ef12f921-0f68-4a59-8962-328d2d800b7a 00:11:44.592 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:44.851 [2024-12-05 11:55:18.823378] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:44.851 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:44.851 11:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:44.851 11:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4134527 00:11:44.851 11:55:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:44.851 11:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4134527 /var/tmp/bdevperf.sock 00:11:44.851 11:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 4134527 ']' 00:11:44.851 11:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:44.851 11:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:44.851 11:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:44.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:44.851 11:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:44.851 11:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:44.851 [2024-12-05 11:55:19.034090] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:11:44.851 [2024-12-05 11:55:19.034135] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4134527 ] 00:11:45.110 [2024-12-05 11:55:19.107204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.110 [2024-12-05 11:55:19.147022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:45.110 11:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:45.110 11:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:11:45.110 11:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:45.679 Nvme0n1 00:11:45.679 11:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:45.679 [ 00:11:45.679 { 00:11:45.679 "name": "Nvme0n1", 00:11:45.679 "aliases": [ 00:11:45.679 "ef12f921-0f68-4a59-8962-328d2d800b7a" 00:11:45.679 ], 00:11:45.679 "product_name": "NVMe disk", 00:11:45.679 "block_size": 4096, 00:11:45.679 "num_blocks": 38912, 00:11:45.679 "uuid": "ef12f921-0f68-4a59-8962-328d2d800b7a", 00:11:45.679 "numa_id": 1, 00:11:45.679 "assigned_rate_limits": { 00:11:45.679 "rw_ios_per_sec": 0, 00:11:45.679 "rw_mbytes_per_sec": 0, 00:11:45.679 "r_mbytes_per_sec": 0, 00:11:45.679 "w_mbytes_per_sec": 0 00:11:45.679 }, 00:11:45.679 "claimed": false, 00:11:45.679 "zoned": false, 00:11:45.679 "supported_io_types": { 00:11:45.679 "read": true, 
00:11:45.679 "write": true, 00:11:45.679 "unmap": true, 00:11:45.679 "flush": true, 00:11:45.679 "reset": true, 00:11:45.679 "nvme_admin": true, 00:11:45.679 "nvme_io": true, 00:11:45.679 "nvme_io_md": false, 00:11:45.679 "write_zeroes": true, 00:11:45.679 "zcopy": false, 00:11:45.679 "get_zone_info": false, 00:11:45.679 "zone_management": false, 00:11:45.679 "zone_append": false, 00:11:45.679 "compare": true, 00:11:45.679 "compare_and_write": true, 00:11:45.679 "abort": true, 00:11:45.679 "seek_hole": false, 00:11:45.679 "seek_data": false, 00:11:45.679 "copy": true, 00:11:45.679 "nvme_iov_md": false 00:11:45.679 }, 00:11:45.679 "memory_domains": [ 00:11:45.679 { 00:11:45.679 "dma_device_id": "system", 00:11:45.679 "dma_device_type": 1 00:11:45.679 } 00:11:45.679 ], 00:11:45.679 "driver_specific": { 00:11:45.679 "nvme": [ 00:11:45.679 { 00:11:45.679 "trid": { 00:11:45.679 "trtype": "TCP", 00:11:45.679 "adrfam": "IPv4", 00:11:45.679 "traddr": "10.0.0.2", 00:11:45.679 "trsvcid": "4420", 00:11:45.679 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:45.679 }, 00:11:45.679 "ctrlr_data": { 00:11:45.679 "cntlid": 1, 00:11:45.679 "vendor_id": "0x8086", 00:11:45.679 "model_number": "SPDK bdev Controller", 00:11:45.679 "serial_number": "SPDK0", 00:11:45.679 "firmware_revision": "25.01", 00:11:45.679 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:45.679 "oacs": { 00:11:45.679 "security": 0, 00:11:45.679 "format": 0, 00:11:45.679 "firmware": 0, 00:11:45.679 "ns_manage": 0 00:11:45.679 }, 00:11:45.679 "multi_ctrlr": true, 00:11:45.679 "ana_reporting": false 00:11:45.679 }, 00:11:45.679 "vs": { 00:11:45.679 "nvme_version": "1.3" 00:11:45.679 }, 00:11:45.679 "ns_data": { 00:11:45.679 "id": 1, 00:11:45.679 "can_share": true 00:11:45.679 } 00:11:45.679 } 00:11:45.679 ], 00:11:45.679 "mp_policy": "active_passive" 00:11:45.679 } 00:11:45.679 } 00:11:45.679 ] 00:11:45.679 11:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=4134646 00:11:45.679 11:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:45.679 11:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:45.939 Running I/O for 10 seconds... 00:11:46.876 Latency(us) 00:11:46.876 [2024-12-05T10:55:21.072Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:46.876 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:46.876 Nvme0n1 : 1.00 23067.00 90.11 0.00 0.00 0.00 0.00 0.00 00:11:46.876 [2024-12-05T10:55:21.072Z] =================================================================================================================== 00:11:46.876 [2024-12-05T10:55:21.072Z] Total : 23067.00 90.11 0.00 0.00 0.00 0.00 0.00 00:11:46.876 00:11:47.813 11:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a0e36e8f-30e5-4e31-b681-7b701e90e61c 00:11:47.813 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:47.813 Nvme0n1 : 2.00 23374.50 91.31 0.00 0.00 0.00 0.00 0.00 00:11:47.813 [2024-12-05T10:55:22.009Z] =================================================================================================================== 00:11:47.813 [2024-12-05T10:55:22.009Z] Total : 23374.50 91.31 0.00 0.00 0.00 0.00 0.00 00:11:47.813 00:11:48.072 true 00:11:48.072 11:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a0e36e8f-30e5-4e31-b681-7b701e90e61c 00:11:48.072 11:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:11:48.331 11:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:48.331 11:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:48.331 11:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 4134646 00:11:48.898 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:48.898 Nvme0n1 : 3.00 23478.67 91.71 0.00 0.00 0.00 0.00 0.00 00:11:48.898 [2024-12-05T10:55:23.094Z] =================================================================================================================== 00:11:48.898 [2024-12-05T10:55:23.094Z] Total : 23478.67 91.71 0.00 0.00 0.00 0.00 0.00 00:11:48.898 00:11:49.836 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:49.836 Nvme0n1 : 4.00 23569.75 92.07 0.00 0.00 0.00 0.00 0.00 00:11:49.836 [2024-12-05T10:55:24.032Z] =================================================================================================================== 00:11:49.836 [2024-12-05T10:55:24.032Z] Total : 23569.75 92.07 0.00 0.00 0.00 0.00 0.00 00:11:49.836 00:11:50.774 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:50.774 Nvme0n1 : 5.00 23635.40 92.33 0.00 0.00 0.00 0.00 0.00 00:11:50.774 [2024-12-05T10:55:24.970Z] =================================================================================================================== 00:11:50.774 [2024-12-05T10:55:24.970Z] Total : 23635.40 92.33 0.00 0.00 0.00 0.00 0.00 00:11:50.774 00:11:52.228 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:52.228 Nvme0n1 : 6.00 23682.33 92.51 0.00 0.00 0.00 0.00 0.00 00:11:52.228 [2024-12-05T10:55:26.424Z] =================================================================================================================== 00:11:52.228 
[2024-12-05T10:55:26.424Z] Total : 23682.33 92.51 0.00 0.00 0.00 0.00 0.00 00:11:52.228 00:11:52.914 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:52.915 Nvme0n1 : 7.00 23711.71 92.62 0.00 0.00 0.00 0.00 0.00 00:11:52.915 [2024-12-05T10:55:27.111Z] =================================================================================================================== 00:11:52.915 [2024-12-05T10:55:27.111Z] Total : 23711.71 92.62 0.00 0.00 0.00 0.00 0.00 00:11:52.915 00:11:53.846 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:53.846 Nvme0n1 : 8.00 23740.75 92.74 0.00 0.00 0.00 0.00 0.00 00:11:53.846 [2024-12-05T10:55:28.042Z] =================================================================================================================== 00:11:53.846 [2024-12-05T10:55:28.042Z] Total : 23740.75 92.74 0.00 0.00 0.00 0.00 0.00 00:11:53.846 00:11:54.778 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:54.778 Nvme0n1 : 9.00 23770.22 92.85 0.00 0.00 0.00 0.00 0.00 00:11:54.778 [2024-12-05T10:55:28.974Z] =================================================================================================================== 00:11:54.778 [2024-12-05T10:55:28.974Z] Total : 23770.22 92.85 0.00 0.00 0.00 0.00 0.00 00:11:54.778 00:11:56.148 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:56.148 Nvme0n1 : 10.00 23791.30 92.93 0.00 0.00 0.00 0.00 0.00 00:11:56.148 [2024-12-05T10:55:30.344Z] =================================================================================================================== 00:11:56.148 [2024-12-05T10:55:30.344Z] Total : 23791.30 92.93 0.00 0.00 0.00 0.00 0.00 00:11:56.148 00:11:56.148 00:11:56.148 Latency(us) 00:11:56.148 [2024-12-05T10:55:30.344Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:56.148 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:11:56.148 Nvme0n1 : 10.01 23790.78 92.93 0.00 0.00 5377.34 2200.14 12046.14 00:11:56.148 [2024-12-05T10:55:30.344Z] =================================================================================================================== 00:11:56.148 [2024-12-05T10:55:30.344Z] Total : 23790.78 92.93 0.00 0.00 5377.34 2200.14 12046.14 00:11:56.148 { 00:11:56.148 "results": [ 00:11:56.148 { 00:11:56.149 "job": "Nvme0n1", 00:11:56.149 "core_mask": "0x2", 00:11:56.149 "workload": "randwrite", 00:11:56.149 "status": "finished", 00:11:56.149 "queue_depth": 128, 00:11:56.149 "io_size": 4096, 00:11:56.149 "runtime": 10.005597, 00:11:56.149 "iops": 23790.784298028393, 00:11:56.149 "mibps": 92.93275116417341, 00:11:56.149 "io_failed": 0, 00:11:56.149 "io_timeout": 0, 00:11:56.149 "avg_latency_us": 5377.33948209402, 00:11:56.149 "min_latency_us": 2200.137142857143, 00:11:56.149 "max_latency_us": 12046.140952380953 00:11:56.149 } 00:11:56.149 ], 00:11:56.149 "core_count": 1 00:11:56.149 } 00:11:56.149 11:55:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4134527 00:11:56.149 11:55:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 4134527 ']' 00:11:56.149 11:55:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 4134527 00:11:56.149 11:55:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:11:56.149 11:55:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:56.149 11:55:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4134527 00:11:56.149 11:55:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:56.149 11:55:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:56.149 11:55:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4134527' 00:11:56.149 killing process with pid 4134527 00:11:56.149 11:55:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 4134527 00:11:56.149 Received shutdown signal, test time was about 10.000000 seconds 00:11:56.149 00:11:56.149 Latency(us) 00:11:56.149 [2024-12-05T10:55:30.345Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:56.149 [2024-12-05T10:55:30.345Z] =================================================================================================================== 00:11:56.149 [2024-12-05T10:55:30.345Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:56.149 11:55:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 4134527 00:11:56.149 11:55:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:56.407 11:55:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:56.666 11:55:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a0e36e8f-30e5-4e31-b681-7b701e90e61c 00:11:56.666 11:55:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:56.666 11:55:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:11:56.666 11:55:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:11:56.666 11:55:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:56.924 [2024-12-05 11:55:30.971033] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:56.925 11:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a0e36e8f-30e5-4e31-b681-7b701e90e61c 00:11:56.925 11:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:11:56.925 11:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a0e36e8f-30e5-4e31-b681-7b701e90e61c 00:11:56.925 11:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:56.925 11:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:56.925 11:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:56.925 11:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:56.925 11:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:56.925 
11:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:56.925 11:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:56.925 11:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:56.925 11:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a0e36e8f-30e5-4e31-b681-7b701e90e61c 00:11:57.183 request: 00:11:57.183 { 00:11:57.183 "uuid": "a0e36e8f-30e5-4e31-b681-7b701e90e61c", 00:11:57.183 "method": "bdev_lvol_get_lvstores", 00:11:57.183 "req_id": 1 00:11:57.183 } 00:11:57.183 Got JSON-RPC error response 00:11:57.183 response: 00:11:57.183 { 00:11:57.183 "code": -19, 00:11:57.183 "message": "No such device" 00:11:57.183 } 00:11:57.183 11:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:11:57.183 11:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:57.183 11:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:57.183 11:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:57.183 11:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:57.442 aio_bdev 00:11:57.442 11:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev ef12f921-0f68-4a59-8962-328d2d800b7a 00:11:57.442 11:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=ef12f921-0f68-4a59-8962-328d2d800b7a 00:11:57.442 11:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:57.442 11:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:11:57.442 11:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:57.442 11:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:57.442 11:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:57.442 11:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ef12f921-0f68-4a59-8962-328d2d800b7a -t 2000 00:11:57.701 [ 00:11:57.701 { 00:11:57.701 "name": "ef12f921-0f68-4a59-8962-328d2d800b7a", 00:11:57.701 "aliases": [ 00:11:57.701 "lvs/lvol" 00:11:57.701 ], 00:11:57.701 "product_name": "Logical Volume", 00:11:57.701 "block_size": 4096, 00:11:57.701 "num_blocks": 38912, 00:11:57.701 "uuid": "ef12f921-0f68-4a59-8962-328d2d800b7a", 00:11:57.701 "assigned_rate_limits": { 00:11:57.701 "rw_ios_per_sec": 0, 00:11:57.701 "rw_mbytes_per_sec": 0, 00:11:57.701 "r_mbytes_per_sec": 0, 00:11:57.701 "w_mbytes_per_sec": 0 00:11:57.701 }, 00:11:57.701 "claimed": false, 00:11:57.701 "zoned": false, 00:11:57.701 "supported_io_types": { 00:11:57.701 "read": true, 00:11:57.701 "write": true, 00:11:57.701 "unmap": true, 00:11:57.701 "flush": false, 00:11:57.701 "reset": true, 00:11:57.701 
"nvme_admin": false, 00:11:57.701 "nvme_io": false, 00:11:57.701 "nvme_io_md": false, 00:11:57.701 "write_zeroes": true, 00:11:57.701 "zcopy": false, 00:11:57.701 "get_zone_info": false, 00:11:57.701 "zone_management": false, 00:11:57.701 "zone_append": false, 00:11:57.701 "compare": false, 00:11:57.701 "compare_and_write": false, 00:11:57.701 "abort": false, 00:11:57.701 "seek_hole": true, 00:11:57.701 "seek_data": true, 00:11:57.701 "copy": false, 00:11:57.701 "nvme_iov_md": false 00:11:57.701 }, 00:11:57.701 "driver_specific": { 00:11:57.701 "lvol": { 00:11:57.701 "lvol_store_uuid": "a0e36e8f-30e5-4e31-b681-7b701e90e61c", 00:11:57.701 "base_bdev": "aio_bdev", 00:11:57.701 "thin_provision": false, 00:11:57.701 "num_allocated_clusters": 38, 00:11:57.701 "snapshot": false, 00:11:57.701 "clone": false, 00:11:57.701 "esnap_clone": false 00:11:57.701 } 00:11:57.701 } 00:11:57.701 } 00:11:57.701 ] 00:11:57.701 11:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:11:57.702 11:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a0e36e8f-30e5-4e31-b681-7b701e90e61c 00:11:57.702 11:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:57.960 11:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:57.960 11:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a0e36e8f-30e5-4e31-b681-7b701e90e61c 00:11:57.960 11:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:57.960 11:55:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:57.960 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ef12f921-0f68-4a59-8962-328d2d800b7a 00:11:58.219 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a0e36e8f-30e5-4e31-b681-7b701e90e61c 00:11:58.477 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:58.736 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:58.736 00:11:58.736 real 0m15.588s 00:11:58.736 user 0m15.213s 00:11:58.736 sys 0m1.406s 00:11:58.736 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:58.736 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:58.736 ************************************ 00:11:58.736 END TEST lvs_grow_clean 00:11:58.736 ************************************ 00:11:58.736 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:11:58.736 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:58.736 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:58.736 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:58.736 ************************************ 
00:11:58.736 START TEST lvs_grow_dirty 00:11:58.736 ************************************ 00:11:58.736 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:11:58.736 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:58.736 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:58.736 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:58.736 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:58.736 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:58.736 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:58.736 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:58.736 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:58.736 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:58.995 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:58.995 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:59.254 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=4dacb7aa-5bd9-4865-a9da-d1f142f44c58 00:11:59.254 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4dacb7aa-5bd9-4865-a9da-d1f142f44c58 00:11:59.254 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:59.254 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:59.254 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:59.254 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4dacb7aa-5bd9-4865-a9da-d1f142f44c58 lvol 150 00:11:59.513 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=a00cef68-e531-4681-9f6e-2944bafd1372 00:11:59.513 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:59.513 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:59.771 [2024-12-05 11:55:33.762341] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:11:59.771 [2024-12-05 11:55:33.762394] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:59.771 true 00:11:59.771 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4dacb7aa-5bd9-4865-a9da-d1f142f44c58 00:11:59.771 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:59.771 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:59.771 11:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:00.030 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a00cef68-e531-4681-9f6e-2944bafd1372 00:12:00.289 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:00.289 [2024-12-05 11:55:34.480496] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:00.547 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:00.547 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:00.547 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4137141 00:12:00.547 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:00.547 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4137141 /var/tmp/bdevperf.sock 00:12:00.547 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 4137141 ']' 00:12:00.547 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:00.547 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:00.547 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:00.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:00.547 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:00.548 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:00.548 [2024-12-05 11:55:34.696236] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:12:00.548 [2024-12-05 11:55:34.696281] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4137141 ] 00:12:00.806 [2024-12-05 11:55:34.767487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.806 [2024-12-05 11:55:34.807421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:00.806 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:00.806 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:12:00.806 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:01.063 Nvme0n1 00:12:01.063 11:55:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:01.320 [ 00:12:01.320 { 00:12:01.320 "name": "Nvme0n1", 00:12:01.320 "aliases": [ 00:12:01.320 "a00cef68-e531-4681-9f6e-2944bafd1372" 00:12:01.320 ], 00:12:01.320 "product_name": "NVMe disk", 00:12:01.320 "block_size": 4096, 00:12:01.320 "num_blocks": 38912, 00:12:01.320 "uuid": "a00cef68-e531-4681-9f6e-2944bafd1372", 00:12:01.320 "numa_id": 1, 00:12:01.320 "assigned_rate_limits": { 00:12:01.320 "rw_ios_per_sec": 0, 00:12:01.320 "rw_mbytes_per_sec": 0, 00:12:01.320 "r_mbytes_per_sec": 0, 00:12:01.320 "w_mbytes_per_sec": 0 00:12:01.320 }, 00:12:01.320 "claimed": false, 00:12:01.320 "zoned": false, 00:12:01.320 "supported_io_types": { 00:12:01.320 "read": true, 
00:12:01.320 "write": true, 00:12:01.320 "unmap": true, 00:12:01.320 "flush": true, 00:12:01.320 "reset": true, 00:12:01.320 "nvme_admin": true, 00:12:01.320 "nvme_io": true, 00:12:01.320 "nvme_io_md": false, 00:12:01.320 "write_zeroes": true, 00:12:01.320 "zcopy": false, 00:12:01.320 "get_zone_info": false, 00:12:01.320 "zone_management": false, 00:12:01.320 "zone_append": false, 00:12:01.320 "compare": true, 00:12:01.320 "compare_and_write": true, 00:12:01.320 "abort": true, 00:12:01.320 "seek_hole": false, 00:12:01.320 "seek_data": false, 00:12:01.320 "copy": true, 00:12:01.320 "nvme_iov_md": false 00:12:01.320 }, 00:12:01.320 "memory_domains": [ 00:12:01.320 { 00:12:01.320 "dma_device_id": "system", 00:12:01.320 "dma_device_type": 1 00:12:01.320 } 00:12:01.320 ], 00:12:01.320 "driver_specific": { 00:12:01.320 "nvme": [ 00:12:01.320 { 00:12:01.320 "trid": { 00:12:01.320 "trtype": "TCP", 00:12:01.320 "adrfam": "IPv4", 00:12:01.320 "traddr": "10.0.0.2", 00:12:01.320 "trsvcid": "4420", 00:12:01.320 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:01.320 }, 00:12:01.320 "ctrlr_data": { 00:12:01.320 "cntlid": 1, 00:12:01.320 "vendor_id": "0x8086", 00:12:01.320 "model_number": "SPDK bdev Controller", 00:12:01.320 "serial_number": "SPDK0", 00:12:01.320 "firmware_revision": "25.01", 00:12:01.320 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:01.320 "oacs": { 00:12:01.320 "security": 0, 00:12:01.320 "format": 0, 00:12:01.320 "firmware": 0, 00:12:01.320 "ns_manage": 0 00:12:01.320 }, 00:12:01.320 "multi_ctrlr": true, 00:12:01.320 "ana_reporting": false 00:12:01.320 }, 00:12:01.320 "vs": { 00:12:01.320 "nvme_version": "1.3" 00:12:01.320 }, 00:12:01.320 "ns_data": { 00:12:01.320 "id": 1, 00:12:01.320 "can_share": true 00:12:01.320 } 00:12:01.320 } 00:12:01.320 ], 00:12:01.320 "mp_policy": "active_passive" 00:12:01.320 } 00:12:01.320 } 00:12:01.320 ] 00:12:01.320 11:55:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=4137365 00:12:01.320 11:55:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:01.320 11:55:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:01.320 Running I/O for 10 seconds... 00:12:02.694 Latency(us) 00:12:02.694 [2024-12-05T10:55:36.890Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:02.694 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:02.694 Nvme0n1 : 1.00 23543.00 91.96 0.00 0.00 0.00 0.00 0.00 00:12:02.694 [2024-12-05T10:55:36.890Z] =================================================================================================================== 00:12:02.694 [2024-12-05T10:55:36.890Z] Total : 23543.00 91.96 0.00 0.00 0.00 0.00 0.00 00:12:02.694 00:12:03.261 11:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4dacb7aa-5bd9-4865-a9da-d1f142f44c58 00:12:03.519 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:03.519 Nvme0n1 : 2.00 23650.00 92.38 0.00 0.00 0.00 0.00 0.00 00:12:03.519 [2024-12-05T10:55:37.715Z] =================================================================================================================== 00:12:03.519 [2024-12-05T10:55:37.715Z] Total : 23650.00 92.38 0.00 0.00 0.00 0.00 0.00 00:12:03.519 00:12:03.519 true 00:12:03.519 11:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4dacb7aa-5bd9-4865-a9da-d1f142f44c58 00:12:03.519 11:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:12:03.778 11:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:03.778 11:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:03.778 11:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 4137365 00:12:04.345 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:04.345 Nvme0n1 : 3.00 23666.67 92.45 0.00 0.00 0.00 0.00 0.00 00:12:04.345 [2024-12-05T10:55:38.541Z] =================================================================================================================== 00:12:04.345 [2024-12-05T10:55:38.541Z] Total : 23666.67 92.45 0.00 0.00 0.00 0.00 0.00 00:12:04.345 00:12:05.727 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:05.727 Nvme0n1 : 4.00 23735.25 92.72 0.00 0.00 0.00 0.00 0.00 00:12:05.727 [2024-12-05T10:55:39.923Z] =================================================================================================================== 00:12:05.727 [2024-12-05T10:55:39.923Z] Total : 23735.25 92.72 0.00 0.00 0.00 0.00 0.00 00:12:05.727 00:12:06.662 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:06.662 Nvme0n1 : 5.00 23764.20 92.83 0.00 0.00 0.00 0.00 0.00 00:12:06.662 [2024-12-05T10:55:40.858Z] =================================================================================================================== 00:12:06.662 [2024-12-05T10:55:40.858Z] Total : 23764.20 92.83 0.00 0.00 0.00 0.00 0.00 00:12:06.662 00:12:07.598 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:07.598 Nvme0n1 : 6.00 23800.83 92.97 0.00 0.00 0.00 0.00 0.00 00:12:07.598 [2024-12-05T10:55:41.794Z] =================================================================================================================== 00:12:07.598 
[2024-12-05T10:55:41.794Z] Total : 23800.83 92.97 0.00 0.00 0.00 0.00 0.00 00:12:07.598 00:12:08.533 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:08.533 Nvme0n1 : 7.00 23839.57 93.12 0.00 0.00 0.00 0.00 0.00 00:12:08.533 [2024-12-05T10:55:42.729Z] =================================================================================================================== 00:12:08.533 [2024-12-05T10:55:42.729Z] Total : 23839.57 93.12 0.00 0.00 0.00 0.00 0.00 00:12:08.533 00:12:09.467 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:09.467 Nvme0n1 : 8.00 23869.12 93.24 0.00 0.00 0.00 0.00 0.00 00:12:09.467 [2024-12-05T10:55:43.663Z] =================================================================================================================== 00:12:09.467 [2024-12-05T10:55:43.663Z] Total : 23869.12 93.24 0.00 0.00 0.00 0.00 0.00 00:12:09.467 00:12:10.402 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:10.402 Nvme0n1 : 9.00 23898.44 93.35 0.00 0.00 0.00 0.00 0.00 00:12:10.402 [2024-12-05T10:55:44.598Z] =================================================================================================================== 00:12:10.402 [2024-12-05T10:55:44.598Z] Total : 23898.44 93.35 0.00 0.00 0.00 0.00 0.00 00:12:10.402 00:12:11.339 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:11.339 Nvme0n1 : 10.00 23909.40 93.40 0.00 0.00 0.00 0.00 0.00 00:12:11.339 [2024-12-05T10:55:45.535Z] =================================================================================================================== 00:12:11.339 [2024-12-05T10:55:45.535Z] Total : 23909.40 93.40 0.00 0.00 0.00 0.00 0.00 00:12:11.339 00:12:11.339 00:12:11.339 Latency(us) 00:12:11.339 [2024-12-05T10:55:45.535Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:11.339 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:12:11.339 Nvme0n1 : 10.00 23914.75 93.42 0.00 0.00 5349.55 3136.37 10673.01 00:12:11.339 [2024-12-05T10:55:45.535Z] =================================================================================================================== 00:12:11.339 [2024-12-05T10:55:45.535Z] Total : 23914.75 93.42 0.00 0.00 5349.55 3136.37 10673.01 00:12:11.339 { 00:12:11.339 "results": [ 00:12:11.339 { 00:12:11.339 "job": "Nvme0n1", 00:12:11.339 "core_mask": "0x2", 00:12:11.339 "workload": "randwrite", 00:12:11.339 "status": "finished", 00:12:11.339 "queue_depth": 128, 00:12:11.339 "io_size": 4096, 00:12:11.339 "runtime": 10.003115, 00:12:11.339 "iops": 23914.750555202056, 00:12:11.339 "mibps": 93.41699435625803, 00:12:11.339 "io_failed": 0, 00:12:11.339 "io_timeout": 0, 00:12:11.339 "avg_latency_us": 5349.552658407353, 00:12:11.339 "min_latency_us": 3136.365714285714, 00:12:11.339 "max_latency_us": 10673.005714285715 00:12:11.339 } 00:12:11.339 ], 00:12:11.339 "core_count": 1 00:12:11.339 } 00:12:11.339 11:55:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4137141 00:12:11.339 11:55:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 4137141 ']' 00:12:11.339 11:55:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 4137141 00:12:11.339 11:55:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:12:11.597 11:55:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:11.597 11:55:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4137141 00:12:11.597 11:55:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:11.597 11:55:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:11.597 11:55:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4137141' 00:12:11.597 killing process with pid 4137141 00:12:11.597 11:55:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 4137141 00:12:11.597 Received shutdown signal, test time was about 10.000000 seconds 00:12:11.597 00:12:11.597 Latency(us) 00:12:11.597 [2024-12-05T10:55:45.793Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:11.597 [2024-12-05T10:55:45.793Z] =================================================================================================================== 00:12:11.597 [2024-12-05T10:55:45.793Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:11.597 11:55:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 4137141 00:12:11.597 11:55:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:11.855 11:55:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:12.112 11:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4dacb7aa-5bd9-4865-a9da-d1f142f44c58 00:12:12.112 11:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:12.371 11:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:12:12.371 11:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:12:12.371 11:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 4134031 00:12:12.371 11:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 4134031 00:12:12.371 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 4134031 Killed "${NVMF_APP[@]}" "$@" 00:12:12.371 11:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:12:12.371 11:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:12:12.371 11:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:12:12.371 11:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:12.371 11:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:12.371 11:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@328 -- # nvmfpid=4139214 00:12:12.371 11:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@329 -- # waitforlisten 4139214 00:12:12.371 11:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:12.371 11:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 4139214 ']' 00:12:12.371 11:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.371 11:55:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:12.371 11:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.371 11:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:12.371 11:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:12.371 [2024-12-05 11:55:46.476547] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:12:12.371 [2024-12-05 11:55:46.476591] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:12.371 [2024-12-05 11:55:46.556064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.629 [2024-12-05 11:55:46.597147] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:12.629 [2024-12-05 11:55:46.597180] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:12.629 [2024-12-05 11:55:46.597187] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:12.629 [2024-12-05 11:55:46.597193] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:12.629 [2024-12-05 11:55:46.597199] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:12.629 [2024-12-05 11:55:46.597795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.629 11:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:12.629 11:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:12:12.629 11:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:12:12.629 11:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:12.629 11:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:12.629 11:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:12.629 11:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:12.887 [2024-12-05 11:55:46.895876] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:12.887 [2024-12-05 11:55:46.895962] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:12.887 [2024-12-05 11:55:46.895988] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:12.887 11:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:12:12.887 11:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev a00cef68-e531-4681-9f6e-2944bafd1372 00:12:12.887 11:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=a00cef68-e531-4681-9f6e-2944bafd1372 
00:12:12.887 11:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:12.887 11:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:12:12.887 11:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:12.887 11:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:12.887 11:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:13.145 11:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a00cef68-e531-4681-9f6e-2944bafd1372 -t 2000 00:12:13.146 [ 00:12:13.146 { 00:12:13.146 "name": "a00cef68-e531-4681-9f6e-2944bafd1372", 00:12:13.146 "aliases": [ 00:12:13.146 "lvs/lvol" 00:12:13.146 ], 00:12:13.146 "product_name": "Logical Volume", 00:12:13.146 "block_size": 4096, 00:12:13.146 "num_blocks": 38912, 00:12:13.146 "uuid": "a00cef68-e531-4681-9f6e-2944bafd1372", 00:12:13.146 "assigned_rate_limits": { 00:12:13.146 "rw_ios_per_sec": 0, 00:12:13.146 "rw_mbytes_per_sec": 0, 00:12:13.146 "r_mbytes_per_sec": 0, 00:12:13.146 "w_mbytes_per_sec": 0 00:12:13.146 }, 00:12:13.146 "claimed": false, 00:12:13.146 "zoned": false, 00:12:13.146 "supported_io_types": { 00:12:13.146 "read": true, 00:12:13.146 "write": true, 00:12:13.146 "unmap": true, 00:12:13.146 "flush": false, 00:12:13.146 "reset": true, 00:12:13.146 "nvme_admin": false, 00:12:13.146 "nvme_io": false, 00:12:13.146 "nvme_io_md": false, 00:12:13.146 "write_zeroes": true, 00:12:13.146 "zcopy": false, 00:12:13.146 "get_zone_info": false, 00:12:13.146 "zone_management": false, 00:12:13.146 "zone_append": 
false, 00:12:13.146 "compare": false, 00:12:13.146 "compare_and_write": false, 00:12:13.146 "abort": false, 00:12:13.146 "seek_hole": true, 00:12:13.146 "seek_data": true, 00:12:13.146 "copy": false, 00:12:13.146 "nvme_iov_md": false 00:12:13.146 }, 00:12:13.146 "driver_specific": { 00:12:13.146 "lvol": { 00:12:13.146 "lvol_store_uuid": "4dacb7aa-5bd9-4865-a9da-d1f142f44c58", 00:12:13.146 "base_bdev": "aio_bdev", 00:12:13.146 "thin_provision": false, 00:12:13.146 "num_allocated_clusters": 38, 00:12:13.146 "snapshot": false, 00:12:13.146 "clone": false, 00:12:13.146 "esnap_clone": false 00:12:13.146 } 00:12:13.146 } 00:12:13.146 } 00:12:13.146 ] 00:12:13.146 11:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:12:13.146 11:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4dacb7aa-5bd9-4865-a9da-d1f142f44c58 00:12:13.146 11:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:12:13.403 11:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:12:13.403 11:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4dacb7aa-5bd9-4865-a9da-d1f142f44c58 00:12:13.403 11:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:12:13.661 11:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:12:13.661 11:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:12:13.661 [2024-12-05 11:55:47.836647] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:13.919 11:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4dacb7aa-5bd9-4865-a9da-d1f142f44c58 00:12:13.919 11:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:12:13.919 11:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4dacb7aa-5bd9-4865-a9da-d1f142f44c58 00:12:13.919 11:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:13.919 11:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:13.919 11:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:13.919 11:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:13.919 11:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:13.919 11:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:13.919 11:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:13.919 11:55:47 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:13.919 11:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4dacb7aa-5bd9-4865-a9da-d1f142f44c58 00:12:13.919 request: 00:12:13.919 { 00:12:13.919 "uuid": "4dacb7aa-5bd9-4865-a9da-d1f142f44c58", 00:12:13.919 "method": "bdev_lvol_get_lvstores", 00:12:13.919 "req_id": 1 00:12:13.919 } 00:12:13.919 Got JSON-RPC error response 00:12:13.919 response: 00:12:13.919 { 00:12:13.919 "code": -19, 00:12:13.919 "message": "No such device" 00:12:13.919 } 00:12:13.919 11:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:12:13.919 11:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:13.919 11:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:13.919 11:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:13.919 11:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:14.177 aio_bdev 00:12:14.177 11:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a00cef68-e531-4681-9f6e-2944bafd1372 00:12:14.177 11:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=a00cef68-e531-4681-9f6e-2944bafd1372 00:12:14.177 11:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:14.177 11:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:12:14.177 11:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:14.177 11:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:14.177 11:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:14.435 11:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a00cef68-e531-4681-9f6e-2944bafd1372 -t 2000 00:12:14.435 [ 00:12:14.435 { 00:12:14.435 "name": "a00cef68-e531-4681-9f6e-2944bafd1372", 00:12:14.435 "aliases": [ 00:12:14.435 "lvs/lvol" 00:12:14.435 ], 00:12:14.435 "product_name": "Logical Volume", 00:12:14.435 "block_size": 4096, 00:12:14.435 "num_blocks": 38912, 00:12:14.435 "uuid": "a00cef68-e531-4681-9f6e-2944bafd1372", 00:12:14.435 "assigned_rate_limits": { 00:12:14.435 "rw_ios_per_sec": 0, 00:12:14.435 "rw_mbytes_per_sec": 0, 00:12:14.435 "r_mbytes_per_sec": 0, 00:12:14.435 "w_mbytes_per_sec": 0 00:12:14.435 }, 00:12:14.435 "claimed": false, 00:12:14.435 "zoned": false, 00:12:14.435 "supported_io_types": { 00:12:14.435 "read": true, 00:12:14.435 "write": true, 00:12:14.435 "unmap": true, 00:12:14.435 "flush": false, 00:12:14.435 "reset": true, 00:12:14.435 "nvme_admin": false, 00:12:14.435 "nvme_io": false, 00:12:14.435 "nvme_io_md": false, 00:12:14.435 "write_zeroes": true, 00:12:14.435 "zcopy": false, 00:12:14.435 "get_zone_info": false, 00:12:14.435 "zone_management": false, 00:12:14.435 "zone_append": false, 00:12:14.435 "compare": false, 00:12:14.435 "compare_and_write": false, 
00:12:14.435 "abort": false, 00:12:14.435 "seek_hole": true, 00:12:14.435 "seek_data": true, 00:12:14.435 "copy": false, 00:12:14.435 "nvme_iov_md": false 00:12:14.435 }, 00:12:14.435 "driver_specific": { 00:12:14.435 "lvol": { 00:12:14.435 "lvol_store_uuid": "4dacb7aa-5bd9-4865-a9da-d1f142f44c58", 00:12:14.435 "base_bdev": "aio_bdev", 00:12:14.435 "thin_provision": false, 00:12:14.435 "num_allocated_clusters": 38, 00:12:14.435 "snapshot": false, 00:12:14.435 "clone": false, 00:12:14.435 "esnap_clone": false 00:12:14.435 } 00:12:14.435 } 00:12:14.435 } 00:12:14.435 ] 00:12:14.435 11:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:12:14.435 11:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4dacb7aa-5bd9-4865-a9da-d1f142f44c58 00:12:14.435 11:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:14.693 11:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:14.693 11:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4dacb7aa-5bd9-4865-a9da-d1f142f44c58 00:12:14.693 11:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:14.965 11:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:14.965 11:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a00cef68-e531-4681-9f6e-2944bafd1372 00:12:15.224 11:55:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4dacb7aa-5bd9-4865-a9da-d1f142f44c58 00:12:15.224 11:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:15.482 11:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:15.482 00:12:15.482 real 0m16.760s 00:12:15.482 user 0m43.393s 00:12:15.482 sys 0m3.803s 00:12:15.482 11:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:15.482 11:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:15.482 ************************************ 00:12:15.482 END TEST lvs_grow_dirty 00:12:15.482 ************************************ 00:12:15.482 11:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:12:15.482 11:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:12:15.482 11:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:12:15.483 11:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:12:15.483 11:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:15.483 11:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:12:15.483 11:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:12:15.483 11:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:12:15.483 11:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:15.483 nvmf_trace.0 00:12:15.483 11:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:12:15.483 11:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:15.483 11:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # nvmfcleanup 00:12:15.483 11:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@99 -- # sync 00:12:15.483 11:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:12:15.483 11:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # set +e 00:12:15.483 11:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # for i in {1..20} 00:12:15.483 11:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:12:15.483 rmmod nvme_tcp 00:12:15.741 rmmod nvme_fabrics 00:12:15.741 rmmod nvme_keyring 00:12:15.741 11:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:12:15.741 11:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # set -e 00:12:15.741 11:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # return 0 00:12:15.741 11:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # '[' -n 4139214 ']' 00:12:15.741 11:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@337 -- # killprocess 4139214 00:12:15.741 11:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 4139214 ']' 00:12:15.741 11:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 4139214 
00:12:15.741 11:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:12:15.741 11:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:15.741 11:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4139214 00:12:15.741 11:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:15.741 11:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:15.741 11:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4139214' 00:12:15.741 killing process with pid 4139214 00:12:15.741 11:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 4139214 00:12:15.741 11:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 4139214 00:12:16.000 11:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:12:16.000 11:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # nvmf_fini 00:12:16.000 11:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@264 -- # local dev 00:12:16.000 11:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@267 -- # remove_target_ns 00:12:16.000 11:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:16.000 11:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:16.000 11:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:17.905 11:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@268 -- # delete_main_bridge 00:12:17.905 11:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:12:17.905 11:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@130 -- # return 0 00:12:17.905 11:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:17.905 11:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:12:17.905 11:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:12:17.905 11:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:12:17.905 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:12:17.905 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:12:17.905 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:12:17.905 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:12:17.905 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:17.905 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:12:17.905 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:12:17.905 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:12:17.905 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:12:17.905 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:12:17.905 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:12:17.905 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:12:17.905 11:55:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:12:17.905 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # _dev=0 00:12:17.905 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # dev_map=() 00:12:17.905 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@284 -- # iptr 00:12:17.905 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@542 -- # iptables-save 00:12:17.905 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:12:17.905 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@542 -- # iptables-restore 00:12:17.905 00:12:17.905 real 0m41.816s 00:12:17.905 user 1m4.213s 00:12:17.905 sys 0m10.293s 00:12:17.905 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:17.905 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:17.905 ************************************ 00:12:17.905 END TEST nvmf_lvs_grow 00:12:17.905 ************************************ 00:12:17.905 11:55:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@24 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:17.905 11:55:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:17.905 11:55:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:17.905 11:55:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:17.905 ************************************ 00:12:17.905 START TEST nvmf_bdev_io_wait 00:12:17.905 ************************************ 00:12:17.905 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 
00:12:18.167 * Looking for test storage... 00:12:18.167 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:18.167 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:18.167 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:12:18.167 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:18.167 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:18.167 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:18.167 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:18.167 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:18.167 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:12:18.167 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:12:18.167 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:12:18.167 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:12:18.167 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:12:18.167 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:12:18.167 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:12:18.167 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:18.167 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:12:18.167 11:55:52 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:12:18.167 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:18.167 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:18.167 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:12:18.167 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:12:18.167 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:18.167 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:12:18.167 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:12:18.167 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:12:18.167 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:12:18.167 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:18.167 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:12:18.167 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:12:18.167 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:18.167 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:18.167 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:18.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.168 --rc genhtml_branch_coverage=1 00:12:18.168 --rc genhtml_function_coverage=1 00:12:18.168 --rc genhtml_legend=1 00:12:18.168 --rc geninfo_all_blocks=1 00:12:18.168 --rc geninfo_unexecuted_blocks=1 00:12:18.168 00:12:18.168 ' 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:18.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.168 --rc genhtml_branch_coverage=1 00:12:18.168 --rc genhtml_function_coverage=1 00:12:18.168 --rc genhtml_legend=1 00:12:18.168 --rc geninfo_all_blocks=1 00:12:18.168 --rc geninfo_unexecuted_blocks=1 00:12:18.168 00:12:18.168 ' 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:18.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.168 --rc genhtml_branch_coverage=1 00:12:18.168 --rc genhtml_function_coverage=1 00:12:18.168 --rc genhtml_legend=1 00:12:18.168 --rc geninfo_all_blocks=1 00:12:18.168 --rc geninfo_unexecuted_blocks=1 00:12:18.168 00:12:18.168 ' 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:18.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.168 --rc genhtml_branch_coverage=1 00:12:18.168 --rc genhtml_function_coverage=1 00:12:18.168 --rc genhtml_legend=1 00:12:18.168 --rc geninfo_all_blocks=1 00:12:18.168 --rc geninfo_unexecuted_blocks=1 00:12:18.168 00:12:18.168 ' 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:12:18.168 11:55:52 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@50 -- # : 0 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:12:18.168 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # have_pci_nics=0 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # prepare_net_devs 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # local -g is_hw=no 
00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # remove_target_ns 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # xtrace_disable 00:12:18.168 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:24.754 11:55:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:24.754 11:55:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@131 -- # pci_devs=() 00:12:24.754 11:55:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@131 -- # local -a pci_devs 00:12:24.754 11:55:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@132 -- # pci_net_devs=() 00:12:24.754 11:55:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:12:24.754 11:55:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@133 -- # pci_drivers=() 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@133 -- # local -A pci_drivers 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@135 -- # net_devs=() 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@135 -- # local -ga net_devs 00:12:24.754 11:55:58 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@136 -- # e810=() 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@136 -- # local -ga e810 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@137 -- # x722=() 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@137 -- # local -ga x722 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@138 -- # mlx=() 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@138 -- # local -ga mlx 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:24.754 11:55:58 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:24.754 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:12:24.754 11:55:58 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:24.754 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # [[ up == up ]] 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:24.754 Found net devices under 0000:86:00.0: cvl_0_0 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # [[ up == up ]] 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:24.754 Found net devices under 0000:86:00.1: cvl_0_1 00:12:24.754 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # is_hw=yes 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:12:24.755 
11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@257 -- # create_target_ns 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@27 -- # local -gA dev_map 
00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@28 -- # local -g _dev 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # ips=() 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@67 -- # [[ phy == 
veth ]] 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772161 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@210 
-- # tee /sys/class/net/cvl_0_0/ifalias 00:12:24.755 10.0.0.1 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772162 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:12:24.755 10.0.0.2 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
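The trace's val_to_ip step (setup.sh@11-13) turns the integer pool value 167772161 (0x0A000001) into dotted-quad 10.0.0.1 via printf; the next pair member, 167772162, becomes 10.0.0.2. A self-contained re-implementation of that conversion:

```shell
# Unpack a 32-bit integer into dotted-quad notation, as setup.sh's
# val_to_ip does: 167772161 == 0x0A000001 -> 10.0.0.1.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 255 )) $(( (val >> 16) & 255 )) \
    $(( (val >> 8)  & 255 )) $((  val        & 255 ))
}
```

This is why the interface loop can simply do `ips=("$ip" $((++ip)))`: consecutive integers yield consecutive addresses in the 10.0.0.0/24 pool.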
nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@85 -- # 
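Condensed, the create_target_ns / setup_interface_pair sequence traced above boils down to eight privileged commands. The sketch below is a dry run — run() only echoes, since the real commands need root and a cvl_0_1 NIC to move; drop the wrapper to execute them. Device names, IPs, and the port come straight from the log:

```shell
run() { printf '+ %s\n' "$*"; }    # dry-run wrapper: print instead of execute
setup_pair_dryrun() {
  local ns=nvmf_ns_spdk initiator=cvl_0_0 target=cvl_0_1
  run ip netns add "$ns"                                    # create_target_ns
  run ip netns exec "$ns" ip link set lo up                 # set_up lo in the ns
  run ip link set "$target" netns "$ns"                     # add_to_ns
  run ip addr add 10.0.0.1/24 dev "$initiator"              # set_ip, host side
  run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target"
  run ip link set "$initiator" up
  run ip netns exec "$ns" ip link set "$target" up
  run iptables -I INPUT 1 -i "$initiator" -p tcp --dport 4420 -j ACCEPT
}
setup_pair_dryrun
```

The iptables rule opens the standard NVMe/TCP discovery port (4420) on the initiator-side interface, matching the ipts call in the trace.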
dev_map["$key_initiator"]=cvl_0_0 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@38 -- # ping_ips 1 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=initiator0 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:12:24.755 11:55:58 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:12:24.755 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:12:24.756 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:24.756 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.489 ms 00:12:24.756 00:12:24.756 --- 10.0.0.1 ping statistics --- 00:12:24.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.756 rtt min/avg/max/mdev = 0.489/0.489/0.489/0.000 ms 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev target0 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=target0 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:12:24.756 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:24.756 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:12:24.756 00:12:24.756 --- 10.0.0.2 ping statistics --- 00:12:24.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.756 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # (( pair++ )) 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # return 0 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
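The asymmetry visible in the two pings — `ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1` versus a plain `ping -c 1 10.0.0.2` — comes from a bash nameref pattern used throughout setup.sh: when a function is handed the *name* of a command array (NVMF_TARGET_NS_CMD), `local -n` dereferences it and its words are prefixed onto the command. A sketch of that dispatch, building the command string instead of eval'ing it:

```shell
# Nameref dispatch as in setup.sh's ping_ip/set_up/set_ip helpers: if
# in_ns names an array, prefix its words (e.g. "ip netns exec nvmf_ns_spdk").
NVMF_TARGET_NS_CMD=(ip netns exec nvmf_ns_spdk)
build_ping_cmd() {
  local ip=$1 in_ns=$2 count=1
  if [[ -n $in_ns ]]; then
    local -n ns=$in_ns                 # nameref to the caller's array
    echo "${ns[*]} ping -c $count $ip"
  else
    echo "ping -c $count $ip"
  fi
}
```

So pinging the initiator address happens from inside the namespace, and pinging the target address from the host — exercising the veth-less phy pair in both directions.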
nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=initiator0 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.1 
00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=initiator1 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # return 1 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev= 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@169 -- # return 0 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:24.756 11:55:58 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev target0 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=target0 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@196 -- # get_target_ip_address 
1 NVMF_TARGET_NS_CMD 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev target1 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=target1 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # return 1 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev= 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@169 -- # return 0 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:12:24.756 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:12:24.757 11:55:58 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # nvmfpid=4143307 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # waitforlisten 4143307 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 4143307 ']' 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
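nvmfappstart launches nvmf_tgt inside the namespace with `--wait-for-rpc`, then waitforlisten blocks on the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." step. A simplified sketch of that wait loop — not the actual autotest_common.sh implementation, just the shape of it: poll until the RPC socket path exists while the pid stays alive, within a retry budget:

```shell
# Poll for the app's RPC socket, waitforlisten-style. Returns 0 once the
# path exists, 1 if the process dies or the retry budget is exhausted.
wait_for_rpc_sock() {
  local pid=$1 sock=${2:-/var/tmp/spdk.sock} max_retries=${3:-100}
  local i=0
  while (( i++ < max_retries )); do
    kill -0 "$pid" 2>/dev/null || return 1   # target process died
    [[ -e $sock ]] && return 0               # real code checks it is a UNIX socket
    sleep 0.1
  done
  return 1
}
```

Only after this returns does the script proceed to issue rpc_cmd calls against the socket.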
00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:24.757 [2024-12-05 11:55:58.493578] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:12:24.757 [2024-12-05 11:55:58.493622] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:24.757 [2024-12-05 11:55:58.570293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:24.757 [2024-12-05 11:55:58.613485] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:24.757 [2024-12-05 11:55:58.613523] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:24.757 [2024-12-05 11:55:58.613530] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:24.757 [2024-12-05 11:55:58.613538] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:24.757 [2024-12-05 11:55:58.613543] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:24.757 [2024-12-05 11:55:58.614998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:24.757 [2024-12-05 11:55:58.615110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:24.757 [2024-12-05 11:55:58.615216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.757 [2024-12-05 11:55:58.615216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:24.757 [2024-12-05 11:55:58.747233] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:24.757 Malloc0 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:24.757 11:55:58 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:24.757 [2024-12-05 11:55:58.794554] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=4143447 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=4143449 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:12:24.757 { 00:12:24.757 "params": { 00:12:24.757 "name": "Nvme$subsystem", 00:12:24.757 "trtype": "$TEST_TRANSPORT", 00:12:24.757 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:24.757 "adrfam": "ipv4", 00:12:24.757 "trsvcid": "$NVMF_PORT", 00:12:24.757 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:24.757 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:24.757 "hdgst": ${hdgst:-false}, 00:12:24.757 "ddgst": ${ddgst:-false} 00:12:24.757 }, 00:12:24.757 "method": "bdev_nvme_attach_controller" 00:12:24.757 } 00:12:24.757 EOF 00:12:24.757 )") 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=4143452 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:12:24.757 { 00:12:24.757 "params": { 00:12:24.757 "name": "Nvme$subsystem", 00:12:24.757 
"trtype": "$TEST_TRANSPORT", 00:12:24.757 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:24.757 "adrfam": "ipv4", 00:12:24.757 "trsvcid": "$NVMF_PORT", 00:12:24.757 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:24.757 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:24.757 "hdgst": ${hdgst:-false}, 00:12:24.757 "ddgst": ${ddgst:-false} 00:12:24.757 }, 00:12:24.757 "method": "bdev_nvme_attach_controller" 00:12:24.757 } 00:12:24.757 EOF 00:12:24.757 )") 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=4143456 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:24.757 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:12:24.757 { 00:12:24.757 "params": { 00:12:24.758 "name": "Nvme$subsystem", 00:12:24.758 "trtype": "$TEST_TRANSPORT", 00:12:24.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:24.758 "adrfam": "ipv4", 00:12:24.758 "trsvcid": "$NVMF_PORT", 00:12:24.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:24.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:24.758 "hdgst": ${hdgst:-false}, 00:12:24.758 "ddgst": ${ddgst:-false} 00:12:24.758 }, 00:12:24.758 "method": "bdev_nvme_attach_controller" 
00:12:24.758 } 00:12:24.758 EOF 00:12:24.758 )") 00:12:24.758 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:24.758 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:12:24.758 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:12:24.758 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:12:24.758 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:12:24.758 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:12:24.758 { 00:12:24.758 "params": { 00:12:24.758 "name": "Nvme$subsystem", 00:12:24.758 "trtype": "$TEST_TRANSPORT", 00:12:24.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:24.758 "adrfam": "ipv4", 00:12:24.758 "trsvcid": "$NVMF_PORT", 00:12:24.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:24.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:24.758 "hdgst": ${hdgst:-false}, 00:12:24.758 "ddgst": ${ddgst:-false} 00:12:24.758 }, 00:12:24.758 "method": "bdev_nvme_attach_controller" 00:12:24.758 } 00:12:24.758 EOF 00:12:24.758 )") 00:12:24.758 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:12:24.758 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 4143447 00:12:24.758 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:12:24.758 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:12:24.758 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:12:24.758 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 
00:12:24.758 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:12:24.758 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:12:24.758 "params": { 00:12:24.758 "name": "Nvme1", 00:12:24.758 "trtype": "tcp", 00:12:24.758 "traddr": "10.0.0.2", 00:12:24.758 "adrfam": "ipv4", 00:12:24.758 "trsvcid": "4420", 00:12:24.758 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:24.758 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:24.758 "hdgst": false, 00:12:24.758 "ddgst": false 00:12:24.758 }, 00:12:24.758 "method": "bdev_nvme_attach_controller" 00:12:24.758 }' 00:12:24.758 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:12:24.758 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:12:24.758 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:12:24.758 "params": { 00:12:24.758 "name": "Nvme1", 00:12:24.758 "trtype": "tcp", 00:12:24.758 "traddr": "10.0.0.2", 00:12:24.758 "adrfam": "ipv4", 00:12:24.758 "trsvcid": "4420", 00:12:24.758 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:24.758 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:24.758 "hdgst": false, 00:12:24.758 "ddgst": false 00:12:24.758 }, 00:12:24.758 "method": "bdev_nvme_attach_controller" 00:12:24.758 }' 00:12:24.758 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:12:24.758 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:12:24.758 "params": { 00:12:24.758 "name": "Nvme1", 00:12:24.758 "trtype": "tcp", 00:12:24.758 "traddr": "10.0.0.2", 00:12:24.758 "adrfam": "ipv4", 00:12:24.758 "trsvcid": "4420", 00:12:24.758 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:24.758 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:24.758 "hdgst": false, 00:12:24.758 "ddgst": false 00:12:24.758 }, 00:12:24.758 "method": 
"bdev_nvme_attach_controller" 00:12:24.758 }' 00:12:24.758 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:12:24.758 11:55:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:12:24.758 "params": { 00:12:24.758 "name": "Nvme1", 00:12:24.758 "trtype": "tcp", 00:12:24.758 "traddr": "10.0.0.2", 00:12:24.758 "adrfam": "ipv4", 00:12:24.758 "trsvcid": "4420", 00:12:24.758 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:24.758 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:24.758 "hdgst": false, 00:12:24.758 "ddgst": false 00:12:24.758 }, 00:12:24.758 "method": "bdev_nvme_attach_controller" 00:12:24.758 }' 00:12:24.758 [2024-12-05 11:55:58.845830] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:12:24.758 [2024-12-05 11:55:58.845881] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:12:24.758 [2024-12-05 11:55:58.849398] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:12:24.758 [2024-12-05 11:55:58.849438] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:12:24.758 [2024-12-05 11:55:58.851186] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:12:24.758 [2024-12-05 11:55:58.851234] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:12:24.758 [2024-12-05 11:55:58.852456] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:12:24.758 [2024-12-05 11:55:58.852499] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:12:25.018 [2024-12-05 11:55:59.032518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.018 [2024-12-05 11:55:59.075371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:25.018 [2024-12-05 11:55:59.115183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.018 [2024-12-05 11:55:59.157947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:25.277 [2024-12-05 11:55:59.218818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.277 [2024-12-05 11:55:59.275579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:25.277 [2024-12-05 11:55:59.276953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.277 [2024-12-05 11:55:59.317734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:12:25.277 Running I/O for 1 seconds... 00:12:25.277 Running I/O for 1 seconds... 00:12:25.277 Running I/O for 1 seconds... 00:12:25.537 Running I/O for 1 seconds... 
00:12:26.474 11945.00 IOPS, 46.66 MiB/s
00:12:26.474 Latency(us)
00:12:26.474 [2024-12-05T10:56:00.670Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:26.474 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:12:26.474 Nvme1n1 : 1.01 11989.76 46.83 0.00 0.00 10636.26 6023.07 15603.81
00:12:26.474 [2024-12-05T10:56:00.670Z] ===================================================================================================================
00:12:26.474 [2024-12-05T10:56:00.670Z] Total : 11989.76 46.83 0.00 0.00 10636.26 6023.07 15603.81
00:12:26.474 11781.00 IOPS, 46.02 MiB/s
[2024-12-05T10:56:00.670Z] 240104.00 IOPS, 937.91 MiB/s
00:12:26.474 Latency(us)
00:12:26.474 [2024-12-05T10:56:00.670Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:26.474 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:12:26.474 Nvme1n1 : 1.00 239741.84 936.49 0.00 0.00 530.99 224.30 1490.16
00:12:26.474 [2024-12-05T10:56:00.670Z] ===================================================================================================================
00:12:26.474 [2024-12-05T10:56:00.670Z] Total : 239741.84 936.49 0.00 0.00 530.99 224.30 1490.16
00:12:26.474
00:12:26.474 Latency(us)
00:12:26.474 [2024-12-05T10:56:00.670Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:26.474 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:12:26.474 Nvme1n1 : 1.01 11850.56 46.29 0.00 0.00 10769.18 4525.10 18599.74
00:12:26.474 [2024-12-05T10:56:00.670Z] ===================================================================================================================
00:12:26.474 [2024-12-05T10:56:00.670Z] Total : 11850.56 46.29 0.00 0.00 10769.18 4525.10 18599.74
00:12:26.474 10244.00 IOPS, 40.02 MiB/s
00:12:26.474 Latency(us)
00:12:26.474 [2024-12-05T10:56:00.670Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:26.474 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:12:26.474 Nvme1n1 : 1.01 10324.02 40.33 0.00 0.00 12363.59 3947.76 23967.45
00:12:26.474 [2024-12-05T10:56:00.670Z] ===================================================================================================================
00:12:26.474 [2024-12-05T10:56:00.670Z] Total : 10324.02 40.33 0.00 0.00 12363.59 3947.76 23967.45
00:12:26.474 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 4143449
00:12:26.474 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 4143452
00:12:26.474 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 4143456
00:12:26.474 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:26.474 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:26.474 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:12:26.474 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:26.474 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:12:26.474 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:12:26.474 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # nvmfcleanup
00:12:26.474 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@99 -- # sync
00:12:26.474 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:12:26.474 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # set +e
00:12:26.734 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@103 -- # for i in {1..20} 00:12:26.734 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:12:26.734 rmmod nvme_tcp 00:12:26.734 rmmod nvme_fabrics 00:12:26.734 rmmod nvme_keyring 00:12:26.734 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:12:26.734 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # set -e 00:12:26.734 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # return 0 00:12:26.734 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # '[' -n 4143307 ']' 00:12:26.734 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@337 -- # killprocess 4143307 00:12:26.734 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 4143307 ']' 00:12:26.734 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 4143307 00:12:26.734 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:12:26.734 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:26.734 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4143307 00:12:26.734 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:26.734 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:26.734 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4143307' 00:12:26.734 killing process with pid 4143307 00:12:26.734 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 4143307 00:12:26.734 11:56:00 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 4143307 00:12:26.734 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:12:26.734 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # nvmf_fini 00:12:26.734 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@264 -- # local dev 00:12:26.734 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@267 -- # remove_target_ns 00:12:26.734 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:26.734 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:26.734 11:56:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:29.274 11:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@268 -- # delete_main_bridge 00:12:29.274 11:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:12:29.274 11:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@130 -- # return 0 00:12:29.274 11:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:29.274 11:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:12:29.274 11:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:12:29.274 11:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:12:29.274 11:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:12:29.274 11:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:12:29.274 11:56:02 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:12:29.274 11:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:12:29.274 11:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:29.274 11:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:12:29.274 11:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:12:29.274 11:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:12:29.274 11:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:12:29.274 11:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:12:29.274 11:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:12:29.274 11:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:12:29.274 11:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:12:29.274 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # _dev=0 00:12:29.274 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # dev_map=() 00:12:29.274 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@284 -- # iptr 00:12:29.274 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@542 -- # iptables-save 00:12:29.274 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:12:29.274 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@542 -- # iptables-restore 00:12:29.274 00:12:29.274 real 0m10.916s 00:12:29.274 user 0m16.005s 00:12:29.274 sys 0m6.311s 00:12:29.274 
11:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:29.274 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:29.274 ************************************ 00:12:29.274 END TEST nvmf_bdev_io_wait 00:12:29.274 ************************************ 00:12:29.274 11:56:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@25 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:29.274 11:56:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:29.274 11:56:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:29.274 11:56:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:29.274 ************************************ 00:12:29.274 START TEST nvmf_queue_depth 00:12:29.274 ************************************ 00:12:29.274 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:29.274 * Looking for test storage... 
00:12:29.274 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:12:29.275 
11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:29.275 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:12:29.275 --rc genhtml_branch_coverage=1 00:12:29.275 --rc genhtml_function_coverage=1 00:12:29.275 --rc genhtml_legend=1 00:12:29.275 --rc geninfo_all_blocks=1 00:12:29.275 --rc geninfo_unexecuted_blocks=1 00:12:29.275 00:12:29.275 ' 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:29.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.275 --rc genhtml_branch_coverage=1 00:12:29.275 --rc genhtml_function_coverage=1 00:12:29.275 --rc genhtml_legend=1 00:12:29.275 --rc geninfo_all_blocks=1 00:12:29.275 --rc geninfo_unexecuted_blocks=1 00:12:29.275 00:12:29.275 ' 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:29.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.275 --rc genhtml_branch_coverage=1 00:12:29.275 --rc genhtml_function_coverage=1 00:12:29.275 --rc genhtml_legend=1 00:12:29.275 --rc geninfo_all_blocks=1 00:12:29.275 --rc geninfo_unexecuted_blocks=1 00:12:29.275 00:12:29.275 ' 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:29.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.275 --rc genhtml_branch_coverage=1 00:12:29.275 --rc genhtml_function_coverage=1 00:12:29.275 --rc genhtml_legend=1 00:12:29.275 --rc geninfo_all_blocks=1 00:12:29.275 --rc geninfo_unexecuted_blocks=1 00:12:29.275 00:12:29.275 ' 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:29.275 11:56:03 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:29.275 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:12:29.276 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@50 -- # : 0 00:12:29.276 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:12:29.276 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:12:29.276 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:12:29.276 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:29.276 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:29.276 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:12:29.276 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:12:29.276 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:12:29.276 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:12:29.276 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@54 -- # have_pci_nics=0 00:12:29.276 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:29.276 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:29.276 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:29.276 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:29.276 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:12:29.276 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:29.276 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # prepare_net_devs 
00:12:29.276 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # local -g is_hw=no 00:12:29.276 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # remove_target_ns 00:12:29.276 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:29.276 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:29.276 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:29.276 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:12:29.276 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:12:29.276 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # xtrace_disable 00:12:29.276 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:35.843 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:35.843 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@131 -- # pci_devs=() 00:12:35.843 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@131 -- # local -a pci_devs 00:12:35.843 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@132 -- # pci_net_devs=() 00:12:35.843 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:12:35.843 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@133 -- # pci_drivers=() 00:12:35.843 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@133 -- # local -A pci_drivers 00:12:35.843 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@135 -- # net_devs=() 00:12:35.843 11:56:08 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@135 -- # local -ga net_devs 00:12:35.843 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@136 -- # e810=() 00:12:35.843 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@136 -- # local -ga e810 00:12:35.843 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@137 -- # x722=() 00:12:35.843 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@137 -- # local -ga x722 00:12:35.843 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@138 -- # mlx=() 00:12:35.843 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@138 -- # local -ga mlx 00:12:35.843 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:35.843 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:35.843 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:35.843 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:35.843 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:35.843 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:35.843 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:35.843 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:35.843 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:35.843 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:35.844 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # for pci in 
"${pci_devs[@]}" 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:35.844 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # [[ up == up ]] 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:35.844 11:56:08 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:35.844 Found net devices under 0000:86:00.0: cvl_0_0 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # [[ up == up ]] 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:35.844 Found net devices under 0000:86:00.1: cvl_0_1 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # is_hw=yes 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # [[ 
tcp == tcp ]] 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@257 -- # create_target_ns 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@27 -- # local -gA 
dev_map 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@28 -- # local -g _dev 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:12:35.844 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:12:35.844 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:35.844 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:12:35.844 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@44 -- # ips=() 00:12:35.844 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:12:35.844 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:12:35.844 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:12:35.844 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:12:35.844 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:12:35.844 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:12:35.844 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:12:35.844 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:12:35.844 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:12:35.844 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:12:35.844 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 
00:12:35.844 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:12:35.844 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:12:35.844 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:12:35.844 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:12:35.844 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:12:35.844 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:12:35.844 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:12:35.844 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:35.844 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:12:35.844 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772161 00:12:35.844 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:12:35.844 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:12:35.844 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:12:35.844 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:12:35.844 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:12:35.844 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:12:35.844 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@210 -- # tee 
/sys/class/net/cvl_0_0/ifalias 00:12:35.844 10.0.0.1 00:12:35.844 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:12:35.844 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:12:35.844 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:35.844 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:35.844 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:12:35.844 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772162 00:12:35.844 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:12:35.844 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:12:35.845 10.0.0.2 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # local 
dev=cvl_0_0 in_ns= 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:12:35.845 11:56:09 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@38 -- # ping_ips 1 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=initiator0 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:12:35.845 
11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:12:35.845 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:35.845 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:12:35.845 00:12:35.845 --- 10.0.0.1 ping statistics --- 00:12:35.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.845 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev target0 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=target0 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:12:35.845 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:35.845 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:12:35.845 00:12:35.845 --- 10.0.0.2 ping statistics --- 00:12:35.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.845 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # (( pair++ )) 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # return 0 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@331 -- # 
NVMF_TARGET_INTERFACE=cvl_0_1 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=initiator0 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:12:35.845 11:56:09 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:12:35.845 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=initiator1 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # return 1 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev= 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@169 -- # return 0 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev target0 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=target0 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:12:35.846 11:56:09 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev target1 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=target1 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # return 1 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev= 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@169 -- # return 0 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 
-- # modprobe nvme-tcp 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # nvmfpid=4147367 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # waitforlisten 4147367 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 4147367 ']' 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:35.846 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:35.846 [2024-12-05 11:56:09.461685] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:12:35.846 [2024-12-05 11:56:09.461735] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:35.846 [2024-12-05 11:56:09.545079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.846 [2024-12-05 11:56:09.583941] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:35.846 [2024-12-05 11:56:09.583979] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:35.846 [2024-12-05 11:56:09.583985] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:35.846 [2024-12-05 11:56:09.583991] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:35.846 [2024-12-05 11:56:09.583996] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:35.846 [2024-12-05 11:56:09.584593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:36.412 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:36.412 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:12:36.412 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:12:36.412 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:36.412 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:36.412 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:36.412 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:36.412 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.412 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:36.412 [2024-12-05 11:56:10.353543] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:36.412 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.412 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:36.412 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.412 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:36.412 Malloc0 00:12:36.412 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.412 11:56:10 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:36.412 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.412 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:36.412 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.412 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:36.412 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.412 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:36.412 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.412 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:36.412 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.412 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:36.412 [2024-12-05 11:56:10.403912] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.412 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.412 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=4147609 00:12:36.412 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 
1024 -o 4096 -w verify -t 10 00:12:36.412 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:36.412 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 4147609 /var/tmp/bdevperf.sock 00:12:36.412 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 4147609 ']' 00:12:36.412 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:36.412 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:36.412 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:36.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:36.412 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:36.412 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:36.412 [2024-12-05 11:56:10.455582] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:12:36.412 [2024-12-05 11:56:10.455628] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4147609 ] 00:12:36.412 [2024-12-05 11:56:10.529560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.412 [2024-12-05 11:56:10.571988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.671 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:36.671 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:12:36.671 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:36.671 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.671 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:36.671 NVMe0n1 00:12:36.672 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.672 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:36.672 Running I/O for 10 seconds... 
00:12:38.984 12034.00 IOPS, 47.01 MiB/s [2024-12-05T10:56:14.118Z] 12278.00 IOPS, 47.96 MiB/s [2024-12-05T10:56:15.150Z] 12318.67 IOPS, 48.12 MiB/s [2024-12-05T10:56:16.093Z] 12450.25 IOPS, 48.63 MiB/s [2024-12-05T10:56:17.029Z] 12488.80 IOPS, 48.78 MiB/s [2024-12-05T10:56:17.964Z] 12522.50 IOPS, 48.92 MiB/s [2024-12-05T10:56:18.899Z] 12562.43 IOPS, 49.07 MiB/s [2024-12-05T10:56:20.273Z] 12536.12 IOPS, 48.97 MiB/s [2024-12-05T10:56:21.208Z] 12571.56 IOPS, 49.11 MiB/s [2024-12-05T10:56:21.208Z] 12566.50 IOPS, 49.09 MiB/s 00:12:47.012 Latency(us) 00:12:47.012 [2024-12-05T10:56:21.208Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:47.012 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:12:47.012 Verification LBA range: start 0x0 length 0x4000 00:12:47.012 NVMe0n1 : 10.06 12587.26 49.17 0.00 0.00 81093.03 18849.40 51679.82 00:12:47.012 [2024-12-05T10:56:21.208Z] =================================================================================================================== 00:12:47.012 [2024-12-05T10:56:21.208Z] Total : 12587.26 49.17 0.00 0.00 81093.03 18849.40 51679.82 00:12:47.012 { 00:12:47.012 "results": [ 00:12:47.012 { 00:12:47.012 "job": "NVMe0n1", 00:12:47.012 "core_mask": "0x1", 00:12:47.012 "workload": "verify", 00:12:47.012 "status": "finished", 00:12:47.012 "verify_range": { 00:12:47.012 "start": 0, 00:12:47.012 "length": 16384 00:12:47.012 }, 00:12:47.012 "queue_depth": 1024, 00:12:47.012 "io_size": 4096, 00:12:47.012 "runtime": 10.062156, 00:12:47.012 "iops": 12587.262610518064, 00:12:47.012 "mibps": 49.16899457233619, 00:12:47.012 "io_failed": 0, 00:12:47.012 "io_timeout": 0, 00:12:47.012 "avg_latency_us": 81093.02680067901, 00:12:47.012 "min_latency_us": 18849.401904761904, 00:12:47.012 "max_latency_us": 51679.817142857144 00:12:47.012 } 00:12:47.012 ], 00:12:47.012 "core_count": 1 00:12:47.012 } 00:12:47.012 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 
-- # killprocess 4147609 00:12:47.012 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 4147609 ']' 00:12:47.012 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 4147609 00:12:47.012 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:12:47.012 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:47.012 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4147609 00:12:47.012 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:47.012 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:47.012 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4147609' 00:12:47.012 killing process with pid 4147609 00:12:47.012 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 4147609 00:12:47.012 Received shutdown signal, test time was about 10.000000 seconds 00:12:47.012 00:12:47.012 Latency(us) 00:12:47.012 [2024-12-05T10:56:21.208Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:47.012 [2024-12-05T10:56:21.208Z] =================================================================================================================== 00:12:47.012 [2024-12-05T10:56:21.208Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:47.012 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 4147609 00:12:47.012 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:47.012 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:12:47.012 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # nvmfcleanup 00:12:47.012 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@99 -- # sync 00:12:47.012 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:12:47.012 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # set +e 00:12:47.012 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # for i in {1..20} 00:12:47.012 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:12:47.012 rmmod nvme_tcp 00:12:47.012 rmmod nvme_fabrics 00:12:47.012 rmmod nvme_keyring 00:12:47.012 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:12:47.012 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # set -e 00:12:47.012 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # return 0 00:12:47.012 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # '[' -n 4147367 ']' 00:12:47.012 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@337 -- # killprocess 4147367 00:12:47.012 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 4147367 ']' 00:12:47.012 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 4147367 00:12:47.012 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:12:47.271 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:47.271 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4147367 00:12:47.271 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:12:47.271 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:47.271 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4147367' 00:12:47.271 killing process with pid 4147367 00:12:47.271 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 4147367 00:12:47.271 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 4147367 00:12:47.271 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:12:47.271 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # nvmf_fini 00:12:47.271 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@264 -- # local dev 00:12:47.271 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@267 -- # remove_target_ns 00:12:47.271 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:47.271 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:47.271 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:49.805 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@268 -- # delete_main_bridge 00:12:49.805 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:12:49.805 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@130 -- # return 0 00:12:49.805 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:49.805 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:12:49.805 11:56:23 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:12:49.805 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:12:49.805 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:12:49.805 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:12:49.805 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:12:49.805 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:12:49.805 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:49.805 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:12:49.805 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:12:49.805 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:12:49.805 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:12:49.805 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:12:49.805 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@41 -- # _dev=0 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@41 -- # dev_map=() 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@284 -- # iptr 00:12:49.806 
11:56:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@542 -- # iptables-save 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@542 -- # iptables-restore 00:12:49.806 00:12:49.806 real 0m20.442s 00:12:49.806 user 0m23.857s 00:12:49.806 sys 0m6.120s 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:49.806 ************************************ 00:12:49.806 END TEST nvmf_queue_depth 00:12:49.806 ************************************ 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:49.806 ************************************ 00:12:49.806 START TEST nvmf_nmic 00:12:49.806 ************************************ 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:49.806 * Looking for test storage... 
00:12:49.806 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:49.806 11:56:23 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:49.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.806 --rc genhtml_branch_coverage=1 00:12:49.806 --rc genhtml_function_coverage=1 00:12:49.806 --rc genhtml_legend=1 00:12:49.806 --rc geninfo_all_blocks=1 00:12:49.806 --rc geninfo_unexecuted_blocks=1 
00:12:49.806 00:12:49.806 ' 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:49.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.806 --rc genhtml_branch_coverage=1 00:12:49.806 --rc genhtml_function_coverage=1 00:12:49.806 --rc genhtml_legend=1 00:12:49.806 --rc geninfo_all_blocks=1 00:12:49.806 --rc geninfo_unexecuted_blocks=1 00:12:49.806 00:12:49.806 ' 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:49.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.806 --rc genhtml_branch_coverage=1 00:12:49.806 --rc genhtml_function_coverage=1 00:12:49.806 --rc genhtml_legend=1 00:12:49.806 --rc geninfo_all_blocks=1 00:12:49.806 --rc geninfo_unexecuted_blocks=1 00:12:49.806 00:12:49.806 ' 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:49.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.806 --rc genhtml_branch_coverage=1 00:12:49.806 --rc genhtml_function_coverage=1 00:12:49.806 --rc genhtml_legend=1 00:12:49.806 --rc geninfo_all_blocks=1 00:12:49.806 --rc geninfo_unexecuted_blocks=1 00:12:49.806 00:12:49.806 ' 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:49.806 11:56:23 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.806 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
paths/export.sh@5 -- # export PATH 00:12:49.807 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.807 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:12:49.807 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:12:49.807 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:49.807 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:12:49.807 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@50 -- # : 0 00:12:49.807 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:12:49.807 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:12:49.807 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:12:49.807 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:49.807 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:49.807 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:12:49.807 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:12:49.807 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:12:49.807 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:12:49.807 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@54 -- # have_pci_nics=0 00:12:49.807 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:49.807 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:49.807 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:12:49.807 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:12:49.807 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:49.807 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # prepare_net_devs 00:12:49.807 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # local -g is_hw=no 00:12:49.807 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # remove_target_ns 00:12:49.807 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:49.807 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:49.807 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:49.807 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:12:49.807 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:12:49.807 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # xtrace_disable 00:12:49.807 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@10 -- # set +x 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@131 -- # pci_devs=() 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@131 -- # local -a pci_devs 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@132 -- # pci_net_devs=() 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@133 -- # pci_drivers=() 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@133 -- # local -A pci_drivers 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@135 -- # net_devs=() 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@135 -- # local -ga net_devs 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@136 -- # e810=() 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@136 -- # local -ga e810 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@137 -- # x722=() 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@137 -- # local -ga x722 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@138 -- # mlx=() 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@138 -- # local -ga mlx 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:56.376 11:56:29 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 
0x159b)' 00:12:56.376 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:56.376 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # [[ up == up ]] 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:56.376 Found net devices under 0000:86:00.0: cvl_0_0 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # [[ up == up ]] 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:56.376 Found net devices under 0000:86:00.1: cvl_0_1 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:12:56.376 11:56:29 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # is_hw=yes 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@257 -- # create_target_ns 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set 
lo up 00:12:56.376 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@27 -- # local -gA dev_map 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@28 -- # local -g _dev 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@44 -- # ips=() 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772161 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@210 -- # 
echo 10.0.0.1 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:12:56.377 10.0.0.1 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772162 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:12:56.377 10.0.0.2 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # local 
dev=cvl_0_0 in_ns= 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:12:56.377 
11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@38 -- # ping_ips 1 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=initiator0 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 
00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:12:56.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:56.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.346 ms 00:12:56.377 00:12:56.377 --- 10.0.0.1 ping statistics --- 00:12:56.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.377 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:56.377 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev target0 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=target0 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # 
ip=10.0.0.2 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:12:56.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:56.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:12:56.378 00:12:56.378 --- 10.0.0.2 ping statistics --- 00:12:56.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.378 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # (( pair++ )) 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # return 0 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@334 -- 
# get_tcp_initiator_ip_address 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=initiator0 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:12:56.378 11:56:29 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=initiator1 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # return 1 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # dev= 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@169 -- # return 0 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev target0 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@107 
-- # local dev=target0 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev target1 00:12:56.378 11:56:29 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=target1 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # return 1 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # dev= 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@169 -- # return 0 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # nvmfpid=4153004 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # waitforlisten 4153004 00:12:56.378 11:56:29 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 4153004 ']' 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:56.378 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:56.378 [2024-12-05 11:56:29.979951] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:12:56.378 [2024-12-05 11:56:29.980003] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:56.378 [2024-12-05 11:56:30.062698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:56.378 [2024-12-05 11:56:30.108804] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:56.378 [2024-12-05 11:56:30.108844] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:56.378 [2024-12-05 11:56:30.108852] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:56.379 [2024-12-05 11:56:30.108858] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:56.379 [2024-12-05 11:56:30.108864] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:56.379 [2024-12-05 11:56:30.110407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:56.379 [2024-12-05 11:56:30.110516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:56.379 [2024-12-05 11:56:30.110626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.379 [2024-12-05 11:56:30.110627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:56.637 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:56.637 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:12:56.637 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:12:56.637 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:56.637 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:56.896 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:56.896 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:56.896 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.896 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:56.896 [2024-12-05 11:56:30.855033] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:56.896 
11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.896 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:56.896 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.896 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:56.896 Malloc0 00:12:56.896 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.896 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:56.896 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.896 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:56.896 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.896 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:56.896 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.896 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:56.896 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.896 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:56.896 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.896 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:56.896 [2024-12-05 11:56:30.915328] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:56.896 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.896 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:12:56.896 test case1: single bdev can't be used in multiple subsystems 00:12:56.896 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:56.896 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.896 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:56.896 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.896 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:56.896 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.896 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:56.896 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.896 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:12:56.896 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:12:56.896 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.896 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:56.896 [2024-12-05 11:56:30.943222] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:12:56.896 [2024-12-05 
11:56:30.943242] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:12:56.896 [2024-12-05 11:56:30.943250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.896 request: 00:12:56.896 { 00:12:56.896 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:56.896 "namespace": { 00:12:56.896 "bdev_name": "Malloc0", 00:12:56.896 "no_auto_visible": false, 00:12:56.896 "hide_metadata": false 00:12:56.896 }, 00:12:56.896 "method": "nvmf_subsystem_add_ns", 00:12:56.896 "req_id": 1 00:12:56.896 } 00:12:56.896 Got JSON-RPC error response 00:12:56.896 response: 00:12:56.896 { 00:12:56.896 "code": -32602, 00:12:56.896 "message": "Invalid parameters" 00:12:56.896 } 00:12:56.896 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:56.896 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:12:56.896 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:12:56.896 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:12:56.897 Adding namespace failed - expected result. 
00:12:56.897 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:12:56.897 test case2: host connect to nvmf target in multiple paths 00:12:56.897 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:12:56.897 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.897 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:56.897 [2024-12-05 11:56:30.955352] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:12:56.897 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.897 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:58.274 11:56:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:12:59.211 11:56:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:12:59.211 11:56:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:12:59.211 11:56:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:59.211 11:56:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:59.211 11:56:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:13:01.117 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:01.117 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:01.117 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:01.117 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:01.117 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:01.117 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:13:01.117 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:01.117 [global] 00:13:01.117 thread=1 00:13:01.117 invalidate=1 00:13:01.117 rw=write 00:13:01.117 time_based=1 00:13:01.117 runtime=1 00:13:01.117 ioengine=libaio 00:13:01.117 direct=1 00:13:01.117 bs=4096 00:13:01.117 iodepth=1 00:13:01.117 norandommap=0 00:13:01.117 numjobs=1 00:13:01.117 00:13:01.117 verify_dump=1 00:13:01.117 verify_backlog=512 00:13:01.117 verify_state_save=0 00:13:01.117 do_verify=1 00:13:01.117 verify=crc32c-intel 00:13:01.117 [job0] 00:13:01.117 filename=/dev/nvme0n1 00:13:01.375 Could not set queue depth (nvme0n1) 00:13:01.633 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:01.633 fio-3.35 00:13:01.633 Starting 1 thread 00:13:02.570 00:13:02.570 job0: (groupid=0, jobs=1): err= 0: pid=4154097: Thu Dec 5 11:56:36 2024 00:13:02.570 read: IOPS=22, BW=89.9KiB/s (92.1kB/s)(92.0KiB/1023msec) 00:13:02.570 slat (nsec): min=9878, max=25353, avg=21697.74, stdev=2741.31 00:13:02.570 clat (usec): min=40840, max=41079, avg=40957.89, stdev=69.44 00:13:02.570 lat (usec): min=40864, max=41101, 
avg=40979.59, stdev=69.85 00:13:02.570 clat percentiles (usec): 00:13:02.570 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:13:02.570 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:02.570 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:02.570 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:02.570 | 99.99th=[41157] 00:13:02.570 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:13:02.570 slat (nsec): min=10079, max=45143, avg=11124.26, stdev=1850.62 00:13:02.570 clat (usec): min=127, max=332, avg=143.50, stdev=12.83 00:13:02.570 lat (usec): min=138, max=377, avg=154.62, stdev=14.00 00:13:02.570 clat percentiles (usec): 00:13:02.570 | 1.00th=[ 131], 5.00th=[ 135], 10.00th=[ 135], 20.00th=[ 137], 00:13:02.570 | 30.00th=[ 139], 40.00th=[ 139], 50.00th=[ 141], 60.00th=[ 143], 00:13:02.570 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 155], 95.00th=[ 163], 00:13:02.570 | 99.00th=[ 184], 99.50th=[ 194], 99.90th=[ 334], 99.95th=[ 334], 00:13:02.570 | 99.99th=[ 334] 00:13:02.570 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:13:02.570 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:02.570 lat (usec) : 250=95.51%, 500=0.19% 00:13:02.570 lat (msec) : 50=4.30% 00:13:02.570 cpu : usr=0.68%, sys=0.49%, ctx=535, majf=0, minf=1 00:13:02.570 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:02.570 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:02.570 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:02.570 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:02.570 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:02.570 00:13:02.570 Run status group 0 (all jobs): 00:13:02.570 READ: bw=89.9KiB/s (92.1kB/s), 89.9KiB/s-89.9KiB/s (92.1kB/s-92.1kB/s), io=92.0KiB (94.2kB), 
run=1023-1023msec 00:13:02.570 WRITE: bw=2002KiB/s (2050kB/s), 2002KiB/s-2002KiB/s (2050kB/s-2050kB/s), io=2048KiB (2097kB), run=1023-1023msec 00:13:02.570 00:13:02.570 Disk stats (read/write): 00:13:02.570 nvme0n1: ios=69/512, merge=0/0, ticks=805/68, in_queue=873, util=91.78% 00:13:02.570 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:02.830 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:02.830 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:02.830 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:13:02.830 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:02.830 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:02.830 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:02.830 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:02.830 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:13:02.830 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:02.830 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:13:02.830 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # nvmfcleanup 00:13:02.830 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@99 -- # sync 00:13:02.830 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:13:02.830 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # set +e 00:13:02.830 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # 
for i in {1..20} 00:13:02.830 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:13:02.830 rmmod nvme_tcp 00:13:02.830 rmmod nvme_fabrics 00:13:02.830 rmmod nvme_keyring 00:13:02.830 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:13:02.830 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # set -e 00:13:02.830 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # return 0 00:13:02.830 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # '[' -n 4153004 ']' 00:13:02.830 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@337 -- # killprocess 4153004 00:13:02.830 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 4153004 ']' 00:13:02.830 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 4153004 00:13:02.830 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:13:02.830 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:02.830 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4153004 00:13:03.090 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:03.090 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:03.090 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4153004' 00:13:03.090 killing process with pid 4153004 00:13:03.090 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 4153004 00:13:03.090 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 4153004 00:13:03.090 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:13:03.090 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # nvmf_fini 00:13:03.090 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@264 -- # local dev 00:13:03.090 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@267 -- # remove_target_ns 00:13:03.090 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:03.090 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:03.090 11:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@268 -- # delete_main_bridge 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@130 -- # return 0 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@270 -- # for dev in 
"${dev_map[@]}" 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@41 -- # _dev=0 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@41 -- # dev_map=() 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@284 -- # iptr 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@542 -- # iptables-save 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@542 -- # iptables-restore 00:13:05.625 00:13:05.625 real 0m15.720s 00:13:05.625 user 0m35.757s 00:13:05.625 sys 0m5.348s 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:05.625 ************************************ 00:13:05.625 END TEST nvmf_nmic 00:13:05.625 ************************************ 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test 
nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:05.625 ************************************ 00:13:05.625 START TEST nvmf_fio_target 00:13:05.625 ************************************ 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:05.625 * Looking for test storage... 00:13:05.625 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # 
IFS=.-: 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:13:05.625 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:13:05.626 11:56:39 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:05.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:05.626 --rc genhtml_branch_coverage=1 00:13:05.626 --rc genhtml_function_coverage=1 00:13:05.626 --rc genhtml_legend=1 00:13:05.626 --rc geninfo_all_blocks=1 00:13:05.626 --rc geninfo_unexecuted_blocks=1 00:13:05.626 00:13:05.626 ' 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:05.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:05.626 --rc genhtml_branch_coverage=1 00:13:05.626 --rc genhtml_function_coverage=1 00:13:05.626 --rc genhtml_legend=1 00:13:05.626 --rc geninfo_all_blocks=1 00:13:05.626 --rc geninfo_unexecuted_blocks=1 00:13:05.626 00:13:05.626 ' 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:05.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:05.626 --rc genhtml_branch_coverage=1 00:13:05.626 --rc genhtml_function_coverage=1 00:13:05.626 --rc genhtml_legend=1 00:13:05.626 --rc geninfo_all_blocks=1 00:13:05.626 --rc geninfo_unexecuted_blocks=1 00:13:05.626 00:13:05.626 ' 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:13:05.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:05.626 --rc genhtml_branch_coverage=1 00:13:05.626 --rc genhtml_function_coverage=1 00:13:05.626 --rc genhtml_legend=1 00:13:05.626 --rc geninfo_all_blocks=1 00:13:05.626 --rc geninfo_unexecuted_blocks=1 00:13:05.626 00:13:05.626 ' 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@50 -- # : 0 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:13:05.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer 
expression expected 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # remove_target_ns 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 
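The trace just above records a genuine shell error from `common.sh` line 31: the test `'[' '' -eq 1 ']'` fails with `integer expression expected` because an empty string reached a numeric comparison. The sketch below is not a patch to SPDK's `common.sh` (and `flag` is a stand-in name, not the actual variable); it only illustrates the failure mode and the usual guard against it:

```shell
# Illustration of the failure logged above: an empty string hitting a
# numeric [ -eq ] test. "flag" is a hypothetical stand-in variable.
flag=''

# [ "$flag" -eq 1 ] would print "[: : integer expression expected".
# ${flag:-0} substitutes 0 when flag is unset OR empty, so the numeric
# test always sees an integer.
if [ "${flag:-0}" -eq 1 ]; then
  echo "enabled"
else
  echo "disabled"
fi
# prints "disabled"
```

The `:-` form (as opposed to `-`) is what makes this safe for empty-but-set variables, which is exactly the case the log shows.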
00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # xtrace_disable 00:13:05.626 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@131 -- # pci_devs=() 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@131 -- # local -a pci_devs 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@132 -- # pci_net_devs=() 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@133 -- # pci_drivers=() 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@133 -- # local -A pci_drivers 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@135 -- # net_devs=() 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@135 -- # local -ga net_devs 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@136 -- # e810=() 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@136 -- # local -ga e810 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@137 -- # x722=() 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@137 -- # local -ga x722 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@138 -- # mlx=() 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@138 -- # local -ga mlx 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:12.200 11:56:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # 
pci_devs=("${e810[@]}") 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:12.200 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:12.200 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:13:12.200 
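The `Found 0000:86:00.0 (0x8086 - 0x159b)` lines above come from matching each PCI function's vendor:device pair against the `e810`/`x722`/`mlx` ID arrays built earlier in the trace. A minimal sketch of that matching, using the IDs visible in the log (the helper name `classify_nic` is ours, not SPDK's):

```shell
# Sketch of the vendor:device classification the trace performs.
# IDs are copied from the e810/x722/mlx arrays in the log above;
# classify_nic is a hypothetical helper name.
intel=0x8086 mellanox=0x15b3

classify_nic() {
  case "$1:$2" in
    "$intel:0x1592"|"$intel:0x159b") echo e810 ;;     # Intel E810 (ice driver)
    "$intel:0x37d2")                 echo x722 ;;     # Intel X722
    "$mellanox:"*)                   echo mlx  ;;     # any Mellanox device
    *)                               echo unknown ;;
  esac
}

classify_nic 0x8086 0x159b   # e810 (the ice-driven ports in this run)
```

Both ports in this run (0000:86:00.0 and 0000:86:00.1) report `0x8086 - 0x159b`, which is why the E810 branch is taken and `pci_devs` is reset to the `e810` array.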
11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:12.200 Found net devices under 0000:86:00.0: cvl_0_0 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:13:12.200 11:56:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:12.200 Found net devices under 0000:86:00.1: cvl_0_1 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # is_hw=yes 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@257 -- # create_target_ns 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@148 -- 
# set_up lo NVMF_TARGET_NS_CMD 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:13:12.200 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@28 -- # local -g _dev 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@44 -- # ips=() 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@45 -- # local initiator=initiator0 
target=target0 _ns= 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772161 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:13:12.201 10.0.0.1 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772162 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip=10.0.0.2 
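The `set_ip` step above shows `val_to_ip` turning the pool value 167772161 into `10.0.0.1` via `printf '%u.%u.%u.%u\n' 10 0 0 1`. The real helper lives in `test/nvmf/setup.sh`; the sketch below is only a plausible reconstruction of the arithmetic visible in the trace, not the actual implementation:

```shell
# Reconstruction of the val_to_ip conversion traced above
# (167772161 == 0x0A000001 -> 10.0.0.1): split a 32-bit value into
# four octets, most significant first.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $((val >> 24 & 255)) $((val >> 16 & 255)) $((val >> 8 & 255)) $((val & 255))
}

val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2
```

This also explains the `ip_pool += 2` stride in the setup loop: each interface pair consumes two consecutive addresses, one for the initiator (`cvl_0_0`) and one for the target inside the namespace (`cvl_0_1`).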
00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:13:12.201 10.0.0.2 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:13:12.201 11:56:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@38 -- # ping_ips 1 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:13:12.201 11:56:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=initiator0 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:13:12.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:12.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:13:12.201 00:13:12.201 --- 10.0.0.1 ping statistics --- 00:13:12.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.201 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=target0 00:13:12.201 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:13:12.202 11:56:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:13:12.202 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:12.202 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:13:12.202 00:13:12.202 --- 10.0.0.2 ping statistics --- 00:13:12.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.202 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # (( pair++ )) 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # return 0 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=initiator0 00:13:12.202 11:56:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=initiator1 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n 
initiator1 ]] 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # return 1 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev= 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@169 -- # return 0 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=target0 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev target1 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=target1 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # return 1 00:13:12.202 11:56:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev= 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@169 -- # return 0 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # nvmfpid=4157890 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # waitforlisten 4157890 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 4157890 ']' 00:13:12.202 11:56:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:12.202 11:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.202 [2024-12-05 11:56:45.747063] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:13:12.202 [2024-12-05 11:56:45.747109] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:12.202 [2024-12-05 11:56:45.824109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:12.203 [2024-12-05 11:56:45.863690] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:12.203 [2024-12-05 11:56:45.863728] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:12.203 [2024-12-05 11:56:45.863735] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:12.203 [2024-12-05 11:56:45.863740] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:12.203 [2024-12-05 11:56:45.863745] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
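The `-m 0xF` core mask passed to `nvmf_tgt` above produces the "Total cores available: 4" notice and the four reactors started next. A hedged sketch of how such a bitmask expands into a core list (`mask_to_cores` is a hypothetical helper for illustration, not part of the SPDK scripts):

```shell
# Illustrative only: walk the mask bit by bit and collect the set bit
# positions, i.e. the CPU cores a mask like 0xF selects.
mask_to_cores() {
  local mask=$(( $1 )) core=0 cores=""
  while [ "$mask" -ne 0 ]; do
    if [ $(( mask & 1 )) -eq 1 ]; then
      cores="$cores$core "
    fi
    mask=$(( mask >> 1 ))
    core=$(( core + 1 ))
  done
  echo "${cores% }"
}

mask_to_cores 0xF   # 0 1 2 3
```

So `0xF` (binary 1111) selects cores 0 through 3, matching the four "Reactor started on core N" lines in the log.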
00:13:12.203 [2024-12-05 11:56:45.865234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:12.203 [2024-12-05 11:56:45.865344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:12.203 [2024-12-05 11:56:45.865437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.203 [2024-12-05 11:56:45.865437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:12.461 11:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:12.461 11:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:13:12.461 11:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:13:12.461 11:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:12.461 11:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.461 11:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:12.462 11:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:12.720 [2024-12-05 11:56:46.797782] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:12.720 11:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:12.979 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:12.979 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:13.239 11:56:47 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:13.239 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:13.498 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:13:13.498 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:13.498 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:13.498 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:13.757 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:14.016 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:14.016 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:14.275 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:14.275 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:14.535 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:14.535 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:13:14.535 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:14.818 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:14.818 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:15.077 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:15.077 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:15.336 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.336 [2024-12-05 11:56:49.479564] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.336 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:15.595 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:15.855 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:13:17.233 11:56:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:13:17.233 11:56:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:13:17.233 11:56:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:17.233 11:56:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:13:17.233 11:56:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:13:17.233 11:56:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:13:19.150 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:19.150 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:19.150 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:19.150 11:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:13:19.150 11:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:19.150 11:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:13:19.150 11:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:19.150 [global] 00:13:19.150 thread=1 00:13:19.150 invalidate=1 00:13:19.150 rw=write 00:13:19.150 time_based=1 00:13:19.150 runtime=1 00:13:19.150 ioengine=libaio 00:13:19.150 direct=1 00:13:19.150 bs=4096 00:13:19.150 iodepth=1 00:13:19.150 norandommap=0 00:13:19.150 numjobs=1 00:13:19.150 00:13:19.150 
verify_dump=1 00:13:19.150 verify_backlog=512 00:13:19.150 verify_state_save=0 00:13:19.150 do_verify=1 00:13:19.150 verify=crc32c-intel 00:13:19.150 [job0] 00:13:19.150 filename=/dev/nvme0n1 00:13:19.150 [job1] 00:13:19.150 filename=/dev/nvme0n2 00:13:19.150 [job2] 00:13:19.150 filename=/dev/nvme0n3 00:13:19.150 [job3] 00:13:19.150 filename=/dev/nvme0n4 00:13:19.150 Could not set queue depth (nvme0n1) 00:13:19.150 Could not set queue depth (nvme0n2) 00:13:19.150 Could not set queue depth (nvme0n3) 00:13:19.150 Could not set queue depth (nvme0n4) 00:13:19.408 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:19.408 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:19.408 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:19.408 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:19.408 fio-3.35 00:13:19.408 Starting 4 threads 00:13:20.782 00:13:20.782 job0: (groupid=0, jobs=1): err= 0: pid=4159311: Thu Dec 5 11:56:54 2024 00:13:20.782 read: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(9.92MiB/1001msec) 00:13:20.782 slat (nsec): min=7192, max=27153, avg=8067.40, stdev=968.46 00:13:20.782 clat (usec): min=161, max=385, avg=220.72, stdev=27.79 00:13:20.782 lat (usec): min=170, max=394, avg=228.79, stdev=27.80 00:13:20.782 clat percentiles (usec): 00:13:20.782 | 1.00th=[ 174], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 194], 00:13:20.782 | 30.00th=[ 200], 40.00th=[ 206], 50.00th=[ 217], 60.00th=[ 235], 00:13:20.782 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 258], 95.00th=[ 262], 00:13:20.782 | 99.00th=[ 273], 99.50th=[ 277], 99.90th=[ 289], 99.95th=[ 297], 00:13:20.782 | 99.99th=[ 388] 00:13:20.782 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:13:20.782 slat (nsec): min=10377, max=39789, avg=11558.25, stdev=1304.70 
00:13:20.782 clat (usec): min=113, max=277, avg=146.00, stdev=13.65 00:13:20.782 lat (usec): min=124, max=317, avg=157.56, stdev=13.87 00:13:20.782 clat percentiles (usec): 00:13:20.782 | 1.00th=[ 121], 5.00th=[ 126], 10.00th=[ 129], 20.00th=[ 135], 00:13:20.782 | 30.00th=[ 139], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 149], 00:13:20.782 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 163], 95.00th=[ 169], 00:13:20.782 | 99.00th=[ 182], 99.50th=[ 188], 99.90th=[ 196], 99.95th=[ 208], 00:13:20.782 | 99.99th=[ 277] 00:13:20.782 bw ( KiB/s): min=12288, max=12288, per=47.91%, avg=12288.00, stdev= 0.00, samples=1 00:13:20.782 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:13:20.782 lat (usec) : 250=90.31%, 500=9.69% 00:13:20.782 cpu : usr=3.30%, sys=4.80%, ctx=5102, majf=0, minf=1 00:13:20.782 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:20.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:20.782 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:20.782 issued rwts: total=2540,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:20.782 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:20.782 job1: (groupid=0, jobs=1): err= 0: pid=4159325: Thu Dec 5 11:56:54 2024 00:13:20.782 read: IOPS=2329, BW=9319KiB/s (9542kB/s)(9328KiB/1001msec) 00:13:20.782 slat (nsec): min=6183, max=28470, avg=6995.67, stdev=894.81 00:13:20.782 clat (usec): min=178, max=466, avg=237.48, stdev=23.45 00:13:20.782 lat (usec): min=185, max=473, avg=244.47, stdev=23.45 00:13:20.782 clat percentiles (usec): 00:13:20.782 | 1.00th=[ 190], 5.00th=[ 198], 10.00th=[ 206], 20.00th=[ 217], 00:13:20.782 | 30.00th=[ 227], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 245], 00:13:20.782 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 262], 95.00th=[ 265], 00:13:20.782 | 99.00th=[ 281], 99.50th=[ 306], 99.90th=[ 441], 99.95th=[ 449], 00:13:20.782 | 99.99th=[ 469] 00:13:20.782 write: IOPS=2557, 
BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:13:20.782 slat (nsec): min=9020, max=39057, avg=10082.77, stdev=1094.00 00:13:20.782 clat (usec): min=112, max=330, avg=154.10, stdev=23.32 00:13:20.782 lat (usec): min=122, max=369, avg=164.19, stdev=23.48 00:13:20.782 clat percentiles (usec): 00:13:20.782 | 1.00th=[ 121], 5.00th=[ 127], 10.00th=[ 131], 20.00th=[ 135], 00:13:20.782 | 30.00th=[ 141], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 155], 00:13:20.782 | 70.00th=[ 161], 80.00th=[ 169], 90.00th=[ 184], 95.00th=[ 198], 00:13:20.782 | 99.00th=[ 235], 99.50th=[ 245], 99.90th=[ 251], 99.95th=[ 258], 00:13:20.782 | 99.99th=[ 330] 00:13:20.782 bw ( KiB/s): min=11600, max=11600, per=45.23%, avg=11600.00, stdev= 0.00, samples=1 00:13:20.782 iops : min= 2900, max= 2900, avg=2900.00, stdev= 0.00, samples=1 00:13:20.782 lat (usec) : 250=85.94%, 500=14.06% 00:13:20.782 cpu : usr=2.50%, sys=4.20%, ctx=4892, majf=0, minf=2 00:13:20.782 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:20.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:20.782 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:20.782 issued rwts: total=2332,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:20.782 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:20.782 job2: (groupid=0, jobs=1): err= 0: pid=4159342: Thu Dec 5 11:56:54 2024 00:13:20.782 read: IOPS=510, BW=2042KiB/s (2091kB/s)(2120KiB/1038msec) 00:13:20.782 slat (nsec): min=6474, max=35226, avg=7681.07, stdev=2729.12 00:13:20.782 clat (usec): min=184, max=41315, avg=1597.54, stdev=7394.70 00:13:20.782 lat (usec): min=191, max=41325, avg=1605.22, stdev=7396.88 00:13:20.782 clat percentiles (usec): 00:13:20.782 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 196], 20.00th=[ 202], 00:13:20.782 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 210], 60.00th=[ 215], 00:13:20.782 | 70.00th=[ 219], 80.00th=[ 223], 90.00th=[ 233], 95.00th=[ 273], 
00:13:20.782 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:20.782 | 99.99th=[41157] 00:13:20.782 write: IOPS=986, BW=3946KiB/s (4041kB/s)(4096KiB/1038msec); 0 zone resets 00:13:20.782 slat (nsec): min=9288, max=50929, avg=10893.83, stdev=3244.79 00:13:20.782 clat (usec): min=135, max=299, avg=168.87, stdev=21.84 00:13:20.782 lat (usec): min=146, max=330, avg=179.76, stdev=22.89 00:13:20.782 clat percentiles (usec): 00:13:20.782 | 1.00th=[ 139], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 153], 00:13:20.782 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 167], 00:13:20.782 | 70.00th=[ 174], 80.00th=[ 180], 90.00th=[ 194], 95.00th=[ 223], 00:13:20.782 | 99.00th=[ 243], 99.50th=[ 251], 99.90th=[ 273], 99.95th=[ 302], 00:13:20.782 | 99.99th=[ 302] 00:13:20.782 bw ( KiB/s): min= 8192, max= 8192, per=31.94%, avg=8192.00, stdev= 0.00, samples=1 00:13:20.782 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:20.782 lat (usec) : 250=97.49%, 500=1.35% 00:13:20.782 lat (msec) : 50=1.16% 00:13:20.782 cpu : usr=0.87%, sys=1.25%, ctx=1555, majf=0, minf=1 00:13:20.782 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:20.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:20.782 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:20.782 issued rwts: total=530,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:20.782 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:20.782 job3: (groupid=0, jobs=1): err= 0: pid=4159348: Thu Dec 5 11:56:54 2024 00:13:20.782 read: IOPS=22, BW=88.6KiB/s (90.8kB/s)(92.0KiB/1038msec) 00:13:20.782 slat (nsec): min=9553, max=23965, avg=22750.48, stdev=2899.64 00:13:20.782 clat (usec): min=40885, max=41201, avg=40977.56, stdev=73.15 00:13:20.782 lat (usec): min=40909, max=41210, avg=41000.31, stdev=71.35 00:13:20.782 clat percentiles (usec): 00:13:20.782 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 
20.00th=[41157], 00:13:20.782 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:20.782 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:20.782 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:20.782 | 99.99th=[41157] 00:13:20.782 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:13:20.782 slat (nsec): min=9778, max=37870, avg=10935.78, stdev=1602.12 00:13:20.782 clat (usec): min=136, max=270, avg=170.62, stdev=17.65 00:13:20.782 lat (usec): min=147, max=308, avg=181.55, stdev=17.97 00:13:20.782 clat percentiles (usec): 00:13:20.782 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 157], 00:13:20.782 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:13:20.782 | 70.00th=[ 178], 80.00th=[ 184], 90.00th=[ 192], 95.00th=[ 204], 00:13:20.782 | 99.00th=[ 225], 99.50th=[ 235], 99.90th=[ 269], 99.95th=[ 269], 00:13:20.782 | 99.99th=[ 269] 00:13:20.782 bw ( KiB/s): min= 4096, max= 4096, per=15.97%, avg=4096.00, stdev= 0.00, samples=1 00:13:20.782 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:20.782 lat (usec) : 250=95.51%, 500=0.19% 00:13:20.782 lat (msec) : 50=4.30% 00:13:20.782 cpu : usr=0.10%, sys=0.68%, ctx=537, majf=0, minf=1 00:13:20.782 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:20.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:20.782 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:20.782 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:20.783 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:20.783 00:13:20.783 Run status group 0 (all jobs): 00:13:20.783 READ: bw=20.4MiB/s (21.4MB/s), 88.6KiB/s-9.91MiB/s (90.8kB/s-10.4MB/s), io=21.2MiB (22.2MB), run=1001-1038msec 00:13:20.783 WRITE: bw=25.0MiB/s (26.3MB/s), 1973KiB/s-9.99MiB/s (2020kB/s-10.5MB/s), io=26.0MiB (27.3MB), 
run=1001-1038msec 00:13:20.783 00:13:20.783 Disk stats (read/write): 00:13:20.783 nvme0n1: ios=2098/2435, merge=0/0, ticks=458/333, in_queue=791, util=87.27% 00:13:20.783 nvme0n2: ios=2093/2048, merge=0/0, ticks=541/313, in_queue=854, util=90.86% 00:13:20.783 nvme0n3: ios=582/1024, merge=0/0, ticks=702/171, in_queue=873, util=94.69% 00:13:20.783 nvme0n4: ios=41/512, merge=0/0, ticks=1640/89, in_queue=1729, util=94.12% 00:13:20.783 11:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:20.783 [global] 00:13:20.783 thread=1 00:13:20.783 invalidate=1 00:13:20.783 rw=randwrite 00:13:20.783 time_based=1 00:13:20.783 runtime=1 00:13:20.783 ioengine=libaio 00:13:20.783 direct=1 00:13:20.783 bs=4096 00:13:20.783 iodepth=1 00:13:20.783 norandommap=0 00:13:20.783 numjobs=1 00:13:20.783 00:13:20.783 verify_dump=1 00:13:20.783 verify_backlog=512 00:13:20.783 verify_state_save=0 00:13:20.783 do_verify=1 00:13:20.783 verify=crc32c-intel 00:13:20.783 [job0] 00:13:20.783 filename=/dev/nvme0n1 00:13:20.783 [job1] 00:13:20.783 filename=/dev/nvme0n2 00:13:20.783 [job2] 00:13:20.783 filename=/dev/nvme0n3 00:13:20.783 [job3] 00:13:20.783 filename=/dev/nvme0n4 00:13:20.783 Could not set queue depth (nvme0n1) 00:13:20.783 Could not set queue depth (nvme0n2) 00:13:20.783 Could not set queue depth (nvme0n3) 00:13:20.783 Could not set queue depth (nvme0n4) 00:13:20.783 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:20.783 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:20.783 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:20.783 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:20.783 fio-3.35 
00:13:20.783 Starting 4 threads 00:13:22.175 00:13:22.175 job0: (groupid=0, jobs=1): err= 0: pid=4159797: Thu Dec 5 11:56:56 2024 00:13:22.175 read: IOPS=527, BW=2110KiB/s (2161kB/s)(2112KiB/1001msec) 00:13:22.175 slat (nsec): min=6257, max=26602, avg=7329.37, stdev=1492.79 00:13:22.175 clat (usec): min=190, max=41069, avg=1497.21, stdev=6987.46 00:13:22.175 lat (usec): min=197, max=41081, avg=1504.54, stdev=6988.41 00:13:22.175 clat percentiles (usec): 00:13:22.175 | 1.00th=[ 200], 5.00th=[ 223], 10.00th=[ 231], 20.00th=[ 239], 00:13:22.175 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 260], 00:13:22.175 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 318], 95.00th=[ 449], 00:13:22.175 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:22.175 | 99.99th=[41157] 00:13:22.175 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:13:22.175 slat (nsec): min=8906, max=61905, avg=10421.48, stdev=2413.24 00:13:22.175 clat (usec): min=122, max=404, avg=187.83, stdev=37.36 00:13:22.175 lat (usec): min=131, max=466, avg=198.26, stdev=37.67 00:13:22.175 clat percentiles (usec): 00:13:22.175 | 1.00th=[ 128], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 151], 00:13:22.175 | 30.00th=[ 165], 40.00th=[ 174], 50.00th=[ 182], 60.00th=[ 200], 00:13:22.175 | 70.00th=[ 215], 80.00th=[ 223], 90.00th=[ 235], 95.00th=[ 247], 00:13:22.175 | 99.00th=[ 269], 99.50th=[ 306], 99.90th=[ 371], 99.95th=[ 404], 00:13:22.175 | 99.99th=[ 404] 00:13:22.175 bw ( KiB/s): min= 8192, max= 8192, per=28.95%, avg=8192.00, stdev= 0.00, samples=1 00:13:22.175 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:22.175 lat (usec) : 250=77.84%, 500=21.07%, 750=0.06% 00:13:22.175 lat (msec) : 50=1.03% 00:13:22.175 cpu : usr=0.70%, sys=1.50%, ctx=1553, majf=0, minf=1 00:13:22.175 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:22.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:13:22.175 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:22.175 issued rwts: total=528,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:22.175 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:22.175 job1: (groupid=0, jobs=1): err= 0: pid=4159821: Thu Dec 5 11:56:56 2024 00:13:22.175 read: IOPS=1486, BW=5944KiB/s (6087kB/s)(6164KiB/1037msec) 00:13:22.175 slat (nsec): min=6619, max=36168, avg=7483.25, stdev=1273.18 00:13:22.175 clat (usec): min=182, max=41571, avg=430.45, stdev=2749.85 00:13:22.175 lat (usec): min=189, max=41579, avg=437.94, stdev=2750.20 00:13:22.175 clat percentiles (usec): 00:13:22.175 | 1.00th=[ 192], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 208], 00:13:22.175 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 227], 00:13:22.175 | 70.00th=[ 237], 80.00th=[ 258], 90.00th=[ 293], 95.00th=[ 388], 00:13:22.175 | 99.00th=[ 519], 99.50th=[ 6783], 99.90th=[41157], 99.95th=[41681], 00:13:22.175 | 99.99th=[41681] 00:13:22.175 write: IOPS=1974, BW=7900KiB/s (8089kB/s)(8192KiB/1037msec); 0 zone resets 00:13:22.175 slat (nsec): min=9196, max=31169, avg=10376.88, stdev=1181.09 00:13:22.175 clat (usec): min=119, max=284, avg=162.66, stdev=29.21 00:13:22.175 lat (usec): min=130, max=311, avg=173.04, stdev=29.42 00:13:22.175 clat percentiles (usec): 00:13:22.176 | 1.00th=[ 130], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 143], 00:13:22.176 | 30.00th=[ 145], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 157], 00:13:22.176 | 70.00th=[ 165], 80.00th=[ 178], 90.00th=[ 217], 95.00th=[ 227], 00:13:22.176 | 99.00th=[ 243], 99.50th=[ 247], 99.90th=[ 281], 99.95th=[ 281], 00:13:22.176 | 99.99th=[ 285] 00:13:22.176 bw ( KiB/s): min= 5376, max=11008, per=28.95%, avg=8192.00, stdev=3982.43, samples=2 00:13:22.176 iops : min= 1344, max= 2752, avg=2048.00, stdev=995.61, samples=2 00:13:22.176 lat (usec) : 250=90.08%, 500=9.25%, 750=0.45% 00:13:22.176 lat (msec) : 10=0.03%, 50=0.20% 00:13:22.176 cpu : usr=1.74%, sys=3.28%, 
ctx=3590, majf=0, minf=1 00:13:22.176 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:22.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:22.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:22.176 issued rwts: total=1541,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:22.176 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:22.176 job2: (groupid=0, jobs=1): err= 0: pid=4159844: Thu Dec 5 11:56:56 2024 00:13:22.176 read: IOPS=1714, BW=6857KiB/s (7022kB/s)(6864KiB/1001msec) 00:13:22.176 slat (nsec): min=3711, max=14907, avg=5582.98, stdev=1477.05 00:13:22.176 clat (usec): min=203, max=555, avg=328.07, stdev=72.94 00:13:22.176 lat (usec): min=210, max=560, avg=333.65, stdev=73.26 00:13:22.176 clat percentiles (usec): 00:13:22.176 | 1.00th=[ 235], 5.00th=[ 249], 10.00th=[ 260], 20.00th=[ 273], 00:13:22.176 | 30.00th=[ 281], 40.00th=[ 293], 50.00th=[ 306], 60.00th=[ 322], 00:13:22.176 | 70.00th=[ 347], 80.00th=[ 375], 90.00th=[ 461], 95.00th=[ 498], 00:13:22.176 | 99.00th=[ 519], 99.50th=[ 523], 99.90th=[ 553], 99.95th=[ 553], 00:13:22.176 | 99.99th=[ 553] 00:13:22.176 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:13:22.176 slat (nsec): min=4674, max=37055, avg=7849.05, stdev=2933.49 00:13:22.176 clat (usec): min=123, max=365, avg=198.40, stdev=40.88 00:13:22.176 lat (usec): min=128, max=390, avg=206.25, stdev=42.63 00:13:22.176 clat percentiles (usec): 00:13:22.176 | 1.00th=[ 137], 5.00th=[ 147], 10.00th=[ 153], 20.00th=[ 161], 00:13:22.176 | 30.00th=[ 172], 40.00th=[ 182], 50.00th=[ 192], 60.00th=[ 204], 00:13:22.176 | 70.00th=[ 217], 80.00th=[ 233], 90.00th=[ 260], 95.00th=[ 277], 00:13:22.176 | 99.00th=[ 306], 99.50th=[ 326], 99.90th=[ 359], 99.95th=[ 363], 00:13:22.176 | 99.99th=[ 367] 00:13:22.176 bw ( KiB/s): min= 8192, max= 8192, per=28.95%, avg=8192.00, stdev= 0.00, samples=1 00:13:22.176 iops : min= 2048, max= 2048, 
avg=2048.00, stdev= 0.00, samples=1 00:13:22.176 lat (usec) : 250=49.60%, 500=48.38%, 750=2.02% 00:13:22.176 cpu : usr=0.80%, sys=3.00%, ctx=3767, majf=0, minf=1 00:13:22.176 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:22.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:22.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:22.176 issued rwts: total=1716,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:22.176 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:22.176 job3: (groupid=0, jobs=1): err= 0: pid=4159845: Thu Dec 5 11:56:56 2024 00:13:22.176 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:13:22.176 slat (nsec): min=7539, max=43974, avg=8782.26, stdev=1395.72 00:13:22.176 clat (usec): min=186, max=1385, avg=269.70, stdev=72.61 00:13:22.176 lat (usec): min=194, max=1395, avg=278.49, stdev=72.76 00:13:22.176 clat percentiles (usec): 00:13:22.176 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 217], 00:13:22.176 | 30.00th=[ 225], 40.00th=[ 233], 50.00th=[ 247], 60.00th=[ 265], 00:13:22.176 | 70.00th=[ 289], 80.00th=[ 326], 90.00th=[ 347], 95.00th=[ 379], 00:13:22.176 | 99.00th=[ 506], 99.50th=[ 519], 99.90th=[ 914], 99.95th=[ 971], 00:13:22.176 | 99.99th=[ 1385] 00:13:22.176 write: IOPS=2213, BW=8855KiB/s (9068kB/s)(8864KiB/1001msec); 0 zone resets 00:13:22.176 slat (nsec): min=10129, max=47310, avg=11723.64, stdev=1935.31 00:13:22.176 clat (usec): min=122, max=444, avg=176.40, stdev=25.27 00:13:22.176 lat (usec): min=133, max=455, avg=188.12, stdev=25.53 00:13:22.176 clat percentiles (usec): 00:13:22.176 | 1.00th=[ 133], 5.00th=[ 145], 10.00th=[ 153], 20.00th=[ 159], 00:13:22.176 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:13:22.176 | 70.00th=[ 182], 80.00th=[ 192], 90.00th=[ 212], 95.00th=[ 231], 00:13:22.176 | 99.00th=[ 249], 99.50th=[ 258], 99.90th=[ 297], 99.95th=[ 359], 00:13:22.176 | 99.99th=[ 445] 
00:13:22.176 bw ( KiB/s): min=10896, max=10896, per=38.51%, avg=10896.00, stdev= 0.00, samples=1 00:13:22.176 iops : min= 2724, max= 2724, avg=2724.00, stdev= 0.00, samples=1 00:13:22.176 lat (usec) : 250=76.50%, 500=22.87%, 750=0.56%, 1000=0.05% 00:13:22.176 lat (msec) : 2=0.02% 00:13:22.176 cpu : usr=3.20%, sys=7.40%, ctx=4265, majf=0, minf=1 00:13:22.176 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:22.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:22.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:22.176 issued rwts: total=2048,2216,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:22.176 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:22.176 00:13:22.176 Run status group 0 (all jobs): 00:13:22.176 READ: bw=22.0MiB/s (23.0MB/s), 2110KiB/s-8184KiB/s (2161kB/s-8380kB/s), io=22.8MiB (23.9MB), run=1001-1037msec 00:13:22.176 WRITE: bw=27.6MiB/s (29.0MB/s), 4092KiB/s-8855KiB/s (4190kB/s-9068kB/s), io=28.7MiB (30.0MB), run=1001-1037msec 00:13:22.176 00:13:22.176 Disk stats (read/write): 00:13:22.176 nvme0n1: ios=572/1024, merge=0/0, ticks=598/183, in_queue=781, util=81.96% 00:13:22.176 nvme0n2: ios=1271/1536, merge=0/0, ticks=1418/247, in_queue=1665, util=89.09% 00:13:22.176 nvme0n3: ios=1434/1536, merge=0/0, ticks=1363/311, in_queue=1674, util=93.16% 00:13:22.176 nvme0n4: ios=1617/2048, merge=0/0, ticks=446/343, in_queue=789, util=94.14% 00:13:22.176 11:56:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:22.176 [global] 00:13:22.176 thread=1 00:13:22.176 invalidate=1 00:13:22.176 rw=write 00:13:22.176 time_based=1 00:13:22.176 runtime=1 00:13:22.176 ioengine=libaio 00:13:22.176 direct=1 00:13:22.176 bs=4096 00:13:22.176 iodepth=128 00:13:22.176 norandommap=0 00:13:22.176 numjobs=1 00:13:22.176 00:13:22.176 verify_dump=1 
00:13:22.176 verify_backlog=512 00:13:22.176 verify_state_save=0 00:13:22.176 do_verify=1 00:13:22.176 verify=crc32c-intel 00:13:22.176 [job0] 00:13:22.176 filename=/dev/nvme0n1 00:13:22.176 [job1] 00:13:22.176 filename=/dev/nvme0n2 00:13:22.176 [job2] 00:13:22.176 filename=/dev/nvme0n3 00:13:22.176 [job3] 00:13:22.176 filename=/dev/nvme0n4 00:13:22.176 Could not set queue depth (nvme0n1) 00:13:22.176 Could not set queue depth (nvme0n2) 00:13:22.176 Could not set queue depth (nvme0n3) 00:13:22.176 Could not set queue depth (nvme0n4) 00:13:22.432 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:22.432 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:22.432 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:22.432 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:22.432 fio-3.35 00:13:22.432 Starting 4 threads 00:13:23.803 00:13:23.803 job0: (groupid=0, jobs=1): err= 0: pid=4160212: Thu Dec 5 11:56:57 2024 00:13:23.803 read: IOPS=2522, BW=9.85MiB/s (10.3MB/s)(10.0MiB/1015msec) 00:13:23.803 slat (nsec): min=1109, max=14300k, avg=123202.52, stdev=858268.69 00:13:23.803 clat (usec): min=4298, max=39887, avg=13948.11, stdev=5832.48 00:13:23.803 lat (usec): min=4303, max=39896, avg=14071.31, stdev=5915.38 00:13:23.803 clat percentiles (usec): 00:13:23.803 | 1.00th=[ 7111], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9634], 00:13:23.803 | 30.00th=[10290], 40.00th=[10552], 50.00th=[11338], 60.00th=[11731], 00:13:23.803 | 70.00th=[16057], 80.00th=[18744], 90.00th=[22414], 95.00th=[25297], 00:13:23.803 | 99.00th=[31327], 99.50th=[33424], 99.90th=[40109], 99.95th=[40109], 00:13:23.803 | 99.99th=[40109] 00:13:23.803 write: IOPS=2803, BW=11.0MiB/s (11.5MB/s)(11.1MiB/1015msec); 0 zone resets 00:13:23.803 slat (usec): min=2, max=13361, 
avg=236.64, stdev=1144.12 00:13:23.803 clat (usec): min=1025, max=102244, avg=32688.41, stdev=24523.82 00:13:23.803 lat (usec): min=1035, max=102252, avg=32925.05, stdev=24662.24 00:13:23.803 clat percentiles (msec): 00:13:23.803 | 1.00th=[ 6], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 9], 00:13:23.803 | 30.00th=[ 16], 40.00th=[ 18], 50.00th=[ 26], 60.00th=[ 33], 00:13:23.803 | 70.00th=[ 43], 80.00th=[ 52], 90.00th=[ 74], 95.00th=[ 86], 00:13:23.803 | 99.00th=[ 94], 99.50th=[ 97], 99.90th=[ 103], 99.95th=[ 103], 00:13:23.803 | 99.99th=[ 103] 00:13:23.803 bw ( KiB/s): min= 8320, max=13424, per=17.68%, avg=10872.00, stdev=3609.07, samples=2 00:13:23.803 iops : min= 2080, max= 3356, avg=2718.00, stdev=902.27, samples=2 00:13:23.803 lat (msec) : 2=0.04%, 4=0.22%, 10=24.07%, 20=36.70%, 50=26.43% 00:13:23.803 lat (msec) : 100=12.43%, 250=0.11% 00:13:23.803 cpu : usr=1.78%, sys=2.86%, ctx=268, majf=0, minf=1 00:13:23.803 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:13:23.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.803 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:23.803 issued rwts: total=2560,2846,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:23.803 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:23.803 job1: (groupid=0, jobs=1): err= 0: pid=4160213: Thu Dec 5 11:56:57 2024 00:13:23.803 read: IOPS=2017, BW=8071KiB/s (8265kB/s)(8192KiB/1015msec) 00:13:23.803 slat (nsec): min=1381, max=9069.3k, avg=106180.53, stdev=655571.21 00:13:23.803 clat (usec): min=5250, max=32765, avg=11999.92, stdev=5299.91 00:13:23.803 lat (usec): min=5261, max=32768, avg=12106.11, stdev=5360.38 00:13:23.803 clat percentiles (usec): 00:13:23.803 | 1.00th=[ 6849], 5.00th=[ 7767], 10.00th=[ 9241], 20.00th=[ 9503], 00:13:23.803 | 30.00th=[ 9634], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[ 9896], 00:13:23.803 | 70.00th=[10814], 80.00th=[13829], 90.00th=[20055], 95.00th=[26084], 
00:13:23.803 | 99.00th=[31327], 99.50th=[32375], 99.90th=[32637], 99.95th=[32637], 00:13:23.803 | 99.99th=[32637] 00:13:23.803 write: IOPS=2480, BW=9923KiB/s (10.2MB/s)(9.84MiB/1015msec); 0 zone resets 00:13:23.803 slat (usec): min=2, max=16839, avg=306.10, stdev=1395.10 00:13:23.803 clat (usec): min=1394, max=142696, avg=41356.06, stdev=32881.58 00:13:23.803 lat (usec): min=1404, max=142705, avg=41662.16, stdev=33075.80 00:13:23.803 clat percentiles (msec): 00:13:23.803 | 1.00th=[ 4], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 9], 00:13:23.803 | 30.00th=[ 16], 40.00th=[ 26], 50.00th=[ 32], 60.00th=[ 48], 00:13:23.803 | 70.00th=[ 56], 80.00th=[ 65], 90.00th=[ 92], 95.00th=[ 108], 00:13:23.803 | 99.00th=[ 138], 99.50th=[ 142], 99.90th=[ 144], 99.95th=[ 144], 00:13:23.803 | 99.99th=[ 144] 00:13:23.803 bw ( KiB/s): min= 7216, max=11904, per=15.55%, avg=9560.00, stdev=3314.92, samples=2 00:13:23.803 iops : min= 1804, max= 2976, avg=2390.00, stdev=828.73, samples=2 00:13:23.803 lat (msec) : 2=0.11%, 4=0.53%, 10=40.87%, 20=17.41%, 50=20.72% 00:13:23.803 lat (msec) : 100=16.18%, 250=4.18% 00:13:23.803 cpu : usr=2.07%, sys=3.06%, ctx=274, majf=0, minf=1 00:13:23.803 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:13:23.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.803 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:23.803 issued rwts: total=2048,2518,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:23.803 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:23.803 job2: (groupid=0, jobs=1): err= 0: pid=4160220: Thu Dec 5 11:56:57 2024 00:13:23.803 read: IOPS=2730, BW=10.7MiB/s (11.2MB/s)(10.7MiB/1006msec) 00:13:23.803 slat (nsec): min=1699, max=17186k, avg=142994.69, stdev=1001209.69 00:13:23.803 clat (usec): min=3738, max=59000, avg=15796.44, stdev=8966.39 00:13:23.803 lat (usec): min=5905, max=59003, avg=15939.43, stdev=9062.23 00:13:23.803 clat percentiles (usec): 
00:13:23.803 | 1.00th=[ 6390], 5.00th=[ 8717], 10.00th=[10159], 20.00th=[10814], 00:13:23.803 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11994], 60.00th=[13042], 00:13:23.803 | 70.00th=[17433], 80.00th=[18744], 90.00th=[21627], 95.00th=[39584], 00:13:23.803 | 99.00th=[50594], 99.50th=[55837], 99.90th=[58983], 99.95th=[58983], 00:13:23.803 | 99.99th=[58983] 00:13:23.803 write: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec); 0 zone resets 00:13:23.803 slat (usec): min=3, max=49722, avg=191.37, stdev=1350.46 00:13:23.803 clat (usec): min=3079, max=88329, avg=24493.40, stdev=18632.54 00:13:23.803 lat (usec): min=3089, max=88554, avg=24684.76, stdev=18787.48 00:13:23.803 clat percentiles (usec): 00:13:23.803 | 1.00th=[ 4490], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9503], 00:13:23.803 | 30.00th=[ 9765], 40.00th=[16057], 50.00th=[17171], 60.00th=[21890], 00:13:23.803 | 70.00th=[27657], 80.00th=[40633], 90.00th=[51119], 95.00th=[55313], 00:13:23.803 | 99.00th=[85459], 99.50th=[86508], 99.90th=[88605], 99.95th=[88605], 00:13:23.803 | 99.99th=[88605] 00:13:23.803 bw ( KiB/s): min=12240, max=12336, per=19.98%, avg=12288.00, stdev=67.88, samples=2 00:13:23.803 iops : min= 3060, max= 3084, avg=3072.00, stdev=16.97, samples=2 00:13:23.803 lat (msec) : 4=0.33%, 10=20.97%, 20=48.84%, 50=22.65%, 100=7.22% 00:13:23.803 cpu : usr=2.79%, sys=4.08%, ctx=261, majf=0, minf=1 00:13:23.803 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:13:23.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.803 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:23.804 issued rwts: total=2747,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:23.804 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:23.804 job3: (groupid=0, jobs=1): err= 0: pid=4160221: Thu Dec 5 11:56:57 2024 00:13:23.804 read: IOPS=7124, BW=27.8MiB/s (29.2MB/s)(28.0MiB/1005msec) 00:13:23.804 slat (nsec): min=1309, max=8574.0k, 
avg=78376.40, stdev=563784.45 00:13:23.804 clat (usec): min=1690, max=17180, avg=9652.25, stdev=2397.48 00:13:23.804 lat (usec): min=3190, max=17191, avg=9730.62, stdev=2425.47 00:13:23.804 clat percentiles (usec): 00:13:23.804 | 1.00th=[ 3884], 5.00th=[ 6849], 10.00th=[ 7242], 20.00th=[ 8356], 00:13:23.804 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 8979], 60.00th=[ 9241], 00:13:23.804 | 70.00th=[ 9765], 80.00th=[11469], 90.00th=[13435], 95.00th=[14877], 00:13:23.804 | 99.00th=[16319], 99.50th=[16581], 99.90th=[16909], 99.95th=[17171], 00:13:23.804 | 99.99th=[17171] 00:13:23.804 write: IOPS=7132, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1005msec); 0 zone resets 00:13:23.804 slat (usec): min=2, max=6631, avg=55.13, stdev=186.87 00:13:23.804 clat (usec): min=422, max=17144, avg=8118.64, stdev=1924.37 00:13:23.804 lat (usec): min=435, max=17148, avg=8173.78, stdev=1937.90 00:13:23.804 clat percentiles (usec): 00:13:23.804 | 1.00th=[ 2474], 5.00th=[ 3851], 10.00th=[ 4948], 20.00th=[ 6783], 00:13:23.804 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 8979], 60.00th=[ 9110], 00:13:23.804 | 70.00th=[ 9110], 80.00th=[ 9241], 90.00th=[ 9241], 95.00th=[ 9372], 00:13:23.804 | 99.00th=[10421], 99.50th=[13960], 99.90th=[16581], 99.95th=[16909], 00:13:23.804 | 99.99th=[17171] 00:13:23.804 bw ( KiB/s): min=28568, max=28776, per=46.63%, avg=28672.00, stdev=147.08, samples=2 00:13:23.804 iops : min= 7142, max= 7194, avg=7168.00, stdev=36.77, samples=2 00:13:23.804 lat (usec) : 500=0.02%, 1000=0.05% 00:13:23.804 lat (msec) : 2=0.20%, 4=2.98%, 10=82.94%, 20=13.81% 00:13:23.804 cpu : usr=5.28%, sys=7.47%, ctx=942, majf=0, minf=1 00:13:23.804 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:13:23.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.804 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:23.804 issued rwts: total=7160,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:23.804 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:13:23.804 00:13:23.804 Run status group 0 (all jobs): 00:13:23.804 READ: bw=55.9MiB/s (58.6MB/s), 8071KiB/s-27.8MiB/s (8265kB/s-29.2MB/s), io=56.7MiB (59.5MB), run=1005-1015msec 00:13:23.804 WRITE: bw=60.1MiB/s (63.0MB/s), 9923KiB/s-27.9MiB/s (10.2MB/s-29.2MB/s), io=61.0MiB (63.9MB), run=1005-1015msec 00:13:23.804 00:13:23.804 Disk stats (read/write): 00:13:23.804 nvme0n1: ios=2098/2407, merge=0/0, ticks=27874/66557, in_queue=94431, util=81.96% 00:13:23.804 nvme0n2: ios=1674/2048, merge=0/0, ticks=19901/80377, in_queue=100278, util=85.92% 00:13:23.804 nvme0n3: ios=2071/2303, merge=0/0, ticks=31962/54272, in_queue=86234, util=95.22% 00:13:23.804 nvme0n4: ios=5654/5871, merge=0/0, ticks=52821/46505, in_queue=99326, util=97.35% 00:13:23.804 11:56:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:23.804 [global] 00:13:23.804 thread=1 00:13:23.804 invalidate=1 00:13:23.804 rw=randwrite 00:13:23.804 time_based=1 00:13:23.804 runtime=1 00:13:23.804 ioengine=libaio 00:13:23.804 direct=1 00:13:23.804 bs=4096 00:13:23.804 iodepth=128 00:13:23.804 norandommap=0 00:13:23.804 numjobs=1 00:13:23.804 00:13:23.804 verify_dump=1 00:13:23.804 verify_backlog=512 00:13:23.804 verify_state_save=0 00:13:23.804 do_verify=1 00:13:23.804 verify=crc32c-intel 00:13:23.804 [job0] 00:13:23.804 filename=/dev/nvme0n1 00:13:23.804 [job1] 00:13:23.804 filename=/dev/nvme0n2 00:13:23.804 [job2] 00:13:23.804 filename=/dev/nvme0n3 00:13:23.804 [job3] 00:13:23.804 filename=/dev/nvme0n4 00:13:23.804 Could not set queue depth (nvme0n1) 00:13:23.804 Could not set queue depth (nvme0n2) 00:13:23.804 Could not set queue depth (nvme0n3) 00:13:23.804 Could not set queue depth (nvme0n4) 00:13:24.061 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:24.061 job1: 
(g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:24.061 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:24.061 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:24.061 fio-3.35 00:13:24.061 Starting 4 threads 00:13:25.434 00:13:25.434 job0: (groupid=0, jobs=1): err= 0: pid=4160591: Thu Dec 5 11:56:59 2024 00:13:25.434 read: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec) 00:13:25.434 slat (nsec): min=1250, max=9746.3k, avg=77485.97, stdev=425222.54 00:13:25.434 clat (usec): min=4257, max=19326, avg=10245.71, stdev=1278.51 00:13:25.434 lat (usec): min=4265, max=19386, avg=10323.19, stdev=1306.96 00:13:25.434 clat percentiles (usec): 00:13:25.434 | 1.00th=[ 7635], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[ 9372], 00:13:25.434 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10290], 60.00th=[10421], 00:13:25.434 | 70.00th=[10683], 80.00th=[11076], 90.00th=[11469], 95.00th=[11863], 00:13:25.434 | 99.00th=[14877], 99.50th=[17171], 99.90th=[18744], 99.95th=[18744], 00:13:25.434 | 99.99th=[19268] 00:13:25.434 write: IOPS=6438, BW=25.1MiB/s (26.4MB/s)(25.2MiB/1004msec); 0 zone resets 00:13:25.434 slat (usec): min=2, max=3517, avg=74.84, stdev=395.13 00:13:25.434 clat (usec): min=600, max=19341, avg=9870.80, stdev=1629.89 00:13:25.434 lat (usec): min=614, max=19350, avg=9945.64, stdev=1663.84 00:13:25.434 clat percentiles (usec): 00:13:25.434 | 1.00th=[ 3032], 5.00th=[ 7111], 10.00th=[ 8979], 20.00th=[ 9634], 00:13:25.435 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10159], 00:13:25.435 | 70.00th=[10290], 80.00th=[10421], 90.00th=[10945], 95.00th=[11600], 00:13:25.435 | 99.00th=[13960], 99.50th=[16581], 99.90th=[18744], 99.95th=[19268], 00:13:25.435 | 99.99th=[19268] 00:13:25.435 bw ( KiB/s): min=25312, max=25384, per=32.75%, avg=25348.00, stdev=50.91, samples=2 
00:13:25.435 iops : min= 6328, max= 6346, avg=6337.00, stdev=12.73, samples=2 00:13:25.435 lat (usec) : 750=0.04% 00:13:25.435 lat (msec) : 2=0.20%, 4=0.50%, 10=46.13%, 20=53.13% 00:13:25.435 cpu : usr=4.19%, sys=7.58%, ctx=559, majf=0, minf=1 00:13:25.435 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:13:25.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.435 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:25.435 issued rwts: total=6144,6464,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:25.435 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:25.435 job1: (groupid=0, jobs=1): err= 0: pid=4160592: Thu Dec 5 11:56:59 2024 00:13:25.435 read: IOPS=6083, BW=23.8MiB/s (24.9MB/s)(24.0MiB/1010msec) 00:13:25.435 slat (nsec): min=1459, max=9598.5k, avg=87014.34, stdev=638661.94 00:13:25.435 clat (usec): min=3396, max=20670, avg=10699.02, stdev=2540.75 00:13:25.435 lat (usec): min=3403, max=20679, avg=10786.04, stdev=2583.39 00:13:25.435 clat percentiles (usec): 00:13:25.435 | 1.00th=[ 4424], 5.00th=[ 7767], 10.00th=[ 8455], 20.00th=[ 9372], 00:13:25.435 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10290], 00:13:25.435 | 70.00th=[10552], 80.00th=[11994], 90.00th=[14877], 95.00th=[16450], 00:13:25.435 | 99.00th=[17957], 99.50th=[18482], 99.90th=[19006], 99.95th=[19006], 00:13:25.435 | 99.99th=[20579] 00:13:25.435 write: IOPS=6524, BW=25.5MiB/s (26.7MB/s)(25.7MiB/1010msec); 0 zone resets 00:13:25.435 slat (usec): min=2, max=7772, avg=65.11, stdev=307.87 00:13:25.435 clat (usec): min=1566, max=27938, avg=9419.35, stdev=2694.59 00:13:25.435 lat (usec): min=1579, max=27942, avg=9484.46, stdev=2717.69 00:13:25.435 clat percentiles (usec): 00:13:25.435 | 1.00th=[ 3228], 5.00th=[ 4817], 10.00th=[ 5997], 20.00th=[ 8160], 00:13:25.435 | 30.00th=[ 9241], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10028], 00:13:25.435 | 70.00th=[10159], 80.00th=[10159], 
90.00th=[10683], 95.00th=[11338], 00:13:25.435 | 99.00th=[22414], 99.50th=[25297], 99.90th=[27395], 99.95th=[27919], 00:13:25.435 | 99.99th=[27919] 00:13:25.435 bw ( KiB/s): min=25096, max=26608, per=33.40%, avg=25852.00, stdev=1069.15, samples=2 00:13:25.435 iops : min= 6274, max= 6652, avg=6463.00, stdev=267.29, samples=2 00:13:25.435 lat (msec) : 2=0.02%, 4=1.48%, 10=54.19%, 20=43.69%, 50=0.62% 00:13:25.435 cpu : usr=4.66%, sys=6.44%, ctx=783, majf=0, minf=1 00:13:25.435 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:13:25.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.435 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:25.435 issued rwts: total=6144,6590,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:25.435 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:25.435 job2: (groupid=0, jobs=1): err= 0: pid=4160593: Thu Dec 5 11:56:59 2024 00:13:25.435 read: IOPS=3126, BW=12.2MiB/s (12.8MB/s)(12.3MiB/1010msec) 00:13:25.435 slat (nsec): min=1730, max=20999k, avg=166799.30, stdev=1154989.00 00:13:25.435 clat (msec): min=5, max=113, avg=18.19, stdev=14.95 00:13:25.435 lat (msec): min=5, max=113, avg=18.36, stdev=15.08 00:13:25.435 clat percentiles (msec): 00:13:25.435 | 1.00th=[ 7], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 12], 00:13:25.435 | 30.00th=[ 13], 40.00th=[ 13], 50.00th=[ 13], 60.00th=[ 15], 00:13:25.435 | 70.00th=[ 17], 80.00th=[ 19], 90.00th=[ 27], 95.00th=[ 50], 00:13:25.435 | 99.00th=[ 100], 99.50th=[ 102], 99.90th=[ 113], 99.95th=[ 113], 00:13:25.435 | 99.99th=[ 113] 00:13:25.435 write: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec); 0 zone resets 00:13:25.435 slat (usec): min=3, max=10095, avg=125.18, stdev=693.08 00:13:25.435 clat (msec): min=2, max=113, avg=19.64, stdev=15.10 00:13:25.435 lat (msec): min=2, max=113, avg=19.76, stdev=15.17 00:13:25.435 clat percentiles (msec): 00:13:25.435 | 1.00th=[ 6], 5.00th=[ 8], 10.00th=[ 11], 20.00th=[ 
11], 00:13:25.435 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 14], 60.00th=[ 20], 00:13:25.435 | 70.00th=[ 22], 80.00th=[ 23], 90.00th=[ 40], 95.00th=[ 55], 00:13:25.435 | 99.00th=[ 86], 99.50th=[ 99], 99.90th=[ 105], 99.95th=[ 113], 00:13:25.435 | 99.99th=[ 113] 00:13:25.435 bw ( KiB/s): min=12288, max=16056, per=18.31%, avg=14172.00, stdev=2664.38, samples=2 00:13:25.435 iops : min= 3072, max= 4014, avg=3543.00, stdev=666.09, samples=2 00:13:25.435 lat (msec) : 4=0.18%, 10=5.87%, 20=64.31%, 50=24.15%, 100=4.94% 00:13:25.435 lat (msec) : 250=0.55% 00:13:25.435 cpu : usr=3.17%, sys=4.36%, ctx=327, majf=0, minf=1 00:13:25.435 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:25.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.435 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:25.435 issued rwts: total=3158,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:25.435 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:25.435 job3: (groupid=0, jobs=1): err= 0: pid=4160594: Thu Dec 5 11:56:59 2024 00:13:25.435 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec) 00:13:25.435 slat (nsec): min=1262, max=14411k, avg=124858.25, stdev=957130.55 00:13:25.435 clat (usec): min=5849, max=36202, avg=16299.41, stdev=5360.04 00:13:25.435 lat (usec): min=5855, max=36226, avg=16424.27, stdev=5468.70 00:13:25.435 clat percentiles (usec): 00:13:25.435 | 1.00th=[ 7308], 5.00th=[ 9896], 10.00th=[11863], 20.00th=[12256], 00:13:25.435 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13698], 60.00th=[17433], 00:13:25.435 | 70.00th=[20579], 80.00th=[21627], 90.00th=[22938], 95.00th=[25035], 00:13:25.435 | 99.00th=[30540], 99.50th=[31851], 99.90th=[34866], 99.95th=[35914], 00:13:25.435 | 99.99th=[36439] 00:13:25.435 write: IOPS=2886, BW=11.3MiB/s (11.8MB/s)(11.3MiB/1006msec); 0 zone resets 00:13:25.435 slat (usec): min=2, max=11860, avg=211.32, stdev=1103.76 00:13:25.435 clat (usec): 
min=465, max=115273, avg=29609.28, stdev=28305.65 00:13:25.435 lat (usec): min=496, max=115288, avg=29820.60, stdev=28468.05 00:13:25.435 clat percentiles (usec): 00:13:25.435 | 1.00th=[ 1713], 5.00th=[ 5735], 10.00th=[ 6915], 20.00th=[ 10421], 00:13:25.435 | 30.00th=[ 11338], 40.00th=[ 20317], 50.00th=[ 21890], 60.00th=[ 22152], 00:13:25.435 | 70.00th=[ 22938], 80.00th=[ 43254], 90.00th=[ 82314], 95.00th=[101188], 00:13:25.435 | 99.00th=[112722], 99.50th=[114820], 99.90th=[114820], 99.95th=[114820], 00:13:25.435 | 99.99th=[114820] 00:13:25.435 bw ( KiB/s): min= 9928, max=12288, per=14.35%, avg=11108.00, stdev=1668.77, samples=2 00:13:25.435 iops : min= 2482, max= 3072, avg=2777.00, stdev=417.19, samples=2 00:13:25.435 lat (usec) : 500=0.02% 00:13:25.435 lat (msec) : 2=0.64%, 4=0.66%, 10=11.44%, 20=39.15%, 50=38.78% 00:13:25.435 lat (msec) : 100=6.33%, 250=2.98% 00:13:25.435 cpu : usr=1.69%, sys=3.88%, ctx=312, majf=0, minf=2 00:13:25.435 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:13:25.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.435 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:25.435 issued rwts: total=2560,2904,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:25.435 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:25.435 00:13:25.435 Run status group 0 (all jobs): 00:13:25.435 READ: bw=69.6MiB/s (73.0MB/s), 9.94MiB/s-23.9MiB/s (10.4MB/s-25.1MB/s), io=70.3MiB (73.8MB), run=1004-1010msec 00:13:25.435 WRITE: bw=75.6MiB/s (79.3MB/s), 11.3MiB/s-25.5MiB/s (11.8MB/s-26.7MB/s), io=76.3MiB (80.0MB), run=1004-1010msec 00:13:25.435 00:13:25.435 Disk stats (read/write): 00:13:25.435 nvme0n1: ios=5183/5632, merge=0/0, ticks=21572/20600, in_queue=42172, util=96.39% 00:13:25.435 nvme0n2: ios=5286/5632, merge=0/0, ticks=54288/50520, in_queue=104808, util=91.06% 00:13:25.435 nvme0n3: ios=2614/3004, merge=0/0, ticks=49168/56046, in_queue=105214, util=93.96% 
00:13:25.435 nvme0n4: ios=2105/2223, merge=0/0, ticks=34635/64191, in_queue=98826, util=95.07% 00:13:25.435 11:56:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:13:25.435 11:56:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=4160825 00:13:25.435 11:56:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:25.435 11:56:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:13:25.435 [global] 00:13:25.435 thread=1 00:13:25.435 invalidate=1 00:13:25.435 rw=read 00:13:25.435 time_based=1 00:13:25.435 runtime=10 00:13:25.435 ioengine=libaio 00:13:25.435 direct=1 00:13:25.435 bs=4096 00:13:25.435 iodepth=1 00:13:25.435 norandommap=1 00:13:25.435 numjobs=1 00:13:25.435 00:13:25.435 [job0] 00:13:25.435 filename=/dev/nvme0n1 00:13:25.435 [job1] 00:13:25.435 filename=/dev/nvme0n2 00:13:25.435 [job2] 00:13:25.435 filename=/dev/nvme0n3 00:13:25.435 [job3] 00:13:25.435 filename=/dev/nvme0n4 00:13:25.435 Could not set queue depth (nvme0n1) 00:13:25.435 Could not set queue depth (nvme0n2) 00:13:25.435 Could not set queue depth (nvme0n3) 00:13:25.435 Could not set queue depth (nvme0n4) 00:13:25.694 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:25.694 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:25.694 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:25.694 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:25.694 fio-3.35 00:13:25.694 Starting 4 threads 00:13:28.980 11:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 
00:13:28.980 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=46481408, buflen=4096 00:13:28.980 fio: pid=4160971, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:28.980 11:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:28.980 11:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:28.980 11:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:28.980 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=593920, buflen=4096 00:13:28.980 fio: pid=4160970, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:28.980 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=53694464, buflen=4096 00:13:28.980 fio: pid=4160968, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:28.980 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:28.980 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:29.239 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=57806848, buflen=4096 00:13:29.239 fio: pid=4160969, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:29.239 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:29.239 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:29.239 00:13:29.239 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4160968: Thu Dec 5 11:57:03 2024 00:13:29.239 read: IOPS=4266, BW=16.7MiB/s (17.5MB/s)(51.2MiB/3073msec) 00:13:29.239 slat (usec): min=3, max=34540, avg=11.90, stdev=318.27 00:13:29.239 clat (usec): min=169, max=2906, avg=219.63, stdev=34.51 00:13:29.239 lat (usec): min=177, max=34884, avg=231.54, stdev=321.59 00:13:29.239 clat percentiles (usec): 00:13:29.239 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 204], 00:13:29.239 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 221], 00:13:29.239 | 70.00th=[ 225], 80.00th=[ 231], 90.00th=[ 245], 95.00th=[ 265], 00:13:29.239 | 99.00th=[ 293], 99.50th=[ 322], 99.90th=[ 420], 99.95th=[ 474], 00:13:29.239 | 99.99th=[ 1287] 00:13:29.239 bw ( KiB/s): min=15368, max=17864, per=36.55%, avg=17152.00, stdev=1042.57, samples=6 00:13:29.239 iops : min= 3842, max= 4466, avg=4288.00, stdev=260.64, samples=6 00:13:29.239 lat (usec) : 250=91.75%, 500=8.20%, 750=0.02% 00:13:29.239 lat (msec) : 2=0.01%, 4=0.01% 00:13:29.239 cpu : usr=2.18%, sys=6.90%, ctx=13112, majf=0, minf=1 00:13:29.239 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:29.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.239 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.239 issued rwts: total=13110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:29.239 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:29.239 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4160969: Thu Dec 5 11:57:03 2024 00:13:29.239 read: IOPS=4276, BW=16.7MiB/s (17.5MB/s)(55.1MiB/3300msec) 00:13:29.239 slat (usec): min=6, max=15082, avg=12.00, stdev=246.04 00:13:29.239 clat (usec): min=165, 
max=10349, avg=220.18, stdev=105.32 00:13:29.239 lat (usec): min=172, max=15497, avg=232.18, stdev=270.61 00:13:29.239 clat percentiles (usec): 00:13:29.239 | 1.00th=[ 182], 5.00th=[ 192], 10.00th=[ 198], 20.00th=[ 204], 00:13:29.239 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 221], 00:13:29.239 | 70.00th=[ 225], 80.00th=[ 231], 90.00th=[ 239], 95.00th=[ 247], 00:13:29.239 | 99.00th=[ 289], 99.50th=[ 338], 99.90th=[ 529], 99.95th=[ 578], 00:13:29.239 | 99.99th=[ 5997] 00:13:29.239 bw ( KiB/s): min=16753, max=17664, per=37.00%, avg=17365.50, stdev=411.67, samples=6 00:13:29.239 iops : min= 4188, max= 4416, avg=4341.33, stdev=102.99, samples=6 00:13:29.239 lat (usec) : 250=96.07%, 500=3.79%, 750=0.10%, 1000=0.01% 00:13:29.239 lat (msec) : 4=0.01%, 10=0.01%, 20=0.01% 00:13:29.239 cpu : usr=0.82%, sys=4.00%, ctx=14121, majf=0, minf=2 00:13:29.239 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:29.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.239 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.239 issued rwts: total=14114,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:29.239 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:29.239 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4160970: Thu Dec 5 11:57:03 2024 00:13:29.239 read: IOPS=50, BW=201KiB/s (206kB/s)(580KiB/2885msec) 00:13:29.239 slat (nsec): min=4881, max=30334, avg=12724.99, stdev=5115.63 00:13:29.239 clat (usec): min=208, max=44054, avg=19734.97, stdev=20395.07 00:13:29.239 lat (usec): min=224, max=44068, avg=19747.62, stdev=20396.43 00:13:29.239 clat percentiles (usec): 00:13:29.239 | 1.00th=[ 219], 5.00th=[ 235], 10.00th=[ 243], 20.00th=[ 269], 00:13:29.239 | 30.00th=[ 285], 40.00th=[ 306], 50.00th=[ 502], 60.00th=[40633], 00:13:29.239 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:29.239 | 
99.00th=[43779], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:13:29.239 | 99.99th=[44303] 00:13:29.239 bw ( KiB/s): min= 128, max= 368, per=0.42%, avg=198.40, stdev=99.82, samples=5 00:13:29.239 iops : min= 32, max= 92, avg=49.60, stdev=24.96, samples=5 00:13:29.239 lat (usec) : 250=13.01%, 500=36.30%, 750=2.05% 00:13:29.239 lat (msec) : 10=0.68%, 50=47.26% 00:13:29.239 cpu : usr=0.14%, sys=0.00%, ctx=146, majf=0, minf=2 00:13:29.239 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:29.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.239 complete : 0=0.7%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.239 issued rwts: total=146,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:29.239 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:29.239 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4160971: Thu Dec 5 11:57:03 2024 00:13:29.239 read: IOPS=4237, BW=16.6MiB/s (17.4MB/s)(44.3MiB/2678msec) 00:13:29.239 slat (nsec): min=6445, max=30064, avg=7364.65, stdev=904.76 00:13:29.239 clat (usec): min=182, max=715, avg=225.21, stdev=21.24 00:13:29.239 lat (usec): min=189, max=723, avg=232.58, stdev=21.40 00:13:29.239 clat percentiles (usec): 00:13:29.239 | 1.00th=[ 194], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 210], 00:13:29.239 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 227], 00:13:29.239 | 70.00th=[ 231], 80.00th=[ 237], 90.00th=[ 251], 95.00th=[ 265], 00:13:29.239 | 99.00th=[ 285], 99.50th=[ 293], 99.90th=[ 412], 99.95th=[ 490], 00:13:29.239 | 99.99th=[ 603] 00:13:29.239 bw ( KiB/s): min=15736, max=17512, per=36.47%, avg=17116.80, stdev=773.14, samples=5 00:13:29.239 iops : min= 3934, max= 4378, avg=4279.20, stdev=193.29, samples=5 00:13:29.239 lat (usec) : 250=89.83%, 500=10.12%, 750=0.04% 00:13:29.239 cpu : usr=1.27%, sys=3.66%, ctx=11351, majf=0, minf=2 00:13:29.239 IO depths : 1=100.0%, 2=0.0%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:29.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.239 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.239 issued rwts: total=11349,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:29.239 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:29.239 00:13:29.239 Run status group 0 (all jobs): 00:13:29.240 READ: bw=45.8MiB/s (48.1MB/s), 201KiB/s-16.7MiB/s (206kB/s-17.5MB/s), io=151MiB (159MB), run=2678-3300msec 00:13:29.240 00:13:29.240 Disk stats (read/write): 00:13:29.240 nvme0n1: ios=13077/0, merge=0/0, ticks=2737/0, in_queue=2737, util=92.79% 00:13:29.240 nvme0n2: ios=13242/0, merge=0/0, ticks=2872/0, in_queue=2872, util=94.19% 00:13:29.240 nvme0n3: ios=137/0, merge=0/0, ticks=2779/0, in_queue=2779, util=96.13% 00:13:29.240 nvme0n4: ios=10969/0, merge=0/0, ticks=2548/0, in_queue=2548, util=99.10% 00:13:29.499 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:29.499 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:29.758 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:29.758 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:29.758 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:29.758 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:30.016 11:57:04 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:30.016 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:30.275 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:13:30.275 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 4160825 00:13:30.275 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:13:30.275 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:30.275 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.275 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:30.275 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:13:30.275 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:30.275 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:30.275 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:30.275 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:30.534 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:13:30.534 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:30.534 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:30.534 nvmf hotplug test: fio 
failed as expected 00:13:30.534 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:30.534 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:30.534 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:30.534 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:30.534 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:30.534 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:13:30.534 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:13:30.534 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@99 -- # sync 00:13:30.534 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:13:30.534 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # set +e 00:13:30.534 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:13:30.534 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:13:30.534 rmmod nvme_tcp 00:13:30.534 rmmod nvme_fabrics 00:13:30.793 rmmod nvme_keyring 00:13:30.793 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:13:30.793 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # set -e 00:13:30.793 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # return 0 00:13:30.793 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # '[' -n 4157890 ']' 00:13:30.793 11:57:04 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@337 -- # killprocess 4157890 00:13:30.793 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 4157890 ']' 00:13:30.793 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 4157890 00:13:30.793 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:13:30.793 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:30.793 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4157890 00:13:30.793 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:30.793 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:30.793 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4157890' 00:13:30.793 killing process with pid 4157890 00:13:30.793 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 4157890 00:13:30.793 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 4157890 00:13:30.793 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:13:30.794 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # nvmf_fini 00:13:30.794 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@264 -- # local dev 00:13:30.794 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@267 -- # remove_target_ns 00:13:30.794 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:30.794 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:30.794 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@268 -- # delete_main_bridge 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@130 -- # return 0 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@221 -- # local 
dev=cvl_0_1 in_ns= 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@41 -- # _dev=0 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@41 -- # dev_map=() 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@284 -- # iptr 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@542 -- # iptables-save 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@542 -- # iptables-restore 00:13:33.330 00:13:33.330 real 0m27.668s 00:13:33.330 user 1m50.150s 00:13:33.330 sys 0m9.212s 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.330 ************************************ 00:13:33.330 END TEST nvmf_fio_target 00:13:33.330 ************************************ 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:13:33.330 ************************************ 00:13:33.330 START TEST nvmf_bdevio 00:13:33.330 ************************************ 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:33.330 * Looking for test storage... 00:13:33.330 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # 
ver2_l=1 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:33.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.330 --rc genhtml_branch_coverage=1 00:13:33.330 --rc genhtml_function_coverage=1 00:13:33.330 --rc genhtml_legend=1 00:13:33.330 --rc geninfo_all_blocks=1 00:13:33.330 --rc geninfo_unexecuted_blocks=1 00:13:33.330 00:13:33.330 ' 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:33.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.330 --rc genhtml_branch_coverage=1 00:13:33.330 --rc genhtml_function_coverage=1 00:13:33.330 --rc genhtml_legend=1 00:13:33.330 --rc geninfo_all_blocks=1 00:13:33.330 --rc geninfo_unexecuted_blocks=1 00:13:33.330 00:13:33.330 ' 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:33.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.330 --rc genhtml_branch_coverage=1 00:13:33.330 --rc genhtml_function_coverage=1 00:13:33.330 --rc genhtml_legend=1 00:13:33.330 --rc geninfo_all_blocks=1 00:13:33.330 --rc geninfo_unexecuted_blocks=1 00:13:33.330 00:13:33.330 ' 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:33.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.330 --rc genhtml_branch_coverage=1 00:13:33.330 --rc genhtml_function_coverage=1 00:13:33.330 --rc genhtml_legend=1 00:13:33.330 --rc geninfo_all_blocks=1 00:13:33.330 --rc geninfo_unexecuted_blocks=1 00:13:33.330 00:13:33.330 ' 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # uname -s 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:13:33.330 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:33.331 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:33.331 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:33.331 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:13:33.331 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:13:33.331 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:33.331 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:33.331 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:13:33.331 11:57:07 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:33.331 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:33.331 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:33.331 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.331 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.331 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.331 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:13:33.331 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.331 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:13:33.331 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:13:33.331 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:33.331 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:13:33.331 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@50 -- # : 0 00:13:33.331 
11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:13:33.331 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:13:33.331 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:13:33.331 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:33.331 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:33.331 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:13:33.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:13:33.331 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:13:33.331 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:13:33.331 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@54 -- # have_pci_nics=0 00:13:33.331 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:33.331 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:33.331 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:13:33.331 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:13:33.331 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:33.331 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # prepare_net_devs 00:13:33.331 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # local -g is_hw=no 00:13:33.331 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # remove_target_ns 00:13:33.331 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:33.331 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:33.331 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:33.331 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:13:33.331 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:13:33.331 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # xtrace_disable 00:13:33.331 11:57:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:40.084 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:40.084 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@131 -- # pci_devs=() 00:13:40.084 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@131 -- # local -a pci_devs 00:13:40.084 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@132 -- # pci_net_devs=() 00:13:40.084 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:13:40.084 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@133 -- # pci_drivers=() 00:13:40.084 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@133 -- # local -A pci_drivers 00:13:40.084 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@135 -- # net_devs=() 00:13:40.084 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@135 -- # local -ga net_devs 00:13:40.084 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@136 -- # e810=() 00:13:40.084 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@136 -- # local -ga e810 00:13:40.084 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@137 -- # 
x722=() 00:13:40.084 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@137 -- # local -ga x722 00:13:40.084 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@138 -- # mlx=() 00:13:40.084 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@138 -- # local -ga mlx 00:13:40.084 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:40.084 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:40.084 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:40.084 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:40.084 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:40.084 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:40.084 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:40.084 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:40.084 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:40.084 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:40.084 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:40.084 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:13:40.085 
11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:40.085 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:40.085 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # [[ up == up ]] 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:40.085 Found net devices under 0000:86:00.0: cvl_0_0 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@232 -- # [[ 
tcp == tcp ]] 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # [[ up == up ]] 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:40.085 Found net devices under 0000:86:00.1: cvl_0_1 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # is_hw=yes 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@257 -- # create_target_ns 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:13:40.085 11:57:13 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@27 -- # local -gA dev_map 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@28 -- # local -g _dev 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@44 -- # ips=() 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@44 -- # local id=0 type=phy 
ip=167772161 transport=tcp ips 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # 
[[ -n '' ]] 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772161 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:13:40.085 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:13:40.085 10.0.0.1 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772162 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # 
ip=10.0.0.2 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:13:40.086 10.0.0.2 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- 
# ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@38 -- # ping_ips 1 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=initiator0 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:13:40.086 11:57:13 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:13:40.086 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:40.086 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.357 ms 00:13:40.086 00:13:40.086 --- 10.0.0.1 ping statistics --- 00:13:40.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.086 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev target0 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=target0 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:13:40.086 11:57:13 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:13:40.086 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:40.086 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:13:40.086 00:13:40.086 --- 10.0.0.2 ping statistics --- 00:13:40.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.086 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # (( pair++ )) 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # return 0 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:13:40.086 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=initiator0 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ 
-n initiator0 ]] 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=initiator1 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:13:40.087 11:57:13 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # return 1 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev= 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@169 -- # return 0 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev target0 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=target0 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev target1 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=target1 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # return 1 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev= 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@169 -- # return 0 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:13:40.087 
11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # nvmfpid=4165973 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # waitforlisten 4165973 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 4165973 ']' 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:40.087 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:40.087 [2024-12-05 11:57:13.504661] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:13:40.087 [2024-12-05 11:57:13.504705] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:40.087 [2024-12-05 11:57:13.580841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:40.087 [2024-12-05 11:57:13.622705] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:40.087 [2024-12-05 11:57:13.622741] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:40.087 [2024-12-05 11:57:13.622748] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:40.087 [2024-12-05 11:57:13.622758] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:40.087 [2024-12-05 11:57:13.622763] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:40.087 [2024-12-05 11:57:13.624343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:13:40.087 [2024-12-05 11:57:13.624452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:13:40.087 [2024-12-05 11:57:13.624559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:13:40.087 [2024-12-05 11:57:13.624559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:40.346 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:40.346 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:13:40.346 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:13:40.346 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:40.346 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:40.346 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:40.346 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:40.346 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.346 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:40.346 [2024-12-05 11:57:14.383685] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:40.346 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.346 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:40.346 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.346 11:57:14 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:40.346 Malloc0 00:13:40.346 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.346 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:40.346 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.346 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:40.346 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.346 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:40.346 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.346 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:40.346 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.346 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:40.346 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.346 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:40.346 [2024-12-05 11:57:14.448427] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:40.346 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.346 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:13:40.346 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:40.346 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # config=() 00:13:40.346 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # local subsystem config 00:13:40.346 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:13:40.346 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:13:40.346 { 00:13:40.346 "params": { 00:13:40.346 "name": "Nvme$subsystem", 00:13:40.346 "trtype": "$TEST_TRANSPORT", 00:13:40.346 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:40.346 "adrfam": "ipv4", 00:13:40.346 "trsvcid": "$NVMF_PORT", 00:13:40.346 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:40.346 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:40.346 "hdgst": ${hdgst:-false}, 00:13:40.346 "ddgst": ${ddgst:-false} 00:13:40.346 }, 00:13:40.346 "method": "bdev_nvme_attach_controller" 00:13:40.346 } 00:13:40.347 EOF 00:13:40.347 )") 00:13:40.347 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # cat 00:13:40.347 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@396 -- # jq . 
00:13:40.347 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@397 -- # IFS=, 00:13:40.347 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:13:40.347 "params": { 00:13:40.347 "name": "Nvme1", 00:13:40.347 "trtype": "tcp", 00:13:40.347 "traddr": "10.0.0.2", 00:13:40.347 "adrfam": "ipv4", 00:13:40.347 "trsvcid": "4420", 00:13:40.347 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:40.347 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:40.347 "hdgst": false, 00:13:40.347 "ddgst": false 00:13:40.347 }, 00:13:40.347 "method": "bdev_nvme_attach_controller" 00:13:40.347 }' 00:13:40.347 [2024-12-05 11:57:14.502318] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:13:40.347 [2024-12-05 11:57:14.502364] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4166077 ] 00:13:40.604 [2024-12-05 11:57:14.581285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:40.604 [2024-12-05 11:57:14.624630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:40.604 [2024-12-05 11:57:14.624660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.604 [2024-12-05 11:57:14.624660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:40.860 I/O targets: 00:13:40.860 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:40.860 00:13:40.860 00:13:40.860 CUnit - A unit testing framework for C - Version 2.1-3 00:13:40.860 http://cunit.sourceforge.net/ 00:13:40.860 00:13:40.860 00:13:40.860 Suite: bdevio tests on: Nvme1n1 00:13:40.860 Test: blockdev write read block ...passed 00:13:40.860 Test: blockdev write zeroes read block ...passed 00:13:40.860 Test: blockdev write zeroes read no split ...passed 00:13:40.860 Test: blockdev write zeroes read split 
...passed 00:13:40.860 Test: blockdev write zeroes read split partial ...passed 00:13:40.860 Test: blockdev reset ...[2024-12-05 11:57:15.024841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:13:40.860 [2024-12-05 11:57:15.024900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e7350 (9): Bad file descriptor 00:13:41.115 [2024-12-05 11:57:15.126589] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:13:41.115 passed 00:13:41.115 Test: blockdev write read 8 blocks ...passed 00:13:41.115 Test: blockdev write read size > 128k ...passed 00:13:41.115 Test: blockdev write read invalid size ...passed 00:13:41.115 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:41.115 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:41.115 Test: blockdev write read max offset ...passed 00:13:41.115 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:41.115 Test: blockdev writev readv 8 blocks ...passed 00:13:41.115 Test: blockdev writev readv 30 x 1block ...passed 00:13:41.115 Test: blockdev writev readv block ...passed 00:13:41.115 Test: blockdev writev readv size > 128k ...passed 00:13:41.115 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:41.115 Test: blockdev comparev and writev ...[2024-12-05 11:57:15.297076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:41.115 [2024-12-05 11:57:15.297105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:41.115 [2024-12-05 11:57:15.297119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:41.115 [2024-12-05 
11:57:15.297132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:41.115 [2024-12-05 11:57:15.297365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:41.115 [2024-12-05 11:57:15.297381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:41.115 [2024-12-05 11:57:15.297393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:41.115 [2024-12-05 11:57:15.297400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:41.115 [2024-12-05 11:57:15.297642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:41.115 [2024-12-05 11:57:15.297653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:41.115 [2024-12-05 11:57:15.297664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:41.115 [2024-12-05 11:57:15.297671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:41.115 [2024-12-05 11:57:15.297913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:41.115 [2024-12-05 11:57:15.297924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:41.115 [2024-12-05 11:57:15.297936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:13:41.115 [2024-12-05 11:57:15.297944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:41.371 passed 00:13:41.372 Test: blockdev nvme passthru rw ...passed 00:13:41.372 Test: blockdev nvme passthru vendor specific ...[2024-12-05 11:57:15.379705] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:41.372 [2024-12-05 11:57:15.379729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:41.372 [2024-12-05 11:57:15.379834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:41.372 [2024-12-05 11:57:15.379845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:41.372 [2024-12-05 11:57:15.379944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:41.372 [2024-12-05 11:57:15.379954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:41.372 [2024-12-05 11:57:15.380055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:41.372 [2024-12-05 11:57:15.380066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:41.372 passed 00:13:41.372 Test: blockdev nvme admin passthru ...passed 00:13:41.372 Test: blockdev copy ...passed 00:13:41.372 00:13:41.372 Run Summary: Type Total Ran Passed Failed Inactive 00:13:41.372 suites 1 1 n/a 0 0 00:13:41.372 tests 23 23 23 0 0 00:13:41.372 asserts 152 152 152 0 n/a 00:13:41.372 00:13:41.372 Elapsed time = 1.227 seconds 
00:13:41.629 11:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:41.629 11:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.629 11:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:41.629 11:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.629 11:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:41.629 11:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:13:41.629 11:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # nvmfcleanup 00:13:41.629 11:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@99 -- # sync 00:13:41.629 11:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:13:41.629 11:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # set +e 00:13:41.629 11:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # for i in {1..20} 00:13:41.629 11:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:13:41.629 rmmod nvme_tcp 00:13:41.629 rmmod nvme_fabrics 00:13:41.629 rmmod nvme_keyring 00:13:41.629 11:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:13:41.629 11:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # set -e 00:13:41.629 11:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # return 0 00:13:41.629 11:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # '[' -n 4165973 ']' 00:13:41.629 11:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@337 -- # killprocess 4165973 00:13:41.629 11:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- 
# '[' -z 4165973 ']' 00:13:41.629 11:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 4165973 00:13:41.629 11:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:13:41.629 11:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:41.629 11:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4165973 00:13:41.629 11:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:13:41.629 11:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:13:41.629 11:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4165973' 00:13:41.629 killing process with pid 4165973 00:13:41.629 11:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 4165973 00:13:41.629 11:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 4165973 00:13:41.887 11:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:13:41.887 11:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # nvmf_fini 00:13:41.887 11:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@264 -- # local dev 00:13:41.887 11:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@267 -- # remove_target_ns 00:13:41.887 11:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:41.887 11:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:41.887 11:57:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:43.791 11:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@268 
-- # delete_main_bridge 00:13:43.791 11:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:13:43.791 11:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@130 -- # return 0 00:13:43.791 11:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:13:43.791 11:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:13:43.791 11:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:13:43.791 11:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:13:43.791 11:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:13:43.791 11:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:13:43.791 11:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:13:43.791 11:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:13:43.791 11:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:13:43.791 11:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:13:43.791 11:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:13:43.791 11:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:13:43.791 11:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:13:43.791 11:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:13:43.791 11:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:13:43.791 11:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@224 -- # ip addr flush 
dev cvl_0_1 00:13:43.791 11:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:13:43.791 11:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@41 -- # _dev=0 00:13:43.791 11:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@41 -- # dev_map=() 00:13:43.791 11:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@284 -- # iptr 00:13:43.791 11:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@542 -- # iptables-save 00:13:43.791 11:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:13:43.791 11:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@542 -- # iptables-restore 00:13:43.791 00:13:43.791 real 0m10.833s 00:13:43.791 user 0m13.144s 00:13:43.791 sys 0m5.125s 00:13:43.791 11:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:43.791 11:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:43.791 ************************************ 00:13:43.791 END TEST nvmf_bdevio 00:13:43.791 ************************************ 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # [[ tcp == \t\c\p ]] 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # [[ phy != phy ]] 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:44.051 ************************************ 00:13:44.051 START TEST nvmf_zcopy 00:13:44.051 ************************************ 
00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:44.051 * Looking for test storage... 00:13:44.051 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:44.051 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.051 --rc genhtml_branch_coverage=1 00:13:44.051 --rc genhtml_function_coverage=1 00:13:44.051 --rc genhtml_legend=1 00:13:44.051 --rc geninfo_all_blocks=1 00:13:44.051 --rc geninfo_unexecuted_blocks=1 00:13:44.051 00:13:44.051 ' 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:44.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.051 --rc genhtml_branch_coverage=1 00:13:44.051 --rc genhtml_function_coverage=1 00:13:44.051 --rc genhtml_legend=1 00:13:44.051 --rc geninfo_all_blocks=1 00:13:44.051 --rc geninfo_unexecuted_blocks=1 00:13:44.051 00:13:44.051 ' 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:44.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.051 --rc genhtml_branch_coverage=1 00:13:44.051 --rc genhtml_function_coverage=1 00:13:44.051 --rc genhtml_legend=1 00:13:44.051 --rc geninfo_all_blocks=1 00:13:44.051 --rc geninfo_unexecuted_blocks=1 00:13:44.051 00:13:44.051 ' 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:44.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.051 --rc genhtml_branch_coverage=1 00:13:44.051 --rc genhtml_function_coverage=1 00:13:44.051 --rc genhtml_legend=1 00:13:44.051 --rc geninfo_all_blocks=1 00:13:44.051 --rc geninfo_unexecuted_blocks=1 00:13:44.051 00:13:44.051 ' 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 
-- # NVMF_PORT=4420 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:44.051 11:57:18 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:44.051 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.052 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.052 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.052 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:13:44.052 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.052 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:13:44.312 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:13:44.312 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:44.312 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:13:44.312 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@50 -- # : 0 00:13:44.312 11:57:18 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:13:44.312 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:13:44.312 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:13:44.312 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:44.312 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:44.312 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:13:44.312 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:13:44.312 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:13:44.312 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:13:44.312 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@54 -- # have_pci_nics=0 00:13:44.312 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:13:44.312 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:13:44.312 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:44.312 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # prepare_net_devs 00:13:44.312 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # local -g is_hw=no 00:13:44.312 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # remove_target_ns 00:13:44.312 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:44.312 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:44.312 11:57:18 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:44.312 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:13:44.312 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:13:44.312 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # xtrace_disable 00:13:44.312 11:57:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@131 -- # pci_devs=() 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@131 -- # local -a pci_devs 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@132 -- # pci_net_devs=() 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@133 -- # pci_drivers=() 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@133 -- # local -A pci_drivers 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@135 -- # net_devs=() 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@135 -- # local -ga net_devs 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@136 -- # e810=() 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@136 -- # local -ga e810 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@137 -- # x722=() 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@137 -- # local -ga x722 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@138 -- # mlx=() 00:13:50.887 11:57:23 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@138 -- # local -ga mlx 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:13:50.887 11:57:23 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:50.887 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:50.887 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:13:50.887 11:57:23 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:13:50.887 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # [[ up == up ]] 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:50.888 Found net devices under 0000:86:00.0: cvl_0_0 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # [[ up == up ]] 00:13:50.888 
11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:50.888 Found net devices under 0000:86:00.1: cvl_0_1 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # is_hw=yes 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@257 -- # create_target_ns 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:13:50.888 11:57:23 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@27 -- # local -gA dev_map 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@28 -- # local -g _dev 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@44 -- # ips=() 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 
key_target=target0 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:13:50.888 11:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772161 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:13:50.888 10.0.0.1 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772162 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_1 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:13:50.888 10.0.0.2 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@38 -- # ping_ips 1 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:13:50.888 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/setup.sh@107 -- # local dev=initiator0 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:13:50.889 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:50.889 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.471 ms 00:13:50.889 00:13:50.889 --- 10.0.0.1 ping statistics --- 00:13:50.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.889 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev target0 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=target0 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 
-- # ip=10.0.0.2 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:13:50.889 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:50.889 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:13:50.889 00:13:50.889 --- 10.0.0.2 ping statistics --- 00:13:50.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.889 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # (( pair++ )) 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # return 0 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=initiator0 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 
00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=initiator1 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # return 1 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev= 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@169 -- # return 0 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev target0 00:13:50.889 11:57:24 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=target0 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:50.889 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/setup.sh@168 -- # get_net_dev target1 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=target1 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # return 1 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev= 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@169 -- # return 0 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # nvmfpid=4169906 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # waitforlisten 4169906 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 4169906 ']' 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:50.890 [2024-12-05 11:57:24.430888] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:13:50.890 [2024-12-05 11:57:24.430934] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:50.890 [2024-12-05 11:57:24.506870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.890 [2024-12-05 11:57:24.549297] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:50.890 [2024-12-05 11:57:24.549331] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:50.890 [2024-12-05 11:57:24.549338] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:50.890 [2024-12-05 11:57:24.549344] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:50.890 [2024-12-05 11:57:24.549350] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:50.890 [2024-12-05 11:57:24.549912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:50.890 [2024-12-05 11:57:24.692419] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:50.890 11:57:24 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@20 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:50.890 [2024-12-05 11:57:24.712612] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:50.890 malloc0 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@28 -- # gen_nvmf_target_json 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # config=() 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # local subsystem config 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:13:50.890 { 00:13:50.890 "params": { 00:13:50.890 "name": "Nvme$subsystem", 00:13:50.890 "trtype": "$TEST_TRANSPORT", 00:13:50.890 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:50.890 "adrfam": "ipv4", 00:13:50.890 "trsvcid": "$NVMF_PORT", 00:13:50.890 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:50.890 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:50.890 "hdgst": ${hdgst:-false}, 00:13:50.890 "ddgst": ${ddgst:-false} 00:13:50.890 }, 00:13:50.890 "method": "bdev_nvme_attach_controller" 00:13:50.890 } 00:13:50.890 EOF 00:13:50.890 )") 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # cat 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # jq . 
00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@397 -- # IFS=, 00:13:50.890 11:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:13:50.890 "params": { 00:13:50.890 "name": "Nvme1", 00:13:50.890 "trtype": "tcp", 00:13:50.890 "traddr": "10.0.0.2", 00:13:50.890 "adrfam": "ipv4", 00:13:50.890 "trsvcid": "4420", 00:13:50.890 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:50.890 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:50.890 "hdgst": false, 00:13:50.890 "ddgst": false 00:13:50.890 }, 00:13:50.890 "method": "bdev_nvme_attach_controller" 00:13:50.890 }' 00:13:50.890 [2024-12-05 11:57:24.796429] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:13:50.890 [2024-12-05 11:57:24.796474] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4170034 ] 00:13:50.890 [2024-12-05 11:57:24.870511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.890 [2024-12-05 11:57:24.911187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.890 Running I/O for 10 seconds... 
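The trace above brings up the NVMe-oF TCP target over RPC and then hands bdevperf a JSON config describing the controller to attach. The same sequence, condensed into a readable sketch: `rpc_cmd` is stubbed here with a plain echo so the sequence can be dry-run without a running nvmf_tgt (the real `rpc_cmd` helper in SPDK's test scripts talks to the target over /var/tmp/spdk.sock); the NQNs, address, port, and sizes are copied from the log itself, and the JSON template mirrors what `gen_nvmf_target_json` emits.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target setup traced above. rpc_cmd is stubbed so this
# runs anywhere; a live run would use SPDK's real rpc_cmd wrapper instead.
rpc_cmd() { echo "rpc_cmd $*"; }   # stub for illustration only

# 1. Create the TCP transport with zero-copy enabled (zcopy.sh@17).
rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
# 2. Create the subsystem and have it listen on the target-side IP (zcopy.sh@19-20).
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# 3. Back the subsystem with a 32 MiB malloc bdev as namespace 1 (zcopy.sh@24-25).
rpc_cmd bdev_malloc_create 32 4096 -b malloc0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# gen_nvmf_target_json then expands a per-subsystem template into the config
# that bdevperf reads via --json /dev/fd/62:
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config"
```

With the stub dropped and a target running, this is the state bdevperf attaches to for the 10-second verify workload whose results follow.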
00:13:53.205 8720.00 IOPS, 68.12 MiB/s
[2024-12-05T10:57:28.339Z] 8793.00 IOPS, 68.70 MiB/s
[2024-12-05T10:57:29.274Z] 8823.67 IOPS, 68.93 MiB/s
[2024-12-05T10:57:30.208Z] 8837.75 IOPS, 69.04 MiB/s
[2024-12-05T10:57:31.141Z] 8831.00 IOPS, 68.99 MiB/s
[2024-12-05T10:57:32.512Z] 8811.00 IOPS, 68.84 MiB/s
[2024-12-05T10:57:33.449Z] 8820.86 IOPS, 68.91 MiB/s
[2024-12-05T10:57:34.384Z] 8826.38 IOPS, 68.96 MiB/s
[2024-12-05T10:57:35.320Z] 8835.00 IOPS, 69.02 MiB/s
[2024-12-05T10:57:35.320Z] 8841.70 IOPS, 69.08 MiB/s
00:14:01.124 Latency(us)
00:14:01.124 [2024-12-05T10:57:35.320Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:01.124 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:14:01.124 Verification LBA range: start 0x0 length 0x1000
00:14:01.124 Nvme1n1 : 10.01 8843.24 69.09 0.00 0.00 14432.83 2262.55 23468.13
00:14:01.124 [2024-12-05T10:57:35.320Z] ===================================================================================================================
00:14:01.124 [2024-12-05T10:57:35.320Z] Total : 8843.24 69.09 0.00 0.00 14432.83 2262.55 23468.13
00:14:01.124 11:57:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@34 -- # perfpid=4171648
00:14:01.124 11:57:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@36 -- # xtrace_disable
00:14:01.124 11:57:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:14:01.124 11:57:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:14:01.124 11:57:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@32 -- # gen_nvmf_target_json
00:14:01.124 11:57:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # config=()
00:14:01.124 11:57:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # local subsystem config
00:14:01.124 11:57:35
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:14:01.124 11:57:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:14:01.124 { 00:14:01.124 "params": { 00:14:01.124 "name": "Nvme$subsystem", 00:14:01.124 "trtype": "$TEST_TRANSPORT", 00:14:01.124 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:01.124 "adrfam": "ipv4", 00:14:01.124 "trsvcid": "$NVMF_PORT", 00:14:01.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:01.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:01.124 "hdgst": ${hdgst:-false}, 00:14:01.124 "ddgst": ${ddgst:-false} 00:14:01.124 }, 00:14:01.124 "method": "bdev_nvme_attach_controller" 00:14:01.124 } 00:14:01.124 EOF 00:14:01.124 )") 00:14:01.124 [2024-12-05 11:57:35.273513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.124 [2024-12-05 11:57:35.273549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.124 11:57:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # cat 00:14:01.124 11:57:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # jq . 
00:14:01.124 11:57:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@397 -- # IFS=, 00:14:01.124 11:57:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:14:01.124 "params": { 00:14:01.124 "name": "Nvme1", 00:14:01.124 "trtype": "tcp", 00:14:01.124 "traddr": "10.0.0.2", 00:14:01.124 "adrfam": "ipv4", 00:14:01.124 "trsvcid": "4420", 00:14:01.124 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:01.124 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:01.124 "hdgst": false, 00:14:01.124 "ddgst": false 00:14:01.124 }, 00:14:01.124 "method": "bdev_nvme_attach_controller" 00:14:01.124 }' 00:14:01.124 [2024-12-05 11:57:35.285515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.124 [2024-12-05 11:57:35.285529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.124 [2024-12-05 11:57:35.297545] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.124 [2024-12-05 11:57:35.297555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.124 [2024-12-05 11:57:35.309587] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.124 [2024-12-05 11:57:35.309601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.124 [2024-12-05 11:57:35.315437] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:14:01.124 [2024-12-05 11:57:35.315483] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4171648 ] 00:14:01.124 [2024-12-05 11:57:35.321608] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.124 [2024-12-05 11:57:35.321621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.383 [2024-12-05 11:57:35.333641] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.383 [2024-12-05 11:57:35.333659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.383 [2024-12-05 11:57:35.345676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.383 [2024-12-05 11:57:35.345689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.383 [2024-12-05 11:57:35.357707] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.383 [2024-12-05 11:57:35.357720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.383 [2024-12-05 11:57:35.369739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.383 [2024-12-05 11:57:35.369750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.383 [2024-12-05 11:57:35.381771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.383 [2024-12-05 11:57:35.381783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.383 [2024-12-05 11:57:35.392932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.383 [2024-12-05 11:57:35.393800] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:14:01.383 [2024-12-05 11:57:35.393810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.383 [2024-12-05 11:57:35.405835] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.383 [2024-12-05 11:57:35.405852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.383 [2024-12-05 11:57:35.417866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.383 [2024-12-05 11:57:35.417878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.383 [2024-12-05 11:57:35.429897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.383 [2024-12-05 11:57:35.429909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.383 [2024-12-05 11:57:35.433475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.383 [2024-12-05 11:57:35.441931] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.383 [2024-12-05 11:57:35.441943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.383 [2024-12-05 11:57:35.453973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.383 [2024-12-05 11:57:35.453995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.383 [2024-12-05 11:57:35.466014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.383 [2024-12-05 11:57:35.466036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.383 [2024-12-05 11:57:35.478034] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.383 [2024-12-05 11:57:35.478047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.383 [2024-12-05 11:57:35.490065] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.383 [2024-12-05 11:57:35.490078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.661 Running I/O for 5 seconds... 00:14:02.699 16977.00 IOPS, 132.63 MiB/s [2024-12-05T10:57:36.895Z] 00:14:03.737 16992.50 IOPS, 132.75 MiB/s [2024-12-05T10:57:37.933Z] 00:14:03.737 [2024-12-05 11:57:37.829210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.737 [2024-12-05 11:57:37.829228] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.737 [2024-12-05 11:57:37.842656] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.737 [2024-12-05 11:57:37.842675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.737 [2024-12-05 11:57:37.856262] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.737 [2024-12-05 11:57:37.856280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.737 [2024-12-05 11:57:37.870056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.737 [2024-12-05 11:57:37.870074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.737 [2024-12-05 11:57:37.884250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.737 [2024-12-05 11:57:37.884270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.737 [2024-12-05 11:57:37.897918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.737 [2024-12-05 11:57:37.897938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.737 [2024-12-05 11:57:37.911705] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.737 [2024-12-05 11:57:37.911724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.737 [2024-12-05 11:57:37.925494] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.737 [2024-12-05 11:57:37.925514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.996 [2024-12-05 11:57:37.939009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.996 [2024-12-05 11:57:37.939029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:14:03.996 [2024-12-05 11:57:37.952971] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.996 [2024-12-05 11:57:37.952990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.996 [2024-12-05 11:57:37.966766] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.996 [2024-12-05 11:57:37.966785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.996 [2024-12-05 11:57:37.980191] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.996 [2024-12-05 11:57:37.980210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.997 [2024-12-05 11:57:37.994087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.997 [2024-12-05 11:57:37.994106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.997 [2024-12-05 11:57:38.007847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.997 [2024-12-05 11:57:38.007867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.997 [2024-12-05 11:57:38.021450] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.997 [2024-12-05 11:57:38.021470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.997 [2024-12-05 11:57:38.035243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.997 [2024-12-05 11:57:38.035263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.997 [2024-12-05 11:57:38.048899] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.997 [2024-12-05 11:57:38.048917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.997 [2024-12-05 11:57:38.062838] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.997 [2024-12-05 11:57:38.062857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.997 [2024-12-05 11:57:38.076576] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.997 [2024-12-05 11:57:38.076596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.997 [2024-12-05 11:57:38.090329] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.997 [2024-12-05 11:57:38.090350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.997 [2024-12-05 11:57:38.104277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.997 [2024-12-05 11:57:38.104296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.997 [2024-12-05 11:57:38.118082] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.997 [2024-12-05 11:57:38.118103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.997 [2024-12-05 11:57:38.131459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.997 [2024-12-05 11:57:38.131480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.997 [2024-12-05 11:57:38.145156] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.997 [2024-12-05 11:57:38.145176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.997 [2024-12-05 11:57:38.159361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.997 [2024-12-05 11:57:38.159387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.997 [2024-12-05 11:57:38.169756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:14:03.997 [2024-12-05 11:57:38.169776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.997 [2024-12-05 11:57:38.184234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.997 [2024-12-05 11:57:38.184253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.256 [2024-12-05 11:57:38.197559] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.256 [2024-12-05 11:57:38.197578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.256 [2024-12-05 11:57:38.211475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.256 [2024-12-05 11:57:38.211495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.256 [2024-12-05 11:57:38.225074] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.256 [2024-12-05 11:57:38.225093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.256 [2024-12-05 11:57:38.238943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.256 [2024-12-05 11:57:38.238962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.256 [2024-12-05 11:57:38.252607] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.256 [2024-12-05 11:57:38.252627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.256 [2024-12-05 11:57:38.266280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.256 [2024-12-05 11:57:38.266300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.256 [2024-12-05 11:57:38.280094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.256 
[2024-12-05 11:57:38.280113] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.256 [2024-12-05 11:57:38.294141] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.256 [2024-12-05 11:57:38.294160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.256 [2024-12-05 11:57:38.307834] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.256 [2024-12-05 11:57:38.307854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.256 [2024-12-05 11:57:38.321161] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.256 [2024-12-05 11:57:38.321182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.256 [2024-12-05 11:57:38.335093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.256 [2024-12-05 11:57:38.335113] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.256 [2024-12-05 11:57:38.348915] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.256 [2024-12-05 11:57:38.348934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.256 [2024-12-05 11:57:38.363187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.256 [2024-12-05 11:57:38.363206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.256 [2024-12-05 11:57:38.373954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.256 [2024-12-05 11:57:38.373975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.256 [2024-12-05 11:57:38.388290] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.256 [2024-12-05 11:57:38.388314] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.256 [2024-12-05 11:57:38.401927] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.256 [2024-12-05 11:57:38.401947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.256 [2024-12-05 11:57:38.415745] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.256 [2024-12-05 11:57:38.415765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.256 [2024-12-05 11:57:38.429465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.256 [2024-12-05 11:57:38.429484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.256 [2024-12-05 11:57:38.443432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.256 [2024-12-05 11:57:38.443451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.515 [2024-12-05 11:57:38.457532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.515 [2024-12-05 11:57:38.457552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.515 [2024-12-05 11:57:38.471706] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.515 [2024-12-05 11:57:38.471726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.515 [2024-12-05 11:57:38.485722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.515 [2024-12-05 11:57:38.485742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.515 [2024-12-05 11:57:38.499468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.515 [2024-12-05 11:57:38.499487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:14:04.515 [2024-12-05 11:57:38.513458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.515 [2024-12-05 11:57:38.513484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.515 [2024-12-05 11:57:38.527269] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.515 [2024-12-05 11:57:38.527289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.515 [2024-12-05 11:57:38.540942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.515 [2024-12-05 11:57:38.540962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.515 [2024-12-05 11:57:38.554435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.515 [2024-12-05 11:57:38.554454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.515 [2024-12-05 11:57:38.567987] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.515 [2024-12-05 11:57:38.568007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.515 [2024-12-05 11:57:38.581633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.515 [2024-12-05 11:57:38.581652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.515 [2024-12-05 11:57:38.595136] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.515 [2024-12-05 11:57:38.595156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.515 [2024-12-05 11:57:38.608869] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.515 [2024-12-05 11:57:38.608888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.515 [2024-12-05 11:57:38.622831] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.515 [2024-12-05 11:57:38.622850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.515 [2024-12-05 11:57:38.636540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.515 [2024-12-05 11:57:38.636559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.515 [2024-12-05 11:57:38.650364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.515 [2024-12-05 11:57:38.650393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.515 [2024-12-05 11:57:38.664281] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.515 [2024-12-05 11:57:38.664300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.515 [2024-12-05 11:57:38.677920] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.515 [2024-12-05 11:57:38.677938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.515 [2024-12-05 11:57:38.691324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.515 [2024-12-05 11:57:38.691342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.515 [2024-12-05 11:57:38.704866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.515 [2024-12-05 11:57:38.704885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.774 [2024-12-05 11:57:38.718686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.774 [2024-12-05 11:57:38.718705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.774 16999.67 IOPS, 132.81 MiB/s [2024-12-05T10:57:38.970Z] [2024-12-05 11:57:38.732264] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.774 [2024-12-05 11:57:38.732283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.774 [2024-12-05 11:57:38.745695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.774 [2024-12-05 11:57:38.745714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.774 [2024-12-05 11:57:38.760092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.774 [2024-12-05 11:57:38.760111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.774 [2024-12-05 11:57:38.771386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.775 [2024-12-05 11:57:38.771406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.775 [2024-12-05 11:57:38.785061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.775 [2024-12-05 11:57:38.785082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.775 [2024-12-05 11:57:38.798880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.775 [2024-12-05 11:57:38.798900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.775 [2024-12-05 11:57:38.812682] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.775 [2024-12-05 11:57:38.812701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.775 [2024-12-05 11:57:38.826819] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.775 [2024-12-05 11:57:38.826838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.775 [2024-12-05 11:57:38.840653] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:14:04.775 [2024-12-05 11:57:38.840672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.775 [2024-12-05 11:57:38.854312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.775 [2024-12-05 11:57:38.854332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.775 [2024-12-05 11:57:38.868200] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.775 [2024-12-05 11:57:38.868219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.775 [2024-12-05 11:57:38.882029] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.775 [2024-12-05 11:57:38.882048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.775 [2024-12-05 11:57:38.895894] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.775 [2024-12-05 11:57:38.895914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.775 [2024-12-05 11:57:38.909629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.775 [2024-12-05 11:57:38.909653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.775 [2024-12-05 11:57:38.923354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.775 [2024-12-05 11:57:38.923381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.775 [2024-12-05 11:57:38.936964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.775 [2024-12-05 11:57:38.936983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.775 [2024-12-05 11:57:38.950593] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.775 
[2024-12-05 11:57:38.950612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.775 [2024-12-05 11:57:38.964237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.775 [2024-12-05 11:57:38.964256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.033 [2024-12-05 11:57:38.978015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.033 [2024-12-05 11:57:38.978034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.033 [2024-12-05 11:57:38.991863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.033 [2024-12-05 11:57:38.991882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.033 [2024-12-05 11:57:39.005459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.033 [2024-12-05 11:57:39.005478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.033 [2024-12-05 11:57:39.019334] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.033 [2024-12-05 11:57:39.019355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.033 [2024-12-05 11:57:39.032853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.033 [2024-12-05 11:57:39.032872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.033 [2024-12-05 11:57:39.046764] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.033 [2024-12-05 11:57:39.046782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.033 [2024-12-05 11:57:39.060995] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.033 [2024-12-05 11:57:39.061014] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.033 [2024-12-05 11:57:39.071791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.033 [2024-12-05 11:57:39.071811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.033 [2024-12-05 11:57:39.085754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.033 [2024-12-05 11:57:39.085775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.033 [2024-12-05 11:57:39.099525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.033 [2024-12-05 11:57:39.099544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.033 [2024-12-05 11:57:39.112943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.033 [2024-12-05 11:57:39.112962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.034 [2024-12-05 11:57:39.126510] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.034 [2024-12-05 11:57:39.126529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.034 [2024-12-05 11:57:39.140783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.034 [2024-12-05 11:57:39.140802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.034 [2024-12-05 11:57:39.154124] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.034 [2024-12-05 11:57:39.154143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.034 [2024-12-05 11:57:39.167920] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.034 [2024-12-05 11:57:39.167939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:14:05.034 [2024-12-05 11:57:39.181175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.034 [2024-12-05 11:57:39.181194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.034 [2024-12-05 11:57:39.195383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.034 [2024-12-05 11:57:39.195420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.034 [2024-12-05 11:57:39.206154] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.034 [2024-12-05 11:57:39.206172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.034 [2024-12-05 11:57:39.219950] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.034 [2024-12-05 11:57:39.219969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.293 [2024-12-05 11:57:39.233418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.293 [2024-12-05 11:57:39.233438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.293 [2024-12-05 11:57:39.247276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.293 [2024-12-05 11:57:39.247296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.293 [2024-12-05 11:57:39.256062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.293 [2024-12-05 11:57:39.256081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.293 [2024-12-05 11:57:39.270482] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.293 [2024-12-05 11:57:39.270500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.293 [2024-12-05 11:57:39.284032] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:05.293 [2024-12-05 11:57:39.284050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:05.293
17008.25 IOPS, 132.88 MiB/s [2024-12-05T10:57:39.749Z]
16997.20 IOPS, 132.79 MiB/s [2024-12-05T10:57:40.787Z]
00:14:06.591 Latency(us) 00:14:06.591
[2024-12-05T10:57:40.787Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.591
Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:14:06.591
Nvme1n1 : 5.01 17000.64 132.82 0.00 0.00 7522.22 3120.76 19598.38 00:14:06.591
[2024-12-05T10:57:40.787Z] =================================================================================================================== 00:14:06.591
[2024-12-05T10:57:40.787Z] Total : 17000.64 132.82 0.00 0.00 7522.22 3120.76 19598.38 00:14:06.591
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 37: kill: (4171648) - No such process 00:14:06.850 11:57:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@44 -- # wait 4171648 00:14:06.850 11:57:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@47 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.850 11:57:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.850 11:57:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:06.850 11:57:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.850 11:57:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@48 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:06.850 11:57:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.850 11:57:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:06.850 delay0 00:14:06.850 11:57:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.850 11:57:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:14:06.850 11:57:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.850 11:57:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:06.850 11:57:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.850 11:57:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@51 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:14:06.850 [2024-12-05 11:57:41.046086] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:13.433 Initializing NVMe Controllers 00:14:13.433 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:13.433 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:13.433 Initialization complete. Launching workers. 00:14:13.433 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 742 00:14:13.433 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1029, failed to submit 33 00:14:13.433 success 839, unsuccessful 190, failed 0 00:14:13.433 11:57:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:14:13.433 11:57:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@55 -- # nvmftestfini 00:14:13.433 11:57:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # nvmfcleanup 00:14:13.433 11:57:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@99 -- # sync 00:14:13.433 11:57:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:14:13.433 11:57:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # set +e 00:14:13.433 11:57:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # for i in {1..20} 00:14:13.433 11:57:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:14:13.433 rmmod nvme_tcp 00:14:13.433 rmmod nvme_fabrics 00:14:13.433 rmmod nvme_keyring 00:14:13.433 11:57:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:14:13.433 11:57:47 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # set -e
00:14:13.433 11:57:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # return 0
00:14:13.433 11:57:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # '[' -n 4169906 ']'
00:14:13.433 11:57:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@337 -- # killprocess 4169906
00:14:13.433 11:57:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 4169906 ']'
00:14:13.433 11:57:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 4169906
00:14:13.433 11:57:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:14:13.433 11:57:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:13.433 11:57:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4169906
00:14:13.433 11:57:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:14:13.433 11:57:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:14:13.433 11:57:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4169906'
00:14:13.433 killing process with pid 4169906
00:14:13.433 11:57:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 4169906
00:14:13.433 11:57:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 4169906
00:14:13.433 11:57:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@339 -- # '[' '' == iso ']'
00:14:13.433 11:57:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # nvmf_fini
00:14:13.433 11:57:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@264 -- # local dev
00:14:13.433 11:57:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@267 -- # remove_target_ns
00:14:13.433 11:57:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns
00:14:13.433 11:57:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null'
00:14:13.433 11:57:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_target_ns
00:14:15.966 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@268 -- # delete_main_bridge
00:14:15.966 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]]
00:14:15.966 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@130 -- # return 0
00:14:15.966 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}"
00:14:15.966 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]]
00:14:15.966 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@275 -- # (( 4 == 3 ))
00:14:15.966 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0
00:14:15.966 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns=
00:14:15.966 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@222 -- # [[ -n '' ]]
00:14:15.966 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0'
00:14:15.966 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0
00:14:15.966 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}"
00:14:15.966 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]]
00:14:15.966 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@275 -- # (( 4 == 3 ))
00:14:15.966 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1
00:14:15.966 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns=
00:14:15.966 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@222 -- # [[ -n '' ]]
00:14:15.966 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1'
00:14:15.966 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1
00:14:15.966 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@283 -- # reset_setup_interfaces
00:14:15.966 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@41 -- # _dev=0
00:14:15.966 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@41 -- # dev_map=()
00:14:15.966 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@284 -- # iptr
00:14:15.966 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@542 -- # iptables-save
00:14:15.966 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF
00:14:15.966 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@542 -- # iptables-restore
00:14:15.966
00:14:15.966 real 0m31.595s
00:14:15.966 user 0m42.097s
00:14:15.966 sys 0m11.228s
00:14:15.966 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:15.966 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:14:15.966 ************************************
00:14:15.966 END TEST nvmf_zcopy
00:14:15.966 ************************************
00:14:15.966 11:57:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@38 -- # trap - SIGINT SIGTERM EXIT
00:14:15.966
00:14:15.966 real 4m31.068s
00:14:15.966 user 10m27.119s
00:14:15.966 sys 1m34.621s
00:14:15.966 11:57:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:15.966 11:57:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:14:15.966 ************************************
00:14:15.966 END TEST nvmf_target_core
00:14:15.966 ************************************
00:14:15.966 11:57:49 nvmf_tcp -- nvmf/nvmf.sh@11 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp
00:14:15.966 11:57:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:14:15.966 11:57:49 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:15.966 11:57:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:15.966 ************************************
00:14:15.966 START TEST nvmf_target_extra
00:14:15.966 ************************************
00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp
00:14:15.967 * Looking for test storage...
00:14:15.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version
00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-:
00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1
00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- #
IFS=.-: 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:15.967 11:57:49 
nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:15.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.967 --rc genhtml_branch_coverage=1 00:14:15.967 --rc genhtml_function_coverage=1 00:14:15.967 --rc genhtml_legend=1 00:14:15.967 --rc geninfo_all_blocks=1 00:14:15.967 --rc geninfo_unexecuted_blocks=1 00:14:15.967 00:14:15.967 ' 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:15.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.967 --rc genhtml_branch_coverage=1 00:14:15.967 --rc genhtml_function_coverage=1 00:14:15.967 --rc genhtml_legend=1 00:14:15.967 --rc geninfo_all_blocks=1 00:14:15.967 --rc geninfo_unexecuted_blocks=1 00:14:15.967 00:14:15.967 ' 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:15.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.967 --rc genhtml_branch_coverage=1 00:14:15.967 --rc genhtml_function_coverage=1 00:14:15.967 --rc genhtml_legend=1 00:14:15.967 --rc geninfo_all_blocks=1 00:14:15.967 --rc geninfo_unexecuted_blocks=1 00:14:15.967 00:14:15.967 ' 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:15.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.967 --rc genhtml_branch_coverage=1 00:14:15.967 --rc genhtml_function_coverage=1 00:14:15.967 --rc genhtml_legend=1 00:14:15.967 --rc geninfo_all_blocks=1 00:14:15.967 --rc geninfo_unexecuted_blocks=1 00:14:15.967 00:14:15.967 ' 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:14:15.967 11:57:49 
nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@50 -- # : 0 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:14:15.967 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:14:15.967 11:57:49 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@54 -- # have_pci_nics=0 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:15.967 ************************************ 00:14:15.967 START TEST nvmf_example 00:14:15.967 ************************************ 00:14:15.967 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:14:15.967 * Looking for test storage... 
00:14:15.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:15.968 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:15.968 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:14:15.968 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:15.968 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:15.968 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:15.968 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:15.968 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:15.968 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:14:15.968 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:14:15.968 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:14:15.968 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:14:15.968 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:14:15.968 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:14:15.968 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:14:15.968 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:15.968 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:14:15.968 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:14:15.968 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:14:15.968 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:15.968 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:14:15.968 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:14:15.968 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:15.968 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:14:15.968 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:14:15.968 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:14:15.968 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:14:15.968 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:15.968 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:14:15.968 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:14:15.968 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:15.968 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:15.968 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:14:15.968 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:15.968 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:15.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.968 --rc genhtml_branch_coverage=1 00:14:15.968 --rc 
genhtml_function_coverage=1 00:14:15.968 --rc genhtml_legend=1 00:14:15.968 --rc geninfo_all_blocks=1 00:14:15.968 --rc geninfo_unexecuted_blocks=1 00:14:15.968 00:14:15.968 ' 00:14:15.968 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:15.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.968 --rc genhtml_branch_coverage=1 00:14:15.968 --rc genhtml_function_coverage=1 00:14:15.968 --rc genhtml_legend=1 00:14:15.968 --rc geninfo_all_blocks=1 00:14:15.968 --rc geninfo_unexecuted_blocks=1 00:14:15.968 00:14:15.968 ' 00:14:15.968 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:15.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.968 --rc genhtml_branch_coverage=1 00:14:15.968 --rc genhtml_function_coverage=1 00:14:15.968 --rc genhtml_legend=1 00:14:15.968 --rc geninfo_all_blocks=1 00:14:15.968 --rc geninfo_unexecuted_blocks=1 00:14:15.968 00:14:15.968 ' 00:14:15.968 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:15.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.968 --rc genhtml_branch_coverage=1 00:14:15.968 --rc genhtml_function_coverage=1 00:14:15.968 --rc genhtml_legend=1 00:14:15.968 --rc geninfo_all_blocks=1 00:14:15.968 --rc geninfo_unexecuted_blocks=1 00:14:15.968 00:14:15.968 ' 00:14:15.968 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:16.227 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:14:16.227 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:16.227 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:16.227 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:16.227 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:16.227 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:16.227 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:14:16.227 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:16.227 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:14:16.227 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:16.227 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:16.227 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:16.227 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:14:16.227 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:14:16.227 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:16.227 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:16.227 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:14:16.227 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:16.227 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:16.227 11:57:50 
nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:16.227 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.227 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.227 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.227 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:14:16.228 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.228 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:14:16.228 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:14:16.228 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:16.228 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:14:16.228 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@50 -- # : 0 
00:14:16.228 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:14:16.228 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:14:16.228 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:14:16.228 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:16.228 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:16.228 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:14:16.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:14:16.228 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:14:16.228 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:14:16.228 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@54 -- # have_pci_nics=0 00:14:16.228 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:14:16.228 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:14:16.228 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:14:16.228 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:14:16.228 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:14:16.228 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:14:16.228 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:14:16.228 
11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:14:16.228 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:16.228 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:16.228 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:14:16.228 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:14:16.228 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:16.228 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # prepare_net_devs 00:14:16.228 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # local -g is_hw=no 00:14:16.228 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # remove_target_ns 00:14:16.228 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:16.228 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:16.228 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:16.228 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:14:16.228 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:14:16.228 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # xtrace_disable 00:14:16.228 11:57:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:22.815 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:22.815 11:57:55 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@131 -- # pci_devs=() 00:14:22.815 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@131 -- # local -a pci_devs 00:14:22.815 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@132 -- # pci_net_devs=() 00:14:22.815 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:14:22.815 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@133 -- # pci_drivers=() 00:14:22.815 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@133 -- # local -A pci_drivers 00:14:22.815 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@135 -- # net_devs=() 00:14:22.815 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@135 -- # local -ga net_devs 00:14:22.815 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@136 -- # e810=() 00:14:22.815 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@136 -- # local -ga e810 00:14:22.815 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@137 -- # x722=() 00:14:22.815 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@137 -- # local -ga x722 00:14:22.815 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@138 -- # mlx=() 00:14:22.815 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@138 -- # local -ga mlx 00:14:22.815 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:22.815 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:22.815 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:22.815 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:22.815 11:57:55 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:22.815 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:22.815 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:22.815 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:22.815 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:22.815 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:22.816 Found 0000:86:00.0 (0x8086 - 0x159b) 
00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:22.816 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # [[ up == up ]] 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:22.816 Found net devices under 0000:86:00.0: cvl_0_0 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # [[ up == up ]] 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:22.816 Found net devices under 0000:86:00.1: cvl_0_1 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- 
# net_devs+=("${pci_net_devs[@]}") 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # is_hw=yes 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@257 -- # create_target_ns 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo 
up' 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@27 -- # local -gA dev_map 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@28 -- # local -g _dev 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@44 -- # ips=() 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@53 -- 
# [[ tcp == rdma ]] 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:14:22.816 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:14:22.816 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:14:22.816 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:14:22.816 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:22.816 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:14:22.816 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@11 -- # local val=167772161 00:14:22.816 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:14:22.816 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:14:22.816 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:14:22.816 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@208 -- # ip addr 
add 10.0.0.1/24 dev cvl_0_0 00:14:22.816 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:14:22.816 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:14:22.816 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:14:22.816 10.0.0.1 00:14:22.816 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:14:22.816 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:14:22.816 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:22.816 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:22.816 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:14:22.816 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@11 -- # local val=167772162 00:14:22.816 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:14:22.816 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:14:22.816 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:14:22.816 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:14:22.817 11:57:56 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:14:22.817 10.0.0.2 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@38 -- # ping_ips 1 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@107 -- # local dev=initiator0 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:14:22.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:22.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.339 ms 00:14:22.817 00:14:22.817 --- 10.0.0.1 ping statistics --- 00:14:22.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:22.817 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # get_net_dev target0 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@107 -- # local dev=target0 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:14:22.817 11:57:56 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:14:22.817 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:22.817 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:14:22.817 00:14:22.817 --- 10.0.0.2 ping statistics --- 00:14:22.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:22.817 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # (( pair++ )) 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # return 0 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@107 -- # local dev=initiator0 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:14:22.817 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@107 -- # local dev=initiator1 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # return 1 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # dev= 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@169 -- # return 0 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:22.818 11:57:56 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # get_net_dev target0 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@107 -- # local dev=target0 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:14:22.818 11:57:56 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # get_net_dev target1 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@107 -- # local dev=target1 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # return 1 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # dev= 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@169 -- # return 0 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:14:22.818 11:57:56 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=4177327 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 4177327 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 4177327 ']' 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:22.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:22.818 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:23.076 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:23.076 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:14:23.076 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:14:23.076 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:23.076 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:23.076 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:23.076 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.076 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:23.076 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.076 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:14:23.076 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.076 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:23.334 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.334 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:14:23.334 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:23.334 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.334 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:23.334 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.334 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:14:23.334 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:23.334 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.334 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:23.334 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.334 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:23.334 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.334 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:23.334 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.334 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:14:23.334 11:57:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:33.302 Initializing NVMe Controllers 00:14:33.302 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:33.302 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:33.302 Initialization complete. Launching workers. 00:14:33.302 ======================================================== 00:14:33.302 Latency(us) 00:14:33.302 Device Information : IOPS MiB/s Average min max 00:14:33.302 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18276.70 71.39 3501.50 633.69 15533.82 00:14:33.302 ======================================================== 00:14:33.302 Total : 18276.70 71.39 3501.50 633.69 15533.82 00:14:33.302 00:14:33.302 11:58:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:14:33.302 11:58:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:14:33.302 11:58:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # nvmfcleanup 00:14:33.302 11:58:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@99 -- # sync 00:14:33.302 11:58:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:14:33.302 11:58:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # set +e 00:14:33.302 11:58:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # for i in {1..20} 00:14:33.302 11:58:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:14:33.302 rmmod nvme_tcp 00:14:33.560 rmmod nvme_fabrics 00:14:33.560 rmmod nvme_keyring 00:14:33.560 11:58:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:14:33.560 11:58:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # set -e 00:14:33.560 11:58:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- 
# return 0 00:14:33.561 11:58:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # '[' -n 4177327 ']' 00:14:33.561 11:58:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@337 -- # killprocess 4177327 00:14:33.561 11:58:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 4177327 ']' 00:14:33.561 11:58:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 4177327 00:14:33.561 11:58:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:14:33.561 11:58:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:33.561 11:58:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4177327 00:14:33.561 11:58:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:14:33.561 11:58:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:14:33.561 11:58:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4177327' 00:14:33.561 killing process with pid 4177327 00:14:33.561 11:58:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 4177327 00:14:33.561 11:58:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 4177327 00:14:33.820 nvmf threads initialize successfully 00:14:33.820 bdev subsystem init successfully 00:14:33.820 created a nvmf target service 00:14:33.820 create targets's poll groups done 00:14:33.820 all subsystems of target started 00:14:33.820 nvmf target is running 00:14:33.820 all subsystems of target stopped 00:14:33.820 destroy targets's poll groups done 00:14:33.820 destroyed the nvmf target service 00:14:33.820 bdev subsystem finish successfully 00:14:33.820 nvmf threads destroy successfully 00:14:33.820 11:58:07 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:14:33.820 11:58:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # nvmf_fini 00:14:33.820 11:58:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@264 -- # local dev 00:14:33.820 11:58:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@267 -- # remove_target_ns 00:14:33.820 11:58:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:33.820 11:58:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:33.820 11:58:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:35.745 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@268 -- # delete_main_bridge 00:14:35.745 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:14:35.745 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@130 -- # return 0 00:14:35.745 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:14:35.745 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:14:35.745 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:14:35.745 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:14:35.745 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:14:35.745 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:14:35.745 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:14:35.745 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@224 -- # ip addr flush dev 
cvl_0_0 00:14:35.745 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:14:35.745 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:14:35.745 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:14:35.745 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:14:35.745 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:14:35.745 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:14:35.745 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:14:35.745 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:14:35.745 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:14:35.745 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@41 -- # _dev=0 00:14:35.745 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@41 -- # dev_map=() 00:14:35.745 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@284 -- # iptr 00:14:35.745 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@542 -- # iptables-save 00:14:35.745 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:14:35.745 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@542 -- # iptables-restore 00:14:35.745 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:14:35.745 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:35.745 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:35.745 00:14:35.745 real 
0m19.914s 00:14:35.745 user 0m45.916s 00:14:35.745 sys 0m6.197s 00:14:35.745 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:35.745 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:35.745 ************************************ 00:14:35.745 END TEST nvmf_example 00:14:35.745 ************************************ 00:14:36.019 11:58:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:14:36.019 11:58:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:36.019 11:58:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:36.019 11:58:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:36.019 ************************************ 00:14:36.019 START TEST nvmf_filesystem 00:14:36.019 ************************************ 00:14:36.019 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:14:36.019 * Looking for test storage... 
00:14:36.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:36.019 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:36.019 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:14:36.019 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:36.019 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:36.019 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:36.019 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:36.019 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:36.019 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:14:36.019 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:14:36.019 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:14:36.019 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:14:36.019 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:14:36.019 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:14:36.019 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:14:36.019 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:36.019 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:14:36.019 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:14:36.019 
11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:36.019 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:36.019 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:14:36.019 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:14:36.019 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:36.019 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:14:36.019 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:14:36.019 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:14:36.019 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:14:36.019 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:36.019 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:14:36.019 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:14:36.019 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:36.019 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:36.019 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:14:36.019 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:36.019 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:36.019 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:36.019 --rc genhtml_branch_coverage=1 00:14:36.019 --rc genhtml_function_coverage=1 00:14:36.019 --rc genhtml_legend=1 00:14:36.019 --rc geninfo_all_blocks=1 00:14:36.019 --rc geninfo_unexecuted_blocks=1 00:14:36.019 00:14:36.019 ' 00:14:36.019 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:36.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.019 --rc genhtml_branch_coverage=1 00:14:36.019 --rc genhtml_function_coverage=1 00:14:36.019 --rc genhtml_legend=1 00:14:36.019 --rc geninfo_all_blocks=1 00:14:36.019 --rc geninfo_unexecuted_blocks=1 00:14:36.019 00:14:36.019 ' 00:14:36.019 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:36.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.019 --rc genhtml_branch_coverage=1 00:14:36.019 --rc genhtml_function_coverage=1 00:14:36.019 --rc genhtml_legend=1 00:14:36.019 --rc geninfo_all_blocks=1 00:14:36.019 --rc geninfo_unexecuted_blocks=1 00:14:36.019 00:14:36.019 ' 00:14:36.019 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:36.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.019 --rc genhtml_branch_coverage=1 00:14:36.019 --rc genhtml_function_coverage=1 00:14:36.019 --rc genhtml_legend=1 00:14:36.019 --rc geninfo_all_blocks=1 00:14:36.019 --rc geninfo_unexecuted_blocks=1 00:14:36.019 00:14:36.019 ' 00:14:36.019 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:14:36.019 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:14:36.019 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:14:36.019 11:58:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:14:36.020 11:58:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:14:36.020 11:58:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:14:36.020 11:58:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:14:36.020 11:58:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:14:36.020 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:14:36.021 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:14:36.021 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:14:36.021 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:14:36.021 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:14:36.021 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:14:36.021 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:14:36.021 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:14:36.021 
11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:14:36.021 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:14:36.021 #define SPDK_CONFIG_H 00:14:36.021 #define SPDK_CONFIG_AIO_FSDEV 1 00:14:36.021 #define SPDK_CONFIG_APPS 1 00:14:36.021 #define SPDK_CONFIG_ARCH native 00:14:36.021 #undef SPDK_CONFIG_ASAN 00:14:36.021 #undef SPDK_CONFIG_AVAHI 00:14:36.021 #undef SPDK_CONFIG_CET 00:14:36.021 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:14:36.021 #define SPDK_CONFIG_COVERAGE 1 00:14:36.021 #define SPDK_CONFIG_CROSS_PREFIX 00:14:36.021 #undef SPDK_CONFIG_CRYPTO 00:14:36.021 #undef SPDK_CONFIG_CRYPTO_MLX5 00:14:36.021 #undef SPDK_CONFIG_CUSTOMOCF 00:14:36.021 #undef SPDK_CONFIG_DAOS 00:14:36.021 #define SPDK_CONFIG_DAOS_DIR 00:14:36.021 #define SPDK_CONFIG_DEBUG 1 00:14:36.021 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:14:36.021 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:14:36.021 #define SPDK_CONFIG_DPDK_INC_DIR 00:14:36.021 #define SPDK_CONFIG_DPDK_LIB_DIR 00:14:36.021 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:14:36.021 #undef SPDK_CONFIG_DPDK_UADK 00:14:36.021 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:14:36.021 #define SPDK_CONFIG_EXAMPLES 1 00:14:36.021 #undef SPDK_CONFIG_FC 00:14:36.021 #define SPDK_CONFIG_FC_PATH 00:14:36.021 #define SPDK_CONFIG_FIO_PLUGIN 1 00:14:36.021 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:14:36.021 #define SPDK_CONFIG_FSDEV 1 00:14:36.021 #undef SPDK_CONFIG_FUSE 00:14:36.021 #undef SPDK_CONFIG_FUZZER 00:14:36.021 #define SPDK_CONFIG_FUZZER_LIB 00:14:36.021 #undef SPDK_CONFIG_GOLANG 00:14:36.021 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:14:36.021 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:14:36.021 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:14:36.021 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:14:36.021 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:14:36.021 #undef SPDK_CONFIG_HAVE_LIBBSD 00:14:36.021 #undef SPDK_CONFIG_HAVE_LZ4 00:14:36.021 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:14:36.021 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:14:36.021 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:14:36.021 #define SPDK_CONFIG_IDXD 1 00:14:36.021 #define SPDK_CONFIG_IDXD_KERNEL 1 00:14:36.021 #undef SPDK_CONFIG_IPSEC_MB 00:14:36.021 #define SPDK_CONFIG_IPSEC_MB_DIR 00:14:36.021 #define SPDK_CONFIG_ISAL 1 00:14:36.021 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:14:36.021 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:14:36.021 #define SPDK_CONFIG_LIBDIR 00:14:36.021 #undef SPDK_CONFIG_LTO 00:14:36.021 #define SPDK_CONFIG_MAX_LCORES 128 00:14:36.021 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:14:36.021 #define SPDK_CONFIG_NVME_CUSE 1 00:14:36.021 #undef SPDK_CONFIG_OCF 00:14:36.021 #define SPDK_CONFIG_OCF_PATH 00:14:36.021 #define SPDK_CONFIG_OPENSSL_PATH 00:14:36.021 #undef SPDK_CONFIG_PGO_CAPTURE 00:14:36.021 #define SPDK_CONFIG_PGO_DIR 00:14:36.021 #undef SPDK_CONFIG_PGO_USE 00:14:36.021 #define SPDK_CONFIG_PREFIX /usr/local 00:14:36.021 #undef SPDK_CONFIG_RAID5F 00:14:36.021 #undef SPDK_CONFIG_RBD 00:14:36.021 #define SPDK_CONFIG_RDMA 1 00:14:36.021 #define SPDK_CONFIG_RDMA_PROV verbs 00:14:36.021 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:14:36.021 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:14:36.021 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:14:36.021 #define SPDK_CONFIG_SHARED 1 00:14:36.021 #undef SPDK_CONFIG_SMA 00:14:36.021 #define SPDK_CONFIG_TESTS 1 00:14:36.021 #undef SPDK_CONFIG_TSAN 00:14:36.021 #define SPDK_CONFIG_UBLK 1 00:14:36.021 #define SPDK_CONFIG_UBSAN 1 00:14:36.021 #undef SPDK_CONFIG_UNIT_TESTS 00:14:36.021 #undef SPDK_CONFIG_URING 00:14:36.021 #define SPDK_CONFIG_URING_PATH 00:14:36.021 #undef SPDK_CONFIG_URING_ZNS 00:14:36.021 #undef SPDK_CONFIG_USDT 00:14:36.021 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:14:36.021 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:14:36.021 #define SPDK_CONFIG_VFIO_USER 1 00:14:36.021 #define SPDK_CONFIG_VFIO_USER_DIR 00:14:36.021 #define SPDK_CONFIG_VHOST 1 00:14:36.021 #define SPDK_CONFIG_VIRTIO 1 00:14:36.021 #undef SPDK_CONFIG_VTUNE 00:14:36.021 #define SPDK_CONFIG_VTUNE_DIR 00:14:36.021 #define SPDK_CONFIG_WERROR 1 00:14:36.021 #define SPDK_CONFIG_WPDK_DIR 00:14:36.021 #undef SPDK_CONFIG_XNVME 00:14:36.021 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:14:36.021 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:14:36.021 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:36.021 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:14:36.021 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:36.021 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:36.021 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:36.021 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:14:36.021 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.021 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.021 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:14:36.021 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.021 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:14:36.021 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:14:36.021 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:14:36.021 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:14:36.021 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:14:36.021 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:14:36.021 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:14:36.021 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:14:36.021 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:14:36.299 11:58:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:14:36.299 
11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:14:36.299 11:58:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:14:36.299 
11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:14:36.299 11:58:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:14:36.299 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 4179729 ]] 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 4179729 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.tWNTmI 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:14:36.300 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.tWNTmI/tests/target /tmp/spdk.tWNTmI 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=189791780864 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=195963969536 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6172188672 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97975128064 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981984768 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6856704 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39169753088 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=39192797184 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23044096 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97981579264 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981984768 00:14:36.301 11:58:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=405504 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19596382208 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19596394496 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:14:36.301 * Looking for test storage... 
00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=189791780864 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8386781184 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:36.301 11:58:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:36.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:14:36.301 11:58:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:36.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.301 --rc genhtml_branch_coverage=1 00:14:36.301 --rc genhtml_function_coverage=1 00:14:36.301 --rc genhtml_legend=1 00:14:36.301 --rc geninfo_all_blocks=1 00:14:36.301 --rc geninfo_unexecuted_blocks=1 00:14:36.301 00:14:36.301 ' 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:36.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.301 --rc genhtml_branch_coverage=1 00:14:36.301 --rc genhtml_function_coverage=1 00:14:36.301 --rc genhtml_legend=1 00:14:36.301 --rc geninfo_all_blocks=1 00:14:36.301 --rc geninfo_unexecuted_blocks=1 00:14:36.301 00:14:36.301 ' 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:36.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.301 --rc genhtml_branch_coverage=1 00:14:36.301 --rc genhtml_function_coverage=1 00:14:36.301 --rc genhtml_legend=1 00:14:36.301 --rc geninfo_all_blocks=1 00:14:36.301 --rc geninfo_unexecuted_blocks=1 00:14:36.301 00:14:36.301 ' 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:36.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.301 --rc genhtml_branch_coverage=1 00:14:36.301 --rc genhtml_function_coverage=1 00:14:36.301 --rc genhtml_legend=1 00:14:36.301 --rc geninfo_all_blocks=1 00:14:36.301 --rc geninfo_unexecuted_blocks=1 00:14:36.301 00:14:36.301 ' 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:36.301 11:58:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:14:36.301 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:36.302 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:36.302 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:36.302 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:14:36.302 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:14:36.302 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:36.302 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:36.302 11:58:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:14:36.302 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:36.302 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:36.302 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:36.302 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.302 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.302 11:58:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.302 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:14:36.302 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.302 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:14:36.302 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:14:36.302 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@7 -- # 
NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:36.302 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:14:36.302 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@50 -- # : 0 00:14:36.302 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:14:36.302 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:14:36.302 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:14:36.302 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:36.302 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:36.302 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:14:36.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:14:36.302 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:14:36.302 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:14:36.302 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@54 -- # have_pci_nics=0 00:14:36.302 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:14:36.302 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:36.302 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:14:36.302 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:14:36.302 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:36.302 11:58:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # prepare_net_devs 00:14:36.302 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # local -g is_hw=no 00:14:36.302 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # remove_target_ns 00:14:36.302 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:36.302 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:36.302 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:36.302 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:14:36.302 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:14:36.302 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # xtrace_disable 00:14:36.302 11:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@131 -- # pci_devs=() 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@131 -- # local -a pci_devs 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@132 -- # pci_net_devs=() 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@133 -- # pci_drivers=() 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@133 -- # local -A pci_drivers 00:14:42.992 11:58:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@135 -- # net_devs=() 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@135 -- # local -ga net_devs 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@136 -- # e810=() 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@136 -- # local -ga e810 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@137 -- # x722=() 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@137 -- # local -ga x722 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@138 -- # mlx=() 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@138 -- # local -ga mlx 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@156 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:42.992 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@194 -- # [[ tcp == 
rdma ]] 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:42.992 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # [[ up == up ]] 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:42.992 Found net devices under 0000:86:00.0: cvl_0_0 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # [[ up == up ]] 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:42.992 Found net devices under 0000:86:00.1: cvl_0_1 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # is_hw=yes 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # [[ yes == yes ]] 
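The discovery loop traced above globs sysfs to find the net interfaces behind each supported PCI address (here yielding `cvl_0_0` and `cvl_0_1`, hence `is_hw=yes`). A minimal sketch of that lookup; `pci_net_devs_for` is a hypothetical helper name, the real logic lives in `test/nvmf/common.sh`:

```shell
#!/usr/bin/env bash
# For a PCI address, list the kernel network interfaces registered under
# it in sysfs, as the gather_supported_nvmf_pci_devs trace above does.
pci_net_devs_for() {
    local pci=$1 devs=()
    # nullglob so a device with no net/ entries yields an empty array
    shopt -s nullglob
    devs=("/sys/bus/pci/devices/$pci/net/"*)
    shopt -u nullglob
    # strip the sysfs path prefix, keeping only the interface names
    if ((${#devs[@]})); then
        printf '%s\n' "${devs[@]##*/}"
    fi
}

# A PCI address with no net devices (or that does not exist) prints nothing.
pci_net_devs_for 0000:ff:1f.7
```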
00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@257 -- # create_target_ns 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 
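The `create_target_ns` steps traced above reduce to three operations: create the namespace, record the `ip netns exec` wrapper used for every later target-side command, and bring up loopback inside it. A condensed sketch under the assumption of a dry run (namespace creation needs root, so `RUN` defaults to `echo`):

```shell
#!/usr/bin/env bash
# Sketch of create_target_ns as seen in the trace. Set RUN= (empty) and
# run as root to actually execute the commands instead of printing them.
RUN=${RUN-echo}

create_target_ns() {
    local ns=$1
    $RUN ip netns add "$ns"
    # every later target-side command is prefixed with this wrapper array
    NVMF_TARGET_NS_CMD=(ip netns exec "$ns")
    # bring up loopback inside the new namespace
    $RUN "${NVMF_TARGET_NS_CMD[@]}" ip link set lo up
}

create_target_ns nvmf_ns_spdk
```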
00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@27 -- # local -gA dev_map 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@28 -- # local -g _dev 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@44 -- # ips=() 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:14:42.992 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@64 -- # target=cvl_0_1 
00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@11 -- # local val=167772161 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@210 -- # echo 10.0.0.1 
00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:14:42.993 10.0.0.1 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@11 -- # local val=167772162 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:14:42.993 10.0.0.2 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@75 -- # set_up cvl_0_0 
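The `set_ip` calls above derive dotted-quad addresses from the integer `ip_pool` (0x0a000001 = 167772161). A minimal reimplementation of the `val_to_ip` helper seen in the trace, using bit shifts to split the 32-bit value into octets:

```shell
#!/usr/bin/env bash
# Convert a 32-bit integer to a dotted-quad IPv4 address, as val_to_ip
# does in nvmf/setup.sh before the ip addr add / tee ifalias steps.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) $((  val        & 0xff ))
}

val_to_ip 167772161   # initiator address: 10.0.0.1
val_to_ip 167772162   # target address:    10.0.0.2
```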
00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:14:42.993 11:58:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@38 -- # ping_ips 1 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@107 -- # local dev=initiator0 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@110 -- # echo 
cvl_0_0 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:14:42.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:42.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.520 ms 00:14:42.993 00:14:42.993 --- 10.0.0.1 ping statistics --- 00:14:42.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.993 rtt min/avg/max/mdev = 0.520/0.520/0.520/0.000 ms 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # get_net_dev target0 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@107 -- # local dev=target0 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:14:42.993 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:14:42.994 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:42.994 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:14:42.994 00:14:42.994 --- 10.0.0.2 ping statistics --- 00:14:42.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.994 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # (( pair++ )) 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # return 0 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@331 -- # 
NVMF_TARGET_INTERFACE=cvl_0_1 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@107 -- # local dev=initiator0 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:14:42.994 11:58:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@107 -- # local dev=initiator1 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # return 1 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # dev= 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@169 -- # return 0 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # get_net_dev target0 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@107 -- # local dev=target0 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:14:42.994 11:58:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # get_net_dev target1 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@107 -- # local dev=target1 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # return 1 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # dev= 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@169 -- # return 0 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 
-- # modprobe nvme-tcp 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:42.994 ************************************ 00:14:42.994 START TEST nvmf_filesystem_no_in_capsule 00:14:42.994 ************************************ 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@328 -- # nvmfpid=4183010 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@329 -- # waitforlisten 4183010 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 4183010 ']' 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:42.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:42.994 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:42.994 [2024-12-05 11:58:16.670784] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:14:42.994 [2024-12-05 11:58:16.670826] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:42.994 [2024-12-05 11:58:16.733266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:42.994 [2024-12-05 11:58:16.777158] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:42.995 [2024-12-05 11:58:16.777193] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:42.995 [2024-12-05 11:58:16.777201] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:42.995 [2024-12-05 11:58:16.777207] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:42.995 [2024-12-05 11:58:16.777212] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:42.995 [2024-12-05 11:58:16.778679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:42.995 [2024-12-05 11:58:16.778790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:42.995 [2024-12-05 11:58:16.778889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:42.995 [2024-12-05 11:58:16.778890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.995 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:42.995 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:14:42.995 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:14:42.995 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:42.995 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:42.995 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:42.995 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:14:42.995 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:14:42.995 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.995 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:42.995 [2024-12-05 11:58:16.916041] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:42.995 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.995 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:14:42.995 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.995 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:42.995 Malloc1 00:14:42.995 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.995 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:42.995 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.995 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:42.995 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.995 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:42.995 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.995 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:42.995 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.995 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:42.995 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.995 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:42.995 [2024-12-05 11:58:17.065096] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:42.995 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.995 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:14:42.995 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:14:42.995 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:14:42.995 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:14:42.995 11:58:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:14:42.995 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:14:42.995 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.995 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:42.995 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.995 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:14:42.995 { 00:14:42.995 "name": "Malloc1", 00:14:42.995 "aliases": [ 00:14:42.995 "0863f420-dde5-4e98-a072-0afa6958c5da" 00:14:42.995 ], 00:14:42.995 "product_name": "Malloc disk", 00:14:42.995 "block_size": 512, 00:14:42.995 "num_blocks": 1048576, 00:14:42.995 "uuid": "0863f420-dde5-4e98-a072-0afa6958c5da", 00:14:42.995 "assigned_rate_limits": { 00:14:42.995 "rw_ios_per_sec": 0, 00:14:42.995 "rw_mbytes_per_sec": 0, 00:14:42.995 "r_mbytes_per_sec": 0, 00:14:42.995 "w_mbytes_per_sec": 0 00:14:42.995 }, 00:14:42.995 "claimed": true, 00:14:42.995 "claim_type": "exclusive_write", 00:14:42.995 "zoned": false, 00:14:42.995 "supported_io_types": { 00:14:42.995 "read": true, 00:14:42.995 "write": true, 00:14:42.995 "unmap": true, 00:14:42.995 "flush": true, 00:14:42.995 "reset": true, 00:14:42.995 "nvme_admin": false, 00:14:42.995 "nvme_io": false, 00:14:42.995 "nvme_io_md": false, 00:14:42.995 "write_zeroes": true, 00:14:42.995 "zcopy": true, 00:14:42.995 "get_zone_info": false, 00:14:42.995 "zone_management": false, 00:14:42.995 "zone_append": false, 00:14:42.995 "compare": false, 00:14:42.995 "compare_and_write": 
false, 00:14:42.995 "abort": true, 00:14:42.995 "seek_hole": false, 00:14:42.995 "seek_data": false, 00:14:42.995 "copy": true, 00:14:42.995 "nvme_iov_md": false 00:14:42.995 }, 00:14:42.995 "memory_domains": [ 00:14:42.995 { 00:14:42.995 "dma_device_id": "system", 00:14:42.995 "dma_device_type": 1 00:14:42.995 }, 00:14:42.995 { 00:14:42.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:42.995 "dma_device_type": 2 00:14:42.995 } 00:14:42.995 ], 00:14:42.995 "driver_specific": {} 00:14:42.995 } 00:14:42.995 ]' 00:14:42.995 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:14:42.995 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:14:42.995 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:14:42.995 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:14:42.995 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:14:42.995 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:14:42.995 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:14:42.995 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:44.370 11:58:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:14:44.370 11:58:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:14:44.371 11:58:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:44.371 11:58:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:44.371 11:58:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:14:46.271 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:46.271 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:46.271 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:46.271 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:46.271 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:46.271 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:14:46.271 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:14:46.271 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:14:46.271 11:58:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:14:46.271 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:14:46.271 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:14:46.271 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:14:46.271 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:14:46.271 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:14:46.271 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:14:46.271 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:14:46.271 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:14:46.528 11:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:14:47.458 11:58:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:14:48.388 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:14:48.388 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:14:48.388 11:58:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:48.388 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:48.388 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:48.388 ************************************ 00:14:48.388 START TEST filesystem_ext4 00:14:48.388 ************************************ 00:14:48.388 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:14:48.388 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:14:48.388 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:48.388 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:14:48.388 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:14:48.388 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:14:48.389 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:14:48.389 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:14:48.389 11:58:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:14:48.389 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:14:48.389 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:14:48.389 mke2fs 1.47.0 (5-Feb-2023) 00:14:48.389 Discarding device blocks: 0/522240 done 00:14:48.389 Creating filesystem with 522240 1k blocks and 130560 inodes 00:14:48.389 Filesystem UUID: b8ae9035-385c-4d27-9548-c6390668fd18 00:14:48.389 Superblock backups stored on blocks: 00:14:48.389 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:14:48.389 00:14:48.389 Allocating group tables: 0/64 done 00:14:48.389 Writing inode tables: 0/64 done 00:14:48.645 Creating journal (8192 blocks): done 00:14:48.645 Writing superblocks and filesystem accounting information: 0/64 done 00:14:48.645 00:14:48.645 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:14:48.645 11:58:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:53.929 11:58:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:53.929 11:58:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:14:53.929 11:58:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:53.929 11:58:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:14:53.929 11:58:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:14:53.929 11:58:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:53.929 11:58:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 4183010 00:14:53.929 11:58:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:53.929 11:58:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:53.929 11:58:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:53.929 11:58:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:54.188 00:14:54.188 real 0m5.689s 00:14:54.188 user 0m0.030s 00:14:54.188 sys 0m0.067s 00:14:54.188 11:58:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:54.188 11:58:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:14:54.188 ************************************ 00:14:54.188 END TEST filesystem_ext4 00:14:54.188 ************************************ 00:14:54.188 11:58:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:14:54.188 
11:58:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:54.188 11:58:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:54.188 11:58:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:54.188 ************************************ 00:14:54.188 START TEST filesystem_btrfs 00:14:54.188 ************************************ 00:14:54.188 11:58:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:14:54.188 11:58:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:14:54.188 11:58:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:54.188 11:58:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:14:54.188 11:58:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:14:54.188 11:58:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:14:54.188 11:58:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:14:54.188 11:58:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:14:54.188 11:58:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:14:54.188 11:58:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:14:54.188 11:58:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:14:54.446 btrfs-progs v6.8.1 00:14:54.447 See https://btrfs.readthedocs.io for more information. 00:14:54.447 00:14:54.447 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:14:54.447 NOTE: several default settings have changed in version 5.15, please make sure 00:14:54.447 this does not affect your deployments: 00:14:54.447 - DUP for metadata (-m dup) 00:14:54.447 - enabled no-holes (-O no-holes) 00:14:54.447 - enabled free-space-tree (-R free-space-tree) 00:14:54.447 00:14:54.447 Label: (null) 00:14:54.447 UUID: 407c84c1-9873-45f1-82c0-2978e4eaea75 00:14:54.447 Node size: 16384 00:14:54.447 Sector size: 4096 (CPU page size: 4096) 00:14:54.447 Filesystem size: 510.00MiB 00:14:54.447 Block group profiles: 00:14:54.447 Data: single 8.00MiB 00:14:54.447 Metadata: DUP 32.00MiB 00:14:54.447 System: DUP 8.00MiB 00:14:54.447 SSD detected: yes 00:14:54.447 Zoned device: no 00:14:54.447 Features: extref, skinny-metadata, no-holes, free-space-tree 00:14:54.447 Checksum: crc32c 00:14:54.447 Number of devices: 1 00:14:54.447 Devices: 00:14:54.447 ID SIZE PATH 00:14:54.447 1 510.00MiB /dev/nvme0n1p1 00:14:54.447 00:14:54.447 11:58:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:14:54.447 11:58:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:55.382 11:58:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:55.382 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:14:55.382 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:55.382 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:14:55.382 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:14:55.382 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:55.382 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 4183010 00:14:55.382 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:55.382 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:55.382 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:55.382 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:55.382 00:14:55.382 real 0m1.279s 00:14:55.382 user 0m0.035s 00:14:55.382 sys 0m0.102s 00:14:55.382 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:55.382 
11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:14:55.382 ************************************ 00:14:55.382 END TEST filesystem_btrfs 00:14:55.382 ************************************ 00:14:55.382 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:14:55.382 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:55.382 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:55.382 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:55.382 ************************************ 00:14:55.382 START TEST filesystem_xfs 00:14:55.382 ************************************ 00:14:55.382 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:14:55.382 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:14:55.382 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:55.382 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:14:55.382 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:14:55.382 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:14:55.382 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:14:55.382 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:14:55.382 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:14:55.382 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:14:55.382 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:14:55.641 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:14:55.641 = sectsz=512 attr=2, projid32bit=1 00:14:55.641 = crc=1 finobt=1, sparse=1, rmapbt=0 00:14:55.641 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:14:55.641 data = bsize=4096 blocks=130560, imaxpct=25 00:14:55.641 = sunit=0 swidth=0 blks 00:14:55.641 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:14:55.641 log =internal log bsize=4096 blocks=16384, version=2 00:14:55.641 = sectsz=512 sunit=0 blks, lazy-count=1 00:14:55.641 realtime =none extsz=4096 blocks=0, rtextents=0 00:14:56.577 Discarding blocks...Done. 
00:14:56.577 11:58:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:14:56.577 11:58:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:58.479 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:58.479 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:14:58.479 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:58.479 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:14:58.479 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:14:58.479 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:58.479 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 4183010 00:14:58.479 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:58.479 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:58.479 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:58.479 11:58:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:58.479 00:14:58.480 real 0m2.861s 00:14:58.480 user 0m0.031s 00:14:58.480 sys 0m0.066s 00:14:58.480 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:58.480 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:14:58.480 ************************************ 00:14:58.480 END TEST filesystem_xfs 00:14:58.480 ************************************ 00:14:58.480 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:14:58.739 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:14:58.739 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:58.739 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.739 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:58.739 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:14:58.739 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:58.739 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:58.739 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:58.739 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:58.739 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:14:58.739 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:58.739 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.739 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:58.739 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.739 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:58.739 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 4183010 00:14:58.739 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 4183010 ']' 00:14:58.739 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 4183010 00:14:58.739 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:14:58.739 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:58.739 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4183010 00:14:58.998 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:58.998 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:58.998 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4183010' 00:14:58.998 killing process with pid 4183010 00:14:58.998 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 4183010 00:14:58.998 11:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 4183010 00:14:59.257 11:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:14:59.257 00:14:59.257 real 0m16.665s 00:14:59.257 user 1m5.676s 00:14:59.257 sys 0m1.332s 00:14:59.257 11:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:59.257 11:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:59.257 ************************************ 00:14:59.257 END TEST nvmf_filesystem_no_in_capsule 00:14:59.257 ************************************ 00:14:59.257 11:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:14:59.257 11:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:59.257 11:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:59.257 11:58:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:59.257 ************************************ 00:14:59.257 START TEST nvmf_filesystem_in_capsule 00:14:59.257 ************************************ 00:14:59.257 11:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:14:59.257 11:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:14:59.257 11:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:14:59.257 11:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:14:59.257 11:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:59.257 11:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:59.257 11:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@328 -- # nvmfpid=4186002 00:14:59.257 11:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@329 -- # waitforlisten 4186002 00:14:59.257 11:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:59.257 11:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 4186002 ']' 00:14:59.257 11:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.257 11:58:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:59.258 11:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.258 11:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:59.258 11:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:59.258 [2024-12-05 11:58:33.414179] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:14:59.258 [2024-12-05 11:58:33.414224] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.516 [2024-12-05 11:58:33.494457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:59.516 [2024-12-05 11:58:33.532941] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:59.516 [2024-12-05 11:58:33.532980] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:59.516 [2024-12-05 11:58:33.532987] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:59.516 [2024-12-05 11:58:33.532992] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:59.516 [2024-12-05 11:58:33.532997] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
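`nvmf_tgt` is launched here with `-m 0xF`, and the reactor notices that follow show one reactor per bit of that mask (cores 0 through 3). A small sketch decoding such a DPDK/SPDK core mask into core IDs (the helper name is hypothetical; this is illustration, not a function from the SPDK scripts):

```shell
# Decode a core mask like the '-m 0xF' passed to nvmf_tgt above
# into the list of core IDs that will each run a reactor.
mask_to_cores() {
    local mask=$(( $1 ))   # arithmetic expansion accepts 0x-prefixed hex
    local core=0 cores=""
    while [ "$mask" -ne 0 ]; do
        if [ $(( mask & 1 )) -eq 1 ]; then
            cores="$cores $core"
        fi
        mask=$(( mask >> 1 ))
        core=$(( core + 1 ))
    done
    echo "${cores# }"
}
```

For `0xF` this yields `0 1 2 3`, matching the four "Reactor started on core" lines in the log (the cores report in whatever order the reactors come up, hence the shuffled ordering above).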
00:14:59.516 [2024-12-05 11:58:33.534598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.516 [2024-12-05 11:58:33.534707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:59.516 [2024-12-05 11:58:33.534790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.516 [2024-12-05 11:58:33.534791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:00.082 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:00.082 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:15:00.082 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:15:00.082 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:00.082 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:00.082 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:00.082 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:15:00.082 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:15:00.082 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.082 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:00.082 [2024-12-05 11:58:34.274559] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:00.341 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.341 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:15:00.341 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.341 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:00.341 Malloc1 00:15:00.341 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.341 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:00.341 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.341 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:00.341 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.341 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:00.341 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.341 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:00.341 11:58:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.341 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:00.341 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.341 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:00.341 [2024-12-05 11:58:34.430535] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:00.341 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.341 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:15:00.341 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:15:00.341 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:00.341 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:15:00.341 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:15:00.341 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:15:00.341 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.341 11:58:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:00.341 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.341 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:00.341 { 00:15:00.341 "name": "Malloc1", 00:15:00.341 "aliases": [ 00:15:00.341 "08ebfd7e-3b8f-4673-839e-9870f9a35c83" 00:15:00.341 ], 00:15:00.341 "product_name": "Malloc disk", 00:15:00.341 "block_size": 512, 00:15:00.341 "num_blocks": 1048576, 00:15:00.341 "uuid": "08ebfd7e-3b8f-4673-839e-9870f9a35c83", 00:15:00.341 "assigned_rate_limits": { 00:15:00.341 "rw_ios_per_sec": 0, 00:15:00.341 "rw_mbytes_per_sec": 0, 00:15:00.341 "r_mbytes_per_sec": 0, 00:15:00.341 "w_mbytes_per_sec": 0 00:15:00.341 }, 00:15:00.341 "claimed": true, 00:15:00.341 "claim_type": "exclusive_write", 00:15:00.341 "zoned": false, 00:15:00.341 "supported_io_types": { 00:15:00.341 "read": true, 00:15:00.341 "write": true, 00:15:00.341 "unmap": true, 00:15:00.341 "flush": true, 00:15:00.341 "reset": true, 00:15:00.341 "nvme_admin": false, 00:15:00.341 "nvme_io": false, 00:15:00.341 "nvme_io_md": false, 00:15:00.341 "write_zeroes": true, 00:15:00.341 "zcopy": true, 00:15:00.341 "get_zone_info": false, 00:15:00.341 "zone_management": false, 00:15:00.341 "zone_append": false, 00:15:00.341 "compare": false, 00:15:00.341 "compare_and_write": false, 00:15:00.341 "abort": true, 00:15:00.341 "seek_hole": false, 00:15:00.341 "seek_data": false, 00:15:00.341 "copy": true, 00:15:00.341 "nvme_iov_md": false 00:15:00.341 }, 00:15:00.341 "memory_domains": [ 00:15:00.341 { 00:15:00.341 "dma_device_id": "system", 00:15:00.341 "dma_device_type": 1 00:15:00.341 }, 00:15:00.341 { 00:15:00.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.341 "dma_device_type": 2 00:15:00.341 } 00:15:00.341 ], 00:15:00.341 
"driver_specific": {} 00:15:00.341 } 00:15:00.341 ]' 00:15:00.341 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:00.341 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:15:00.341 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:00.599 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:15:00.599 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:15:00.599 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:15:00.599 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:15:00.599 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:01.531 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:15:01.531 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:15:01.531 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:01.531 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:15:01.531 11:58:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:15:04.061 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:04.061 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:04.061 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:04.061 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:04.061 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:04.061 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:15:04.061 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:15:04.061 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:15:04.061 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:15:04.061 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:15:04.061 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:15:04.061 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:04.061 11:58:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:15:04.061 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:15:04.061 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:15:04.061 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:15:04.061 11:58:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:15:04.061 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:15:04.319 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:15:05.254 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:15:05.254 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:15:05.254 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:05.254 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:05.254 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:05.254 ************************************ 00:15:05.254 START TEST filesystem_in_capsule_ext4 00:15:05.254 ************************************ 00:15:05.254 11:58:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:15:05.254 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:15:05.254 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:05.254 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:15:05.254 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:15:05.254 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:15:05.254 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:15:05.254 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:15:05.254 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:15:05.254 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:15:05.254 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:15:05.254 mke2fs 1.47.0 (5-Feb-2023) 00:15:05.513 Discarding device blocks: 
0/522240 done 00:15:05.513 Creating filesystem with 522240 1k blocks and 130560 inodes 00:15:05.513 Filesystem UUID: 13256e9a-3634-402a-a6a1-005eb0eb829a 00:15:05.513 Superblock backups stored on blocks: 00:15:05.513 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:15:05.513 00:15:05.513 Allocating group tables: 0/64 done 00:15:05.513 Writing inode tables: 0/64 done 00:15:05.513 Creating journal (8192 blocks): done 00:15:05.513 Writing superblocks and filesystem accounting information: 0/64 done 00:15:05.513 00:15:05.513 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:15:05.513 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:12.075 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:12.075 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:15:12.075 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:12.075 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:15:12.075 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:15:12.075 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:12.075 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 4186002 00:15:12.075 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:12.075 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:12.075 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:12.075 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:12.075 00:15:12.075 real 0m6.074s 00:15:12.075 user 0m0.035s 00:15:12.075 sys 0m0.062s 00:15:12.075 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:12.075 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:15:12.075 ************************************ 00:15:12.075 END TEST filesystem_in_capsule_ext4 00:15:12.075 ************************************ 00:15:12.075 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:15:12.075 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:12.075 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:12.075 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:12.075 ************************************ 00:15:12.075 START 
TEST filesystem_in_capsule_btrfs 00:15:12.075 ************************************ 00:15:12.075 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:15:12.075 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:15:12.076 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:12.076 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:15:12.076 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:15:12.076 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:15:12.076 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:15:12.076 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:15:12.076 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:15:12.076 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:15:12.076 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:15:12.076 btrfs-progs v6.8.1 00:15:12.076 See https://btrfs.readthedocs.io for more information. 00:15:12.076 00:15:12.076 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:15:12.076 NOTE: several default settings have changed in version 5.15, please make sure 00:15:12.076 this does not affect your deployments: 00:15:12.076 - DUP for metadata (-m dup) 00:15:12.076 - enabled no-holes (-O no-holes) 00:15:12.076 - enabled free-space-tree (-R free-space-tree) 00:15:12.076 00:15:12.076 Label: (null) 00:15:12.076 UUID: 65cfc9a6-7223-4b79-84a1-80319e1b6f3d 00:15:12.076 Node size: 16384 00:15:12.076 Sector size: 4096 (CPU page size: 4096) 00:15:12.076 Filesystem size: 510.00MiB 00:15:12.076 Block group profiles: 00:15:12.076 Data: single 8.00MiB 00:15:12.076 Metadata: DUP 32.00MiB 00:15:12.076 System: DUP 8.00MiB 00:15:12.076 SSD detected: yes 00:15:12.076 Zoned device: no 00:15:12.076 Features: extref, skinny-metadata, no-holes, free-space-tree 00:15:12.076 Checksum: crc32c 00:15:12.076 Number of devices: 1 00:15:12.076 Devices: 00:15:12.076 ID SIZE PATH 00:15:12.076 1 510.00MiB /dev/nvme0n1p1 00:15:12.076 00:15:12.076 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:15:12.076 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:12.076 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:12.076 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:15:12.076 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:12.076 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:15:12.076 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:15:12.076 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:12.076 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 4186002 00:15:12.076 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:12.076 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:12.076 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:12.076 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:12.076 00:15:12.076 real 0m0.584s 00:15:12.076 user 0m0.019s 00:15:12.076 sys 0m0.120s 00:15:12.076 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:12.076 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:15:12.076 ************************************ 00:15:12.076 END TEST filesystem_in_capsule_btrfs 00:15:12.076 ************************************ 00:15:12.076 11:58:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:15:12.076 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:12.076 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:12.076 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:12.076 ************************************ 00:15:12.076 START TEST filesystem_in_capsule_xfs 00:15:12.076 ************************************ 00:15:12.076 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:15:12.076 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:15:12.076 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:12.076 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:15:12.076 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:15:12.076 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:15:12.076 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:15:12.076 
11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:15:12.076 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:15:12.076 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:15:12.076 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:15:12.335 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:15:12.335 = sectsz=512 attr=2, projid32bit=1 00:15:12.335 = crc=1 finobt=1, sparse=1, rmapbt=0 00:15:12.335 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:15:12.335 data = bsize=4096 blocks=130560, imaxpct=25 00:15:12.335 = sunit=0 swidth=0 blks 00:15:12.335 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:15:12.335 log =internal log bsize=4096 blocks=16384, version=2 00:15:12.335 = sectsz=512 sunit=0 blks, lazy-count=1 00:15:12.335 realtime =none extsz=4096 blocks=0, rtextents=0 00:15:13.272 Discarding blocks...Done. 
00:15:13.272 11:58:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:15:13.272 11:58:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:15.177 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:15.177 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:15:15.177 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:15.177 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:15:15.177 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:15:15.177 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:15.177 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 4186002 00:15:15.177 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:15.177 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:15.177 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:15:15.177 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:15.177 00:15:15.177 real 0m2.928s 00:15:15.177 user 0m0.023s 00:15:15.177 sys 0m0.075s 00:15:15.177 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:15.177 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:15:15.177 ************************************ 00:15:15.177 END TEST filesystem_in_capsule_xfs 00:15:15.177 ************************************ 00:15:15.177 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:15:15.177 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:15:15.177 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:15.177 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:15.177 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:15.177 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:15:15.177 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:15.177 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:15.177 11:58:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:15.177 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:15.177 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:15:15.177 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:15.177 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.178 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:15.178 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.178 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:15.178 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 4186002 00:15:15.178 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 4186002 ']' 00:15:15.178 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 4186002 00:15:15.178 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:15:15.178 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:15.178 11:58:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4186002 00:15:15.437 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:15.437 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:15.437 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4186002' 00:15:15.437 killing process with pid 4186002 00:15:15.437 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 4186002 00:15:15.437 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 4186002 00:15:15.696 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:15:15.696 00:15:15.696 real 0m16.351s 00:15:15.696 user 1m4.424s 00:15:15.696 sys 0m1.418s 00:15:15.696 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:15.696 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:15.696 ************************************ 00:15:15.696 END TEST nvmf_filesystem_in_capsule 00:15:15.696 ************************************ 00:15:15.696 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:15:15.696 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # nvmfcleanup 00:15:15.696 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@99 -- # sync 00:15:15.696 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:15:15.696 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # set +e 00:15:15.696 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # for i in {1..20} 00:15:15.696 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:15:15.696 rmmod nvme_tcp 00:15:15.696 rmmod nvme_fabrics 00:15:15.696 rmmod nvme_keyring 00:15:15.696 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:15:15.696 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # set -e 00:15:15.696 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # return 0 00:15:15.696 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # '[' -n '' ']' 00:15:15.696 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:15:15.696 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # nvmf_fini 00:15:15.696 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@264 -- # local dev 00:15:15.696 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@267 -- # remove_target_ns 00:15:15.696 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:15:15.697 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:15:15.697 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:15:18.233 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@268 -- # delete_main_bridge 00:15:18.233 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:15:18.233 11:58:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@130 -- # return 0 00:15:18.233 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:15:18.233 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:15:18.233 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:15:18.233 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:15:18.233 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:15:18.233 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:15:18.233 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:15:18.233 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:15:18.233 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:15:18.233 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:15:18.233 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:15:18.233 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:15:18.233 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:15:18.233 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:15:18.233 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:15:18.233 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:15:18.233 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/setup.sh@283 -- # reset_setup_interfaces 00:15:18.233 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@41 -- # _dev=0 00:15:18.233 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@41 -- # dev_map=() 00:15:18.233 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@284 -- # iptr 00:15:18.233 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@542 -- # iptables-save 00:15:18.233 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:15:18.233 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@542 -- # iptables-restore 00:15:18.233 00:15:18.233 real 0m41.909s 00:15:18.233 user 2m12.224s 00:15:18.233 sys 0m7.539s 00:15:18.233 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:18.233 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:18.233 ************************************ 00:15:18.233 END TEST nvmf_filesystem 00:15:18.233 ************************************ 00:15:18.233 11:58:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:15:18.233 11:58:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:18.233 11:58:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:18.233 11:58:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:18.233 ************************************ 00:15:18.233 START TEST nvmf_target_discovery 00:15:18.233 ************************************ 00:15:18.233 11:58:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 
00:15:18.233 * Looking for test storage... 00:15:18.233 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:18.233 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:18.233 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:15:18.233 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:18.233 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:18.233 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:18.233 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:18.233 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:18.233 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:15:18.233 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:15:18.233 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:15:18.233 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:15:18.233 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:15:18.233 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:15:18.233 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:15:18.233 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:18.233 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
scripts/common.sh@344 -- # case "$op" in 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:18.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.234 --rc genhtml_branch_coverage=1 00:15:18.234 --rc genhtml_function_coverage=1 00:15:18.234 --rc genhtml_legend=1 00:15:18.234 --rc geninfo_all_blocks=1 00:15:18.234 --rc geninfo_unexecuted_blocks=1 00:15:18.234 00:15:18.234 ' 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:18.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.234 --rc genhtml_branch_coverage=1 00:15:18.234 --rc genhtml_function_coverage=1 00:15:18.234 --rc genhtml_legend=1 00:15:18.234 --rc geninfo_all_blocks=1 00:15:18.234 --rc geninfo_unexecuted_blocks=1 00:15:18.234 00:15:18.234 ' 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:18.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.234 --rc genhtml_branch_coverage=1 00:15:18.234 --rc genhtml_function_coverage=1 00:15:18.234 --rc genhtml_legend=1 00:15:18.234 --rc geninfo_all_blocks=1 00:15:18.234 --rc geninfo_unexecuted_blocks=1 00:15:18.234 00:15:18.234 ' 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:18.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.234 --rc genhtml_branch_coverage=1 00:15:18.234 --rc genhtml_function_coverage=1 00:15:18.234 --rc genhtml_legend=1 00:15:18.234 --rc geninfo_all_blocks=1 00:15:18.234 --rc geninfo_unexecuted_blocks=1 00:15:18.234 00:15:18.234 ' 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:18.234 
11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.234 11:58:52 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@50 -- # : 0 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:15:18.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:15:18.234 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@54 -- # have_pci_nics=0 00:15:18.235 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:15:18.235 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:15:18.235 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:15:18.235 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # nvmftestinit 00:15:18.235 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:15:18.235 11:58:52 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:18.235 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # prepare_net_devs 00:15:18.235 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # local -g is_hw=no 00:15:18.235 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # remove_target_ns 00:15:18.235 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:15:18.235 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:15:18.235 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:15:18.235 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:15:18.235 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:15:18.235 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # xtrace_disable 00:15:18.235 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@131 -- # pci_devs=() 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@131 -- # local -a pci_devs 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@132 -- # pci_net_devs=() 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:15:24.812 11:58:57 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@133 -- # pci_drivers=() 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@133 -- # local -A pci_drivers 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@135 -- # net_devs=() 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@135 -- # local -ga net_devs 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@136 -- # e810=() 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@136 -- # local -ga e810 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@137 -- # x722=() 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@137 -- # local -ga x722 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@138 -- # mlx=() 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@138 -- # local -ga mlx 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:24.812 11:58:57 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:24.812 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:15:24.812 11:58:57 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:24.812 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@227 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # [[ up == up ]] 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:24.812 Found net devices under 0000:86:00.0: cvl_0_0 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # [[ up == up ]] 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # echo 'Found 
net devices under 0000:86:00.1: cvl_0_1' 00:15:24.812 Found net devices under 0000:86:00.1: cvl_0_1 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # is_hw=yes 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@257 -- # create_target_ns 00:15:24.812 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:15:24.813 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:15:24.813 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:15:24.813 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:24.813 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:15:24.813 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:15:24.813 11:58:57 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:24.813 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:24.813 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:15:24.813 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:15:24.813 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:15:24.813 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:15:24.813 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@27 -- # local -gA dev_map 00:15:24.813 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@28 -- # local -g _dev 00:15:24.813 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:15:24.813 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:15:24.813 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:15:24.813 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:15:24.813 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@44 -- # ips=() 00:15:24.813 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:15:24.813 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:15:24.813 11:58:57 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:15:24.813 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:15:24.813 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:15:24.813 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:15:24.813 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:15:24.813 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:15:24.813 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:15:24.813 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:15:24.813 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:15:24.813 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:15:24.813 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:15:24.813 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:15:24.813 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:15:24.813 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- 
# [[ -n '' ]] 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@11 -- # local val=167772161 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:15:24.813 10.0.0.1 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@11 -- # local val=167772162 00:15:24.813 11:58:58 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:15:24.813 10.0.0.2 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD 
]] 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@38 -- # ping_ips 1 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # (( pair = 0 
)) 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@107 -- # local dev=initiator0 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:15:24.813 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/setup.sh@175 -- # echo 10.0.0.1 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:15:24.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:24.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.421 ms 00:15:24.814 00:15:24.814 --- 10.0.0.1 ping statistics --- 00:15:24.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.814 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # get_net_dev target0 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@107 -- # local dev=target0 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:15:24.814 PING 10.0.0.2 (10.0.0.2) 56(84) 
bytes of data. 00:15:24.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:15:24.814 00:15:24.814 --- 10.0.0.2 ping statistics --- 00:15:24.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.814 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # (( pair++ )) 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # return 0 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:15:24.814 11:58:58 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@107 -- # local dev=initiator0 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # 
get_net_dev initiator1 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@107 -- # local dev=initiator1 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # return 1 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # dev= 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@169 -- # return 0 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # get_net_dev target0 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@107 -- # local dev=target0 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:15:24.814 11:58:58 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # 
get_net_dev target1 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@107 -- # local dev=target1 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # return 1 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # dev= 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@169 -- # return 0 00:15:24.814 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@16 -- # nvmfappstart -m 0xF 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # nvmfpid=4192356 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # waitforlisten 4192356 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 4192356 ']' 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.815 [2024-12-05 11:58:58.349330] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:15:24.815 [2024-12-05 11:58:58.349380] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:24.815 [2024-12-05 11:58:58.428641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:24.815 [2024-12-05 11:58:58.471228] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:24.815 [2024-12-05 11:58:58.471267] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:24.815 [2024-12-05 11:58:58.471275] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:24.815 [2024-12-05 11:58:58.471281] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:24.815 [2024-12-05 11:58:58.471286] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:24.815 [2024-12-05 11:58:58.472919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:24.815 [2024-12-05 11:58:58.473051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:24.815 [2024-12-05 11:58:58.473159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.815 [2024-12-05 11:58:58.473160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.815 [2024-12-05 11:58:58.623639] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # seq 1 4 00:15:24.815 11:58:58 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # for i in $(seq 1 4) 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@22 -- # rpc_cmd bdev_null_create Null1 102400 512 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.815 Null1 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.815 [2024-12-05 11:58:58.677531] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # for i in $(seq 1 4) 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@22 -- # rpc_cmd bdev_null_create Null2 102400 512 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.815 Null2 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.815 
11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # for i in $(seq 1 4) 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@22 -- # rpc_cmd bdev_null_create Null3 102400 512 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.815 Null3 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode3 Null3 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # for i in $(seq 1 4) 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@22 -- # rpc_cmd bdev_null_create Null4 102400 512 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.815 Null4 00:15:24.815 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.816 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:15:24.816 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.816 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:15:24.816 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.816 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:15:24.816 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.816 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.816 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.816 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:15:24.816 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.816 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.816 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.816 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:24.816 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.816 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.816 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.816 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:15:24.816 11:58:58 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.816 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.816 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.816 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:15:24.816 00:15:24.816 Discovery Log Number of Records 6, Generation counter 6 00:15:24.816 =====Discovery Log Entry 0====== 00:15:24.816 trtype: tcp 00:15:24.816 adrfam: ipv4 00:15:24.816 subtype: current discovery subsystem 00:15:24.816 treq: not required 00:15:24.816 portid: 0 00:15:24.816 trsvcid: 4420 00:15:24.816 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:24.816 traddr: 10.0.0.2 00:15:24.816 eflags: explicit discovery connections, duplicate discovery information 00:15:24.816 sectype: none 00:15:24.816 =====Discovery Log Entry 1====== 00:15:24.816 trtype: tcp 00:15:24.816 adrfam: ipv4 00:15:24.816 subtype: nvme subsystem 00:15:24.816 treq: not required 00:15:24.816 portid: 0 00:15:24.816 trsvcid: 4420 00:15:24.816 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:24.816 traddr: 10.0.0.2 00:15:24.816 eflags: none 00:15:24.816 sectype: none 00:15:24.816 =====Discovery Log Entry 2====== 00:15:24.816 trtype: tcp 00:15:24.816 adrfam: ipv4 00:15:24.816 subtype: nvme subsystem 00:15:24.816 treq: not required 00:15:24.816 portid: 0 00:15:24.816 trsvcid: 4420 00:15:24.816 subnqn: nqn.2016-06.io.spdk:cnode2 00:15:24.816 traddr: 10.0.0.2 00:15:24.816 eflags: none 00:15:24.816 sectype: none 00:15:24.816 =====Discovery Log Entry 3====== 00:15:24.816 trtype: tcp 00:15:24.816 adrfam: ipv4 00:15:24.816 subtype: nvme subsystem 00:15:24.816 treq: not required 00:15:24.816 portid: 
0 00:15:24.816 trsvcid: 4420 00:15:24.816 subnqn: nqn.2016-06.io.spdk:cnode3 00:15:24.816 traddr: 10.0.0.2 00:15:24.816 eflags: none 00:15:24.816 sectype: none 00:15:24.816 =====Discovery Log Entry 4====== 00:15:24.816 trtype: tcp 00:15:24.816 adrfam: ipv4 00:15:24.816 subtype: nvme subsystem 00:15:24.816 treq: not required 00:15:24.816 portid: 0 00:15:24.816 trsvcid: 4420 00:15:24.816 subnqn: nqn.2016-06.io.spdk:cnode4 00:15:24.816 traddr: 10.0.0.2 00:15:24.816 eflags: none 00:15:24.816 sectype: none 00:15:24.816 =====Discovery Log Entry 5====== 00:15:24.816 trtype: tcp 00:15:24.816 adrfam: ipv4 00:15:24.816 subtype: discovery subsystem referral 00:15:24.816 treq: not required 00:15:24.816 portid: 0 00:15:24.816 trsvcid: 4430 00:15:24.816 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:24.816 traddr: 10.0.0.2 00:15:24.816 eflags: none 00:15:24.816 sectype: none 00:15:24.816 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@34 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:15:24.816 Perform nvmf subsystem discovery via RPC 00:15:24.816 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_get_subsystems 00:15:24.816 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.816 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.816 [ 00:15:24.816 { 00:15:24.816 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:24.816 "subtype": "Discovery", 00:15:24.816 "listen_addresses": [ 00:15:24.816 { 00:15:24.816 "trtype": "TCP", 00:15:24.816 "adrfam": "IPv4", 00:15:24.816 "traddr": "10.0.0.2", 00:15:24.816 "trsvcid": "4420" 00:15:24.816 } 00:15:24.816 ], 00:15:24.816 "allow_any_host": true, 00:15:24.816 "hosts": [] 00:15:24.816 }, 00:15:24.816 { 00:15:24.816 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:24.816 "subtype": "NVMe", 00:15:24.816 "listen_addresses": [ 
00:15:24.816 { 00:15:24.816 "trtype": "TCP", 00:15:24.816 "adrfam": "IPv4", 00:15:24.816 "traddr": "10.0.0.2", 00:15:24.816 "trsvcid": "4420" 00:15:24.816 } 00:15:24.816 ], 00:15:24.816 "allow_any_host": true, 00:15:24.816 "hosts": [], 00:15:24.816 "serial_number": "SPDK00000000000001", 00:15:24.816 "model_number": "SPDK bdev Controller", 00:15:24.816 "max_namespaces": 32, 00:15:24.816 "min_cntlid": 1, 00:15:24.816 "max_cntlid": 65519, 00:15:24.816 "namespaces": [ 00:15:24.816 { 00:15:24.816 "nsid": 1, 00:15:24.816 "bdev_name": "Null1", 00:15:24.816 "name": "Null1", 00:15:24.816 "nguid": "C50A00D5A59C4949B02EA8D63219A88B", 00:15:24.816 "uuid": "c50a00d5-a59c-4949-b02e-a8d63219a88b" 00:15:24.816 } 00:15:24.816 ] 00:15:24.816 }, 00:15:24.816 { 00:15:24.816 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:24.816 "subtype": "NVMe", 00:15:24.816 "listen_addresses": [ 00:15:24.816 { 00:15:24.816 "trtype": "TCP", 00:15:24.816 "adrfam": "IPv4", 00:15:24.816 "traddr": "10.0.0.2", 00:15:24.816 "trsvcid": "4420" 00:15:24.816 } 00:15:24.816 ], 00:15:24.816 "allow_any_host": true, 00:15:24.816 "hosts": [], 00:15:24.816 "serial_number": "SPDK00000000000002", 00:15:24.816 "model_number": "SPDK bdev Controller", 00:15:24.816 "max_namespaces": 32, 00:15:24.816 "min_cntlid": 1, 00:15:24.816 "max_cntlid": 65519, 00:15:24.816 "namespaces": [ 00:15:24.816 { 00:15:24.816 "nsid": 1, 00:15:24.816 "bdev_name": "Null2", 00:15:24.816 "name": "Null2", 00:15:24.816 "nguid": "2D229284334743489F596A9FE8BD2323", 00:15:24.816 "uuid": "2d229284-3347-4348-9f59-6a9fe8bd2323" 00:15:24.816 } 00:15:24.816 ] 00:15:24.816 }, 00:15:24.816 { 00:15:24.816 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:15:24.816 "subtype": "NVMe", 00:15:24.816 "listen_addresses": [ 00:15:24.816 { 00:15:24.816 "trtype": "TCP", 00:15:24.816 "adrfam": "IPv4", 00:15:24.816 "traddr": "10.0.0.2", 00:15:24.816 "trsvcid": "4420" 00:15:24.816 } 00:15:24.816 ], 00:15:24.816 "allow_any_host": true, 00:15:24.816 "hosts": [], 00:15:24.816 
"serial_number": "SPDK00000000000003", 00:15:24.816 "model_number": "SPDK bdev Controller", 00:15:24.816 "max_namespaces": 32, 00:15:24.816 "min_cntlid": 1, 00:15:24.816 "max_cntlid": 65519, 00:15:24.816 "namespaces": [ 00:15:24.816 { 00:15:24.816 "nsid": 1, 00:15:24.816 "bdev_name": "Null3", 00:15:24.816 "name": "Null3", 00:15:24.816 "nguid": "93C81228B4424DA29755FB336614A155", 00:15:24.816 "uuid": "93c81228-b442-4da2-9755-fb336614a155" 00:15:24.817 } 00:15:24.817 ] 00:15:24.817 }, 00:15:24.817 { 00:15:24.817 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:15:24.817 "subtype": "NVMe", 00:15:24.817 "listen_addresses": [ 00:15:24.817 { 00:15:24.817 "trtype": "TCP", 00:15:24.817 "adrfam": "IPv4", 00:15:24.817 "traddr": "10.0.0.2", 00:15:24.817 "trsvcid": "4420" 00:15:24.817 } 00:15:24.817 ], 00:15:24.817 "allow_any_host": true, 00:15:24.817 "hosts": [], 00:15:24.817 "serial_number": "SPDK00000000000004", 00:15:24.817 "model_number": "SPDK bdev Controller", 00:15:24.817 "max_namespaces": 32, 00:15:24.817 "min_cntlid": 1, 00:15:24.817 "max_cntlid": 65519, 00:15:24.817 "namespaces": [ 00:15:24.817 { 00:15:24.817 "nsid": 1, 00:15:24.817 "bdev_name": "Null4", 00:15:24.817 "name": "Null4", 00:15:24.817 "nguid": "4101868F23C9475C98328AB4604DCFCB", 00:15:24.817 "uuid": "4101868f-23c9-475c-9832-8ab4604dcfcb" 00:15:24.817 } 00:15:24.817 ] 00:15:24.817 } 00:15:24.817 ] 00:15:24.817 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.817 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # seq 1 4 00:15:24.817 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # for i in $(seq 1 4) 00:15:24.817 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:24.817 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:24.817 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.817 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.817 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # rpc_cmd bdev_null_delete Null1 00:15:24.817 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.817 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.817 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.817 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # for i in $(seq 1 4) 00:15:24.817 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:15:24.817 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.817 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.817 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.817 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # rpc_cmd bdev_null_delete Null2 00:15:24.817 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.817 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.817 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.817 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # for i in $(seq 
1 4) 00:15:24.817 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:15:24.817 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.817 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:25.075 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.075 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # rpc_cmd bdev_null_delete Null3 00:15:25.075 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.075 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:25.075 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.075 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # for i in $(seq 1 4) 00:15:25.075 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:15:25.075 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.075 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:25.075 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.075 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # rpc_cmd bdev_null_delete Null4 00:15:25.075 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.075 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:15:25.075 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.075 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:15:25.075 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.075 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:25.075 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.075 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_get_bdevs 00:15:25.075 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # jq -r '.[].name' 00:15:25.075 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.076 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:25.076 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.076 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # check_bdevs= 00:15:25.076 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@45 -- # '[' -n '' ']' 00:15:25.076 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:25.076 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@52 -- # nvmftestfini 00:15:25.076 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # nvmfcleanup 00:15:25.076 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@99 -- # sync 00:15:25.076 
11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:15:25.076 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # set +e 00:15:25.076 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # for i in {1..20} 00:15:25.076 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:15:25.076 rmmod nvme_tcp 00:15:25.076 rmmod nvme_fabrics 00:15:25.076 rmmod nvme_keyring 00:15:25.076 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:15:25.076 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # set -e 00:15:25.076 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # return 0 00:15:25.076 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # '[' -n 4192356 ']' 00:15:25.076 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@337 -- # killprocess 4192356 00:15:25.076 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 4192356 ']' 00:15:25.076 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 4192356 00:15:25.076 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:15:25.076 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:25.076 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4192356 00:15:25.076 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:25.076 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:15:25.076 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4192356' 00:15:25.076 killing process with pid 4192356 00:15:25.076 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 4192356 00:15:25.076 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 4192356 00:15:25.335 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:15:25.335 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # nvmf_fini 00:15:25.335 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@264 -- # local dev 00:15:25.335 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@267 -- # remove_target_ns 00:15:25.335 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:15:25.335 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:15:25.335 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:15:27.281 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@268 -- # delete_main_bridge 00:15:27.281 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:15:27.281 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@130 -- # return 0 00:15:27.281 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:15:27.281 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:15:27.281 11:59:01 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:15:27.281 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:15:27.281 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:15:27.281 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:15:27.281 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:15:27.281 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:15:27.281 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:15:27.281 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:15:27.281 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:15:27.281 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:15:27.281 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:15:27.281 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:15:27.281 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:15:27.281 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:15:27.281 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:15:27.281 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@41 -- # _dev=0 00:15:27.281 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@41 -- # dev_map=() 00:15:27.281 
11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@284 -- # iptr 00:15:27.281 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@542 -- # iptables-save 00:15:27.281 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:15:27.281 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@542 -- # iptables-restore 00:15:27.281 00:15:27.281 real 0m9.487s 00:15:27.281 user 0m5.572s 00:15:27.281 sys 0m4.922s 00:15:27.281 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:27.281 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:27.281 ************************************ 00:15:27.281 END TEST nvmf_target_discovery 00:15:27.281 ************************************ 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:27.541 ************************************ 00:15:27.541 START TEST nvmf_referrals 00:15:27.541 ************************************ 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:15:27.541 * Looking for test storage... 
00:15:27.541 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:15:27.541 11:59:01 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:27.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.541 
--rc genhtml_branch_coverage=1 00:15:27.541 --rc genhtml_function_coverage=1 00:15:27.541 --rc genhtml_legend=1 00:15:27.541 --rc geninfo_all_blocks=1 00:15:27.541 --rc geninfo_unexecuted_blocks=1 00:15:27.541 00:15:27.541 ' 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:27.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.541 --rc genhtml_branch_coverage=1 00:15:27.541 --rc genhtml_function_coverage=1 00:15:27.541 --rc genhtml_legend=1 00:15:27.541 --rc geninfo_all_blocks=1 00:15:27.541 --rc geninfo_unexecuted_blocks=1 00:15:27.541 00:15:27.541 ' 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:27.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.541 --rc genhtml_branch_coverage=1 00:15:27.541 --rc genhtml_function_coverage=1 00:15:27.541 --rc genhtml_legend=1 00:15:27.541 --rc geninfo_all_blocks=1 00:15:27.541 --rc geninfo_unexecuted_blocks=1 00:15:27.541 00:15:27.541 ' 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:27.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.541 --rc genhtml_branch_coverage=1 00:15:27.541 --rc genhtml_function_coverage=1 00:15:27.541 --rc genhtml_legend=1 00:15:27.541 --rc geninfo_all_blocks=1 00:15:27.541 --rc geninfo_unexecuted_blocks=1 00:15:27.541 00:15:27.541 ' 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:27.541 
11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.541 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:15:27.542 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.542 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:15:27.542 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:15:27.542 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:15:27.542 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:15:27.542 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@50 
-- # : 0 00:15:27.542 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:15:27.542 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:15:27.542 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:15:27.542 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:27.542 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:27.542 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:15:27.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:15:27.542 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:15:27.542 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:15:27.542 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@54 -- # have_pci_nics=0 00:15:27.542 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:15:27.542 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:15:27.542 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:15:27.542 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:15:27.542 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:27.542 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:15:27.542 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # 
nvmftestinit 00:15:27.542 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:15:27.542 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:27.542 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # prepare_net_devs 00:15:27.542 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # local -g is_hw=no 00:15:27.542 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # remove_target_ns 00:15:27.542 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:15:27.542 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:15:27.542 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_target_ns 00:15:27.542 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:15:27.542 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:15:27.542 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # xtrace_disable 00:15:27.542 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@131 -- # pci_devs=() 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@131 -- # local -a pci_devs 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@132 -- # pci_net_devs=() 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:15:34.111 11:59:07 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@133 -- # pci_drivers=() 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@133 -- # local -A pci_drivers 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@135 -- # net_devs=() 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@135 -- # local -ga net_devs 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@136 -- # e810=() 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@136 -- # local -ga e810 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@137 -- # x722=() 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@137 -- # local -ga x722 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@138 -- # mlx=() 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@138 -- # local -ga mlx 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:34.111 11:59:07 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:34.111 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:34.111 11:59:07 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:34.111 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # [[ up 
== up ]] 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:34.111 Found net devices under 0000:86:00.0: cvl_0_0 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:15:34.111 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # [[ up == up ]] 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:34.112 Found net devices under 0000:86:00.1: cvl_0_1 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:15:34.112 11:59:07 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # is_hw=yes 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@257 -- # create_target_ns 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@258 -- # 
setup_interfaces 1 phy 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@27 -- # local -gA dev_map 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@28 -- # local -g _dev 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@44 -- # ips=() 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@11 -- # local val=167772161 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee 
/sys/class/net/cvl_0_0/ifalias' 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:15:34.112 10.0.0.1 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@11 -- # local val=167772162 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:15:34.112 
10.0.0.2 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@38 -- # ping_ips 1 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:15:34.112 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@107 -- # local dev=initiator0 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:15:34.113 11:59:07 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:15:34.113 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:34.113 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.388 ms 00:15:34.113 00:15:34.113 --- 10.0.0.1 ping statistics --- 00:15:34.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.113 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # get_net_dev target0 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@107 -- # local dev=target0 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 
00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:15:34.113 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:34.113 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:15:34.113 00:15:34.113 --- 10.0.0.2 ping statistics --- 00:15:34.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.113 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # (( pair++ )) 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # return 0 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:15:34.113 
11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@107 -- # local dev=initiator0 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@334 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@107 -- # local dev=initiator1 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # return 1 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # dev= 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@169 -- # return 0 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:15:34.113 11:59:07 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # get_net_dev target0 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@107 -- # local dev=target0 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:15:34.113 
11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # get_net_dev target1 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@107 -- # local dev=target1 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:15:34.113 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:15:34.114 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # return 1 00:15:34.114 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # dev= 00:15:34.114 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@169 -- # return 0 00:15:34.114 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:15:34.114 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:34.114 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:15:34.114 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:15:34.114 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:34.114 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:15:34.114 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:15:34.114 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 
00:15:34.114 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:15:34.114 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:34.114 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:34.114 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # nvmfpid=2522 00:15:34.114 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # waitforlisten 2522 00:15:34.114 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:34.114 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2522 ']' 00:15:34.114 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.114 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:34.114 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:34.114 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:34.114 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:34.114 [2024-12-05 11:59:07.885831] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:15:34.114 [2024-12-05 11:59:07.885881] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:34.114 [2024-12-05 11:59:07.962384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:34.114 [2024-12-05 11:59:08.002736] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:34.114 [2024-12-05 11:59:08.002774] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:34.114 [2024-12-05 11:59:08.002781] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:34.114 [2024-12-05 11:59:08.002786] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:34.114 [2024-12-05 11:59:08.002795] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:34.114 [2024-12-05 11:59:08.004424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.114 [2024-12-05 11:59:08.004507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:34.114 [2024-12-05 11:59:08.004595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.114 [2024-12-05 11:59:08.004596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:34.114 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:34.114 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:15:34.114 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:15:34.114 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:34.114 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:34.114 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:34.114 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:34.114 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.114 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:34.114 [2024-12-05 11:59:08.154632] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:34.114 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.114 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:15:34.114 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.114 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:34.114 [2024-12-05 11:59:08.181544] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:15:34.114 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.114 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:15:34.114 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.114 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:34.114 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.114 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:15:34.114 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.114 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:34.114 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.114 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:15:34.114 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.114 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:34.114 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.114 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- 
# rpc_cmd nvmf_discovery_get_referrals 00:15:34.114 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:15:34.114 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.114 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:34.114 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.114 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:15:34.114 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:15:34.114 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:15:34.114 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:34.114 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.114 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:34.114 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:15:34.114 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:15:34.114 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.114 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:15:34.114 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:15:34.114 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:15:34.114 11:59:08 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:34.114 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:34.373 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:34.373 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:34.373 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:34.373 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:15:34.373 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:15:34.373 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:15:34.373 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.373 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:34.374 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.374 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:15:34.374 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.374 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:34.374 11:59:08 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.374 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:15:34.374 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.374 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:34.374 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.374 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:34.374 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:15:34.374 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.374 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:34.374 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.632 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:15:34.632 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:15:34.632 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:34.632 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:34.632 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:34.633 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 
00:15:34.633 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:34.633 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:15:34.633 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:15:34.633 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:15:34.633 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.633 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:34.633 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.633 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:15:34.633 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.633 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:34.892 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.892 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:15:34.892 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:15:34.892 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:34.892 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:15:34.892 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.892 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:15:34.892 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:34.892 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.892 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:15:34.892 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:15:34.892 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:15:34.892 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:34.892 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:34.892 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:34.892 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:34.892 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:34.892 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:15:34.892 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:15:34.893 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:15:34.893 11:59:09 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:15:34.893 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:15:34.893 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:15:34.893 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:35.152 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:15:35.152 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:15:35.152 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:15:35.152 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:15:35.152 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:35.152 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:15:35.152 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:15:35.152 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t 
tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:15:35.152 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.152 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:35.411 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.411 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:15:35.411 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:15:35.411 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:35.411 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:15:35.411 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.411 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:15:35.411 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:35.411 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.411 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:15:35.411 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:15:35.411 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:15:35.411 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:35.411 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:35.411 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:35.411 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:35.411 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:35.411 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:15:35.411 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:15:35.411 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:15:35.411 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:15:35.411 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:15:35.411 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:35.411 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:15:35.670 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:15:35.670 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:15:35.670 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:15:35.670 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery 
subsystem referral' 00:15:35.670 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:35.670 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:15:35.929 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:15:35.929 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:15:35.929 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.929 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:35.929 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.929 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:35.929 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:15:35.929 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.929 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:35.929 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.929 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:15:35.929 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # 
get_referral_ips nvme 00:15:35.929 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:35.929 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:35.929 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:35.929 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:35.929 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:36.188 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:15:36.188 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:15:36.188 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:15:36.188 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:15:36.188 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # nvmfcleanup 00:15:36.188 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@99 -- # sync 00:15:36.188 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:15:36.188 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # set +e 00:15:36.188 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # for i in {1..20} 00:15:36.188 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:15:36.188 rmmod nvme_tcp 00:15:36.188 rmmod nvme_fabrics 00:15:36.188 rmmod nvme_keyring 00:15:36.188 11:59:10 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:15:36.188 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # set -e 00:15:36.188 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # return 0 00:15:36.188 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # '[' -n 2522 ']' 00:15:36.188 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@337 -- # killprocess 2522 00:15:36.188 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2522 ']' 00:15:36.188 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2522 00:15:36.188 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:15:36.188 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:36.188 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2522 00:15:36.188 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:36.188 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:36.189 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2522' 00:15:36.189 killing process with pid 2522 00:15:36.189 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 2522 00:15:36.189 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2522 00:15:36.448 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:15:36.448 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # nvmf_fini 00:15:36.448 11:59:10 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@264 -- # local dev 00:15:36.448 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@267 -- # remove_target_ns 00:15:36.448 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:15:36.448 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:15:36.448 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_target_ns 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@268 -- # delete_main_bridge 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@130 -- # return 0 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:15:38.984 11:59:12 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@41 -- # _dev=0 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@41 -- # dev_map=() 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@284 -- # iptr 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@542 -- # iptables-save 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@542 -- # iptables-restore 00:15:38.984 00:15:38.984 real 0m11.067s 00:15:38.984 user 0m12.626s 00:15:38.984 sys 0m5.258s 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:38.984 ************************************ 00:15:38.984 END TEST nvmf_referrals 00:15:38.984 ************************************ 00:15:38.984 11:59:12 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:38.984 ************************************ 00:15:38.984 START TEST nvmf_connect_disconnect 00:15:38.984 ************************************ 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:15:38.984 * Looking for test storage... 00:15:38.984 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 
00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:38.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.984 --rc genhtml_branch_coverage=1 00:15:38.984 --rc 
genhtml_function_coverage=1 00:15:38.984 --rc genhtml_legend=1 00:15:38.984 --rc geninfo_all_blocks=1 00:15:38.984 --rc geninfo_unexecuted_blocks=1 00:15:38.984 00:15:38.984 ' 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:38.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.984 --rc genhtml_branch_coverage=1 00:15:38.984 --rc genhtml_function_coverage=1 00:15:38.984 --rc genhtml_legend=1 00:15:38.984 --rc geninfo_all_blocks=1 00:15:38.984 --rc geninfo_unexecuted_blocks=1 00:15:38.984 00:15:38.984 ' 00:15:38.984 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:38.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.984 --rc genhtml_branch_coverage=1 00:15:38.984 --rc genhtml_function_coverage=1 00:15:38.984 --rc genhtml_legend=1 00:15:38.984 --rc geninfo_all_blocks=1 00:15:38.984 --rc geninfo_unexecuted_blocks=1 00:15:38.984 00:15:38.984 ' 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:38.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.985 --rc genhtml_branch_coverage=1 00:15:38.985 --rc genhtml_function_coverage=1 00:15:38.985 --rc genhtml_legend=1 00:15:38.985 --rc geninfo_all_blocks=1 00:15:38.985 --rc geninfo_unexecuted_blocks=1 00:15:38.985 00:15:38.985 ' 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:15:38.985 11:59:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@50 -- # : 0 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:15:38.985 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # have_pci_nics=0 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:38.985 11:59:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # prepare_net_devs 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # local -g is_hw=no 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # remove_target_ns 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_target_ns 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # xtrace_disable 00:15:38.985 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:44.398 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:44.398 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@131 -- # pci_devs=() 00:15:44.398 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@131 -- # local -a pci_devs 00:15:44.398 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@132 -- # pci_net_devs=() 00:15:44.398 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:15:44.398 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@133 -- # pci_drivers=() 00:15:44.398 11:59:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@133 -- # local -A pci_drivers 00:15:44.398 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@135 -- # net_devs=() 00:15:44.398 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@135 -- # local -ga net_devs 00:15:44.398 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@136 -- # e810=() 00:15:44.398 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@136 -- # local -ga e810 00:15:44.398 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@137 -- # x722=() 00:15:44.398 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@137 -- # local -ga x722 00:15:44.398 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@138 -- # mlx=() 00:15:44.398 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@138 -- # local -ga mlx 00:15:44.398 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:44.398 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:44.398 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:44.398 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:44.398 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:44.398 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:44.398 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:15:44.398 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:44.398 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:44.398 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:44.398 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:44.398 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:44.398 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:15:44.398 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:15:44.398 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:15:44.398 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:15:44.398 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:44.399 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # [[ ice == unbound 
]] 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:44.399 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:44.399 11:59:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # [[ up == up ]] 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:44.399 Found net devices under 0000:86:00.0: cvl_0_0 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # [[ up == up ]] 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 
00:15:44.399 Found net devices under 0000:86:00.1: cvl_0_1 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # is_hw=yes 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@257 -- # create_target_ns 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:15:44.399 11:59:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@27 -- # local -gA dev_map 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@28 -- # local -g _dev 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@44 -- # ips=() 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:15:44.399 11:59:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:15:44.399 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:15:44.658 11:59:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@11 -- # local val=167772161 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:15:44.658 10.0.0.1 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:15:44.658 11:59:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@11 -- # local val=167772162 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:15:44.658 10.0.0.2 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 
in_ns=NVMF_TARGET_NS_CMD 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@38 -- # ping_ips 1 00:15:44.658 11:59:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@107 -- # local dev=initiator0 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/setup.sh@172 -- # ip=10.0.0.1 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:15:44.658 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:15:44.659 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:44.659 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:44.659 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:15:44.659 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:15:44.659 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:44.659 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.484 ms 00:15:44.659 00:15:44.659 --- 10.0.0.1 ping statistics --- 00:15:44.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.659 rtt min/avg/max/mdev = 0.484/0.484/0.484/0.000 ms 00:15:44.659 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:15:44.659 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:15:44.659 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:15:44.659 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:15:44.659 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:44.659 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:44.659 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # get_net_dev target0 00:15:44.659 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@107 -- # local dev=target0 00:15:44.659 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:15:44.659 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:15:44.659 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:15:44.659 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:15:44.659 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:15:44.659 11:59:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:15:44.659 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:15:44.659 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:15:44.659 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:15:44.659 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:15:44.659 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:15:44.659 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:15:44.659 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:15:44.659 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:15:44.659 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:44.659 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:15:44.659 00:15:44.659 --- 10.0.0.2 ping statistics --- 00:15:44.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.659 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:15:44.659 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # (( pair++ )) 00:15:44.659 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:15:44.659 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:44.659 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # return 0 00:15:44.659 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:15:44.659 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:15:44.659 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:15:44.659 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:15:44.659 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:15:44.659 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:15:44.659 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:15:44.659 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:15:44.659 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:15:44.659 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:15:44.659 
11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@107 -- # local dev=initiator0 00:15:44.659 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:15:44.659 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:15:44.659 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:15:44.659 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:15:44.918 11:59:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@107 -- # local dev=initiator1 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # return 1 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # dev= 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@169 -- # return 0 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # get_net_dev target0 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@107 -- # local dev=target0 00:15:44.918 11:59:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # get_net_dev target1 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@107 -- # local dev=target1 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # return 1 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # dev= 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@169 -- # return 0 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:15:44.918 11:59:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # nvmfpid=6808 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # waitforlisten 6808 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 6808 ']' 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:44.918 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:44.918 [2024-12-05 11:59:18.999858] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:15:44.918 [2024-12-05 11:59:18.999902] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:44.918 [2024-12-05 11:59:19.079733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:45.176 [2024-12-05 11:59:19.121978] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:45.176 [2024-12-05 11:59:19.122010] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:45.176 [2024-12-05 11:59:19.122018] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:45.176 [2024-12-05 11:59:19.122024] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:45.176 [2024-12-05 11:59:19.122029] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:45.176 [2024-12-05 11:59:19.123538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:45.176 [2024-12-05 11:59:19.123648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:45.176 [2024-12-05 11:59:19.123733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:45.176 [2024-12-05 11:59:19.123732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.742 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:45.742 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:15:45.742 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:15:45.742 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:45.742 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:45.742 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:45.742 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:15:45.742 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.742 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:45.742 [2024-12-05 11:59:19.890909] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:45.742 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.742 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 
64 512 00:15:45.742 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.742 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:45.742 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.742 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:15:45.742 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:45.742 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.742 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:46.000 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.000 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:46.000 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.000 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:46.000 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.000 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:46.000 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.000 11:59:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:46.000 [2024-12-05 11:59:19.958519] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:46.000 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.001 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:15:46.001 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:15:46.001 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:15:49.287 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:52.573 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:55.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:59.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.430 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:02.430 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:02.430 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # nvmfcleanup 00:16:02.430 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@99 -- # sync 00:16:02.430 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:16:02.431 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # set +e 00:16:02.431 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # for i in {1..20} 00:16:02.431 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:16:02.431 rmmod nvme_tcp 00:16:02.431 rmmod nvme_fabrics 00:16:02.431 rmmod nvme_keyring 00:16:02.431 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:16:02.431 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # set -e 00:16:02.431 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # return 0 00:16:02.431 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # '[' -n 6808 ']' 00:16:02.431 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@337 -- # killprocess 6808 00:16:02.431 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 6808 ']' 00:16:02.431 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 6808 00:16:02.431 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:16:02.431 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:02.431 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 6808 00:16:02.431 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:02.431 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:02.431 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 6808' 00:16:02.431 killing process with pid 6808 00:16:02.431 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 6808 00:16:02.431 11:59:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 6808 00:16:02.431 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:16:02.431 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # nvmf_fini 00:16:02.431 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@264 -- # local dev 00:16:02.431 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@267 -- # remove_target_ns 00:16:02.431 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:16:02.431 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:16:02.431 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_target_ns 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@268 -- # delete_main_bridge 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@130 -- # return 0 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:16:04.963 11:59:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@41 -- # _dev=0 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@41 -- # dev_map=() 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@284 -- # iptr 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@542 -- # iptables-save 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@542 -- # grep -v 
SPDK_NVMF 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@542 -- # iptables-restore 00:16:04.963 00:16:04.963 real 0m25.963s 00:16:04.963 user 1m11.238s 00:16:04.963 sys 0m5.851s 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:04.963 ************************************ 00:16:04.963 END TEST nvmf_connect_disconnect 00:16:04.963 ************************************ 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:04.963 ************************************ 00:16:04.963 START TEST nvmf_multitarget 00:16:04.963 ************************************ 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:04.963 * Looking for test storage... 
00:16:04.963 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:04.963 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:04.964 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.964 --rc genhtml_branch_coverage=1 00:16:04.964 --rc genhtml_function_coverage=1 00:16:04.964 --rc genhtml_legend=1 00:16:04.964 --rc geninfo_all_blocks=1 00:16:04.964 --rc geninfo_unexecuted_blocks=1 00:16:04.964 00:16:04.964 ' 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:04.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.964 --rc genhtml_branch_coverage=1 00:16:04.964 --rc genhtml_function_coverage=1 00:16:04.964 --rc genhtml_legend=1 00:16:04.964 --rc geninfo_all_blocks=1 00:16:04.964 --rc geninfo_unexecuted_blocks=1 00:16:04.964 00:16:04.964 ' 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:04.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.964 --rc genhtml_branch_coverage=1 00:16:04.964 --rc genhtml_function_coverage=1 00:16:04.964 --rc genhtml_legend=1 00:16:04.964 --rc geninfo_all_blocks=1 00:16:04.964 --rc geninfo_unexecuted_blocks=1 00:16:04.964 00:16:04.964 ' 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:04.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.964 --rc genhtml_branch_coverage=1 00:16:04.964 --rc genhtml_function_coverage=1 00:16:04.964 --rc genhtml_legend=1 00:16:04.964 --rc geninfo_all_blocks=1 00:16:04.964 --rc geninfo_unexecuted_blocks=1 00:16:04.964 00:16:04.964 ' 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:04.964 11:59:38 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@50 -- # : 0 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:16:04.964 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@54 -- # have_pci_nics=0 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # prepare_net_devs 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # local -g is_hw=no 00:16:04.964 11:59:38 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # remove_target_ns 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_target_ns 00:16:04.964 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:16:04.965 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:16:04.965 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # xtrace_disable 00:16:04.965 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:11.534 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:11.534 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@131 -- # pci_devs=() 00:16:11.534 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@131 -- # local -a pci_devs 00:16:11.534 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@132 -- # pci_net_devs=() 00:16:11.534 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:16:11.534 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@133 -- # pci_drivers=() 00:16:11.534 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@133 -- # local -A pci_drivers 00:16:11.534 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@135 -- # net_devs=() 00:16:11.534 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@135 -- # local -ga net_devs 00:16:11.534 11:59:44 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@136 -- # e810=() 00:16:11.534 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@136 -- # local -ga e810 00:16:11.534 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@137 -- # x722=() 00:16:11.534 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@137 -- # local -ga x722 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@138 -- # mlx=() 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@138 -- # local -ga mlx 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:11.535 11:59:44 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:11.535 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:16:11.535 11:59:44 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:11.535 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # [[ up == up ]] 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:11.535 Found net devices under 0000:86:00.0: cvl_0_0 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # [[ up == up ]] 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:11.535 Found net devices under 0000:86:00.1: cvl_0_1 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # is_hw=yes 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:16:11.535 
11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@257 -- # create_target_ns 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@27 -- # local -gA dev_map 
00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@28 -- # local -g _dev 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@44 -- # ips=() 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@67 -- # [[ phy == 
veth ]] 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@11 -- # local val=167772161 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@210 
-- # tee /sys/class/net/cvl_0_0/ifalias 00:16:11.535 10.0.0.1 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:11.535 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@11 -- # local val=167772162 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:16:11.536 10.0.0.2 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@85 -- # 
dev_map["$key_initiator"]=cvl_0_0 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@38 -- # ping_ips 1 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@107 -- # local dev=initiator0 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:16:11.536 11:59:44 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:16:11.536 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:11.536 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.356 ms 00:16:11.536 00:16:11.536 --- 10.0.0.1 ping statistics --- 00:16:11.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.536 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # get_net_dev target0 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@107 -- # local dev=target0 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:16:11.536 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:11.536 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:16:11.536 00:16:11.536 --- 10.0.0.2 ping statistics --- 00:16:11.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.536 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # (( pair++ )) 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # return 0 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@107 -- # local dev=initiator0 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:16:11.536 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@175 -- # echo 10.0.0.1 
00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@107 -- # local dev=initiator1 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # return 1 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # dev= 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@169 -- # return 0 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:16:11.537 11:59:44 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # get_net_dev target0 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@107 -- # local dev=target0 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@196 -- # get_target_ip_address 
1 NVMF_TARGET_NS_CMD 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # get_net_dev target1 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@107 -- # local dev=target1 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # return 1 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # dev= 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@169 -- # return 0 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:11.537 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:16:11.537 11:59:44 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:16:11.537 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:11.537 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:16:11.537 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:11.537 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:11.537 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # nvmfpid=13292 00:16:11.537 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # waitforlisten 13292 00:16:11.537 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:11.537 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 13292 ']' 00:16:11.537 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:11.537 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:11.537 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:11.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:11.537 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:11.537 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:11.537 [2024-12-05 11:59:45.081489] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:16:11.537 [2024-12-05 11:59:45.081541] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:11.537 [2024-12-05 11:59:45.166249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:11.537 [2024-12-05 11:59:45.208455] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:11.537 [2024-12-05 11:59:45.208491] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:11.537 [2024-12-05 11:59:45.208499] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:11.537 [2024-12-05 11:59:45.208505] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:11.537 [2024-12-05 11:59:45.208510] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:11.537 [2024-12-05 11:59:45.210132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:11.537 [2024-12-05 11:59:45.210152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:11.537 [2024-12-05 11:59:45.210251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.537 [2024-12-05 11:59:45.210252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:11.537 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:11.537 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:16:11.537 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:16:11.537 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:11.537 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:11.537 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:11.537 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:11.537 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:11.537 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:11.537 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:11.537 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n 
nvmf_tgt_1 -s 32 00:16:11.537 "nvmf_tgt_1" 00:16:11.537 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:11.537 "nvmf_tgt_2" 00:16:11.537 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:11.537 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:11.796 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:11.796 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:11.796 true 00:16:11.796 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:11.796 true 00:16:12.055 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:12.055 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:12.055 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:12.055 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:12.055 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:12.055 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # nvmfcleanup 00:16:12.055 11:59:46 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@99 -- # sync 00:16:12.055 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:16:12.055 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # set +e 00:16:12.055 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # for i in {1..20} 00:16:12.055 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:16:12.055 rmmod nvme_tcp 00:16:12.055 rmmod nvme_fabrics 00:16:12.055 rmmod nvme_keyring 00:16:12.055 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:16:12.055 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # set -e 00:16:12.055 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # return 0 00:16:12.055 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # '[' -n 13292 ']' 00:16:12.055 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@337 -- # killprocess 13292 00:16:12.055 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 13292 ']' 00:16:12.055 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 13292 00:16:12.055 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:16:12.055 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:12.055 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 13292 00:16:12.055 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:12.055 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:16:12.055 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 13292' 00:16:12.055 killing process with pid 13292 00:16:12.055 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 13292 00:16:12.055 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 13292 00:16:12.315 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:16:12.315 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # nvmf_fini 00:16:12.315 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@264 -- # local dev 00:16:12.315 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@267 -- # remove_target_ns 00:16:12.315 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:16:12.315 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:16:12.315 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_target_ns 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@268 -- # delete_main_bridge 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@130 -- # return 0 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:16:14.852 11:59:48 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@41 -- # _dev=0 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@41 -- # dev_map=() 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@284 -- # iptr 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@542 -- # 
iptables-save 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@542 -- # iptables-restore 00:16:14.852 00:16:14.852 real 0m9.749s 00:16:14.852 user 0m7.176s 00:16:14.852 sys 0m5.019s 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:14.852 ************************************ 00:16:14.852 END TEST nvmf_multitarget 00:16:14.852 ************************************ 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:14.852 ************************************ 00:16:14.852 START TEST nvmf_rpc 00:16:14.852 ************************************ 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:14.852 * Looking for test storage... 
00:16:14.852 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:14.852 11:59:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:14.852 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:14.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.853 --rc genhtml_branch_coverage=1 00:16:14.853 --rc genhtml_function_coverage=1 00:16:14.853 --rc genhtml_legend=1 00:16:14.853 --rc geninfo_all_blocks=1 00:16:14.853 --rc geninfo_unexecuted_blocks=1 
00:16:14.853 00:16:14.853 ' 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:14.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.853 --rc genhtml_branch_coverage=1 00:16:14.853 --rc genhtml_function_coverage=1 00:16:14.853 --rc genhtml_legend=1 00:16:14.853 --rc geninfo_all_blocks=1 00:16:14.853 --rc geninfo_unexecuted_blocks=1 00:16:14.853 00:16:14.853 ' 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:14.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.853 --rc genhtml_branch_coverage=1 00:16:14.853 --rc genhtml_function_coverage=1 00:16:14.853 --rc genhtml_legend=1 00:16:14.853 --rc geninfo_all_blocks=1 00:16:14.853 --rc geninfo_unexecuted_blocks=1 00:16:14.853 00:16:14.853 ' 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:14.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.853 --rc genhtml_branch_coverage=1 00:16:14.853 --rc genhtml_function_coverage=1 00:16:14.853 --rc genhtml_legend=1 00:16:14.853 --rc geninfo_all_blocks=1 00:16:14.853 --rc geninfo_unexecuted_blocks=1 00:16:14.853 00:16:14.853 ' 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:14.853 11:59:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
paths/export.sh@5 -- # export PATH 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@50 -- # : 0 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:16:14.853 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@54 -- # have_pci_nics=0 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # prepare_net_devs 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # local -g is_hw=no 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # remove_target_ns 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # xtrace_disable 00:16:14.853 11:59:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@131 -- # pci_devs=() 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@131 -- # local -a pci_devs 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@132 -- # pci_net_devs=() 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@133 -- # pci_drivers=() 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@133 -- # local -A pci_drivers 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@135 -- # net_devs=() 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@135 -- # local -ga net_devs 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@136 -- # e810=() 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@136 -- # local -ga e810 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@137 -- # x722=() 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@137 -- # local -ga x722 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@138 -- # mlx=() 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@138 -- # local -ga mlx 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:21.421 11:59:54 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:21.421 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:21.421 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:16:21.421 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:16:21.422 11:59:54 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # [[ up == up ]] 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:21.422 Found net devices under 0000:86:00.0: cvl_0_0 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # [[ up == up ]] 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:21.422 Found net devices under 0000:86:00.1: cvl_0_1 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:16:21.422 11:59:54 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # is_hw=yes 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@257 -- # create_target_ns 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@258 -- # 
setup_interfaces 1 phy 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@27 -- # local -gA dev_map 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@28 -- # local -g _dev 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@44 -- # ips=() 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/setup.sh@64 -- # target=cvl_0_1 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@11 -- # local val=167772161 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@210 -- # 
tee /sys/class/net/cvl_0_0/ifalias 00:16:21.422 10.0.0.1 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@11 -- # local val=167772162 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:16:21.422 10.0.0.2 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@215 -- # [[ 
-n '' ]] 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 
00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@38 -- # ping_ips 1 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:16:21.422 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@107 -- # local dev=initiator0 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # ip=10.0.0.1 
00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:16:21.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:21.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.383 ms 00:16:21.423 00:16:21.423 --- 10.0.0.1 ping statistics --- 00:16:21.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.423 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # get_net_dev target0 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@107 -- # local dev=target0 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # 
ip=10.0.0.2 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:16:21.423 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:21.423 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:16:21.423 00:16:21.423 --- 10.0.0.2 ping statistics --- 00:16:21.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.423 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # (( pair++ )) 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # return 0 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@334 -- 
# get_tcp_initiator_ip_address 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@107 -- # local dev=initiator0 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:16:21.423 11:59:54 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@107 -- # local dev=initiator1 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # return 1 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # dev= 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@169 -- # return 0 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # get_net_dev target0 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@107 
-- # local dev=target0 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # get_net_dev target1 00:16:21.423 11:59:54 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@107 -- # local dev=target1 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:16:21.423 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # return 1 00:16:21.424 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # dev= 00:16:21.424 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@169 -- # return 0 00:16:21.424 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:16:21.424 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:21.424 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:16:21.424 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:16:21.424 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:21.424 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:16:21.424 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:16:21.424 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:21.424 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:16:21.424 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:21.424 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.424 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # nvmfpid=17087 00:16:21.424 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # waitforlisten 17087 00:16:21.424 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:21.424 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 17087 ']' 00:16:21.424 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.424 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:21.424 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:21.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:21.424 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:21.424 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.424 [2024-12-05 11:59:54.916327] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:16:21.424 [2024-12-05 11:59:54.916457] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:21.424 [2024-12-05 11:59:54.996168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:21.424 [2024-12-05 11:59:55.041952] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:21.424 [2024-12-05 11:59:55.041988] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:21.424 [2024-12-05 11:59:55.041996] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:21.424 [2024-12-05 11:59:55.042002] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:21.424 [2024-12-05 11:59:55.042006] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:21.424 [2024-12-05 11:59:55.043555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:21.424 [2024-12-05 11:59:55.043661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:21.424 [2024-12-05 11:59:55.043778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.424 [2024-12-05 11:59:55.043779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:21.681 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:21.681 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:21.682 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:16:21.682 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:21.682 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.682 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:21.682 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:21.682 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.682 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.682 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.682 11:59:55 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:21.682 "tick_rate": 2100000000, 00:16:21.682 "poll_groups": [ 00:16:21.682 { 00:16:21.682 "name": "nvmf_tgt_poll_group_000", 00:16:21.682 "admin_qpairs": 0, 00:16:21.682 "io_qpairs": 0, 00:16:21.682 "current_admin_qpairs": 0, 00:16:21.682 "current_io_qpairs": 0, 00:16:21.682 "pending_bdev_io": 0, 00:16:21.682 "completed_nvme_io": 0, 00:16:21.682 "transports": [] 00:16:21.682 }, 00:16:21.682 { 00:16:21.682 "name": "nvmf_tgt_poll_group_001", 00:16:21.682 "admin_qpairs": 0, 00:16:21.682 "io_qpairs": 0, 00:16:21.682 "current_admin_qpairs": 0, 00:16:21.682 "current_io_qpairs": 0, 00:16:21.682 "pending_bdev_io": 0, 00:16:21.682 "completed_nvme_io": 0, 00:16:21.682 "transports": [] 00:16:21.682 }, 00:16:21.682 { 00:16:21.682 "name": "nvmf_tgt_poll_group_002", 00:16:21.682 "admin_qpairs": 0, 00:16:21.682 "io_qpairs": 0, 00:16:21.682 "current_admin_qpairs": 0, 00:16:21.682 "current_io_qpairs": 0, 00:16:21.682 "pending_bdev_io": 0, 00:16:21.682 "completed_nvme_io": 0, 00:16:21.682 "transports": [] 00:16:21.682 }, 00:16:21.682 { 00:16:21.682 "name": "nvmf_tgt_poll_group_003", 00:16:21.682 "admin_qpairs": 0, 00:16:21.682 "io_qpairs": 0, 00:16:21.682 "current_admin_qpairs": 0, 00:16:21.682 "current_io_qpairs": 0, 00:16:21.682 "pending_bdev_io": 0, 00:16:21.682 "completed_nvme_io": 0, 00:16:21.682 "transports": [] 00:16:21.682 } 00:16:21.682 ] 00:16:21.682 }' 00:16:21.682 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:21.682 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:21.682 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:21.682 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:21.682 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:21.682 11:59:55 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:21.939 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:21.939 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:21.939 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.939 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.939 [2024-12-05 11:59:55.908032] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:21.939 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.939 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:21.939 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.939 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.939 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.939 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:21.939 "tick_rate": 2100000000, 00:16:21.939 "poll_groups": [ 00:16:21.939 { 00:16:21.939 "name": "nvmf_tgt_poll_group_000", 00:16:21.939 "admin_qpairs": 0, 00:16:21.939 "io_qpairs": 0, 00:16:21.939 "current_admin_qpairs": 0, 00:16:21.939 "current_io_qpairs": 0, 00:16:21.939 "pending_bdev_io": 0, 00:16:21.939 "completed_nvme_io": 0, 00:16:21.939 "transports": [ 00:16:21.939 { 00:16:21.939 "trtype": "TCP" 00:16:21.939 } 00:16:21.939 ] 00:16:21.939 }, 00:16:21.939 { 00:16:21.939 "name": "nvmf_tgt_poll_group_001", 00:16:21.939 "admin_qpairs": 0, 00:16:21.939 "io_qpairs": 0, 00:16:21.939 "current_admin_qpairs": 0, 00:16:21.939 "current_io_qpairs": 0, 00:16:21.939 "pending_bdev_io": 0, 00:16:21.939 
"completed_nvme_io": 0, 00:16:21.939 "transports": [ 00:16:21.939 { 00:16:21.939 "trtype": "TCP" 00:16:21.939 } 00:16:21.939 ] 00:16:21.939 }, 00:16:21.939 { 00:16:21.939 "name": "nvmf_tgt_poll_group_002", 00:16:21.939 "admin_qpairs": 0, 00:16:21.939 "io_qpairs": 0, 00:16:21.939 "current_admin_qpairs": 0, 00:16:21.939 "current_io_qpairs": 0, 00:16:21.939 "pending_bdev_io": 0, 00:16:21.939 "completed_nvme_io": 0, 00:16:21.939 "transports": [ 00:16:21.939 { 00:16:21.939 "trtype": "TCP" 00:16:21.939 } 00:16:21.939 ] 00:16:21.939 }, 00:16:21.939 { 00:16:21.939 "name": "nvmf_tgt_poll_group_003", 00:16:21.939 "admin_qpairs": 0, 00:16:21.939 "io_qpairs": 0, 00:16:21.939 "current_admin_qpairs": 0, 00:16:21.939 "current_io_qpairs": 0, 00:16:21.939 "pending_bdev_io": 0, 00:16:21.939 "completed_nvme_io": 0, 00:16:21.939 "transports": [ 00:16:21.939 { 00:16:21.939 "trtype": "TCP" 00:16:21.939 } 00:16:21.939 ] 00:16:21.939 } 00:16:21.939 ] 00:16:21.939 }' 00:16:21.939 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:21.939 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:21.939 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:21.939 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:21.939 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:21.939 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:21.939 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:21.939 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:21.939 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:21.939 
11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:21.939 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:21.939 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:16:21.939 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:21.939 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:21.939 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.939 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.939 Malloc1 00:16:21.939 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.939 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:21.939 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.939 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.939 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.939 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:21.939 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.939 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.939 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.939 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:21.940 11:59:56 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.940 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.940 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.940 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:21.940 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.940 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.940 [2024-12-05 11:59:56.085990] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:21.940 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.940 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:21.940 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:21.940 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:21.940 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:21.940 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:16:21.940 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:21.940 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:21.940 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:21.940 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:21.940 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:21.940 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:21.940 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:21.940 [2024-12-05 11:59:56.114552] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:16:22.197 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:22.197 could not add new controller: failed to write to nvme-fabrics device 00:16:22.197 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:22.197 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:22.197 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:22.197 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:22.197 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:22.197 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.197 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.197 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.197 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:23.131 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:23.131 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:23.131 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:23.131 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:23.131 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:25.663 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:25.663 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:25.663 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:25.663 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:25.663 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:25.663 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 
00:16:25.663 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:25.663 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:25.663 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:25.663 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:25.663 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:25.663 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:25.663 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:25.663 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:25.663 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:25.663 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:25.663 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.663 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.663 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.663 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:25.663 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:25.663 11:59:59 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:25.663 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:25.663 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:25.663 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:25.663 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:25.663 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:25.663 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:25.663 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:25.663 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:25.663 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:25.664 [2024-12-05 11:59:59.538076] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:16:25.664 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:25.664 could not add new controller: failed to write to nvme-fabrics device 00:16:25.664 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:25.664 
11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:25.664 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:25.664 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:25.664 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:25.664 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.664 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.664 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.664 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:26.606 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:26.606 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:26.606 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:26.606 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:26.606 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:29.136 12:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:29.136 12:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:29.136 12:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:16:29.136 12:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:29.136 12:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:29.136 12:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:29.136 12:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:29.136 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:29.136 12:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:29.136 12:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:29.136 12:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:29.136 12:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:29.136 12:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:29.136 12:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:29.136 12:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:29.136 12:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:29.136 12:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.136 12:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.136 12:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.136 12:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:29.136 12:00:02 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:29.136 12:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:29.136 12:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.136 12:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.136 12:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.136 12:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:29.136 12:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.136 12:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.136 [2024-12-05 12:00:02.903456] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:29.136 12:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.136 12:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:29.136 12:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.136 12:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.136 12:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.136 12:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:29.136 12:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.136 12:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:16:29.136 12:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.136 12:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:30.071 12:00:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:30.071 12:00:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:30.071 12:00:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:30.071 12:00:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:30.071 12:00:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:31.975 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:31.975 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:31.975 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:31.975 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:31.975 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:31.975 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:31.975 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:32.233 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:32.233 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:32.233 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:32.233 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:32.233 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:32.233 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:32.233 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:32.233 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:32.233 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:32.234 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.234 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.234 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.234 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:32.234 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.234 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.234 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.234 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:32.234 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:32.234 
12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.234 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.234 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.234 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:32.234 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.234 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.234 [2024-12-05 12:00:06.302193] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:32.234 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.234 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:32.234 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.234 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.234 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.234 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:32.234 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.234 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.234 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.234 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:33.611 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:33.611 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:33.611 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:33.611 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:33.611 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:35.517 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:35.517 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:35.517 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:35.517 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:35.517 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:35.517 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:35.517 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:35.517 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:35.517 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:35.517 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:35.517 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:35.517 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:35.517 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:35.517 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:35.517 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:35.517 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:35.517 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.517 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.517 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.517 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:35.517 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.517 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.517 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.517 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:35.517 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:35.517 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.517 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.517 12:00:09 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.517 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:35.517 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.517 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.517 [2024-12-05 12:00:09.714400] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:35.776 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.776 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:35.776 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.776 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.776 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.776 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:35.776 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.776 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.776 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.776 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:36.711 12:00:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:36.711 12:00:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:36.711 12:00:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:36.711 12:00:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:36.711 12:00:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:39.243 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:39.243 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:39.243 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:39.243 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:39.243 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:39.243 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:39.243 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:39.243 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:39.243 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:39.243 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:39.243 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:39.243 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:39.243 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:39.243 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:39.243 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:39.243 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:39.243 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.243 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:39.243 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.243 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:39.243 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.243 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:39.243 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.243 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:39.243 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:39.243 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.243 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:39.243 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.243 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
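The `waitforserial` / `waitforserial_disconnect` polling visible throughout this log can be sketched as a standalone helper. This is a reconstruction from the xtrace output, not the exact `common/autotest_common.sh` source; `list_block_devices` is a hypothetical stub standing in for `lsblk -l -o NAME,SERIAL` so the sketch runs without real NVMe devices.

```shell
# Reconstructed sketch of the waitforserial polling loop seen in this log.
# list_block_devices is a hypothetical stub for `lsblk -l -o NAME,SERIAL`.
list_block_devices() { printf 'nvme0n1 SPDKISFASTANDAWESOME\n'; }

waitforserial() {
	local serial=$1 nvme_device_counter=${2:-1} i=0 nvme_devices=0
	while ((i++ <= 15)); do
		# Count block devices whose SERIAL column matches the expected serial.
		nvme_devices=$(list_block_devices | grep -c "$serial" || true)
		# Done once every expected namespace has shown up.
		((nvme_devices == nvme_device_counter)) && return 0
		sleep 1
	done
	return 1
}

waitforserial SPDKISFASTANDAWESOME && echo "serial found"
```

The real helper sleeps 2 seconds before the first poll (the `sleep 2` at `@1209` above) because the kernel needs a moment to enumerate the namespace after `nvme connect`.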
00:16:39.243 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.243 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:39.243 [2024-12-05 12:00:13.022794] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:39.243 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.243 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:39.243 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.243 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:39.243 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.243 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:39.243 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.243 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:39.243 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.244 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:40.207 12:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:40.207 12:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:40.207 12:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:16:40.207 12:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:40.207 12:00:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:42.105 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:42.105 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:42.105 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:42.105 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:42.105 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:42.105 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:42.105 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:42.105 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:42.105 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:42.105 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:42.105 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:42.105 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:42.105 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:42.105 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:42.105 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:16:42.105 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:42.105 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.105 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:42.363 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.363 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:42.363 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.363 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:42.363 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.363 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:42.363 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:42.363 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.363 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:42.363 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.363 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:42.363 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.363 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:42.363 [2024-12-05 12:00:16.332356] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:42.363 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.363 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:42.363 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.363 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:42.363 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.363 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:42.363 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.363 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:42.363 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.363 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:43.304 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:43.304 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:43.304 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:43.304 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:43.304 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:16:45.307 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:45.307 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:45.307 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:45.307 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:45.307 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:45.307 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:45.307 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:45.567 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.567 [2024-12-05 12:00:19.641963] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.567 [2024-12-05 12:00:19.690035] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:45.567 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.568 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.568 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.568 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:45.568 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.568 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.568 
12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.568 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:45.568 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.568 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.568 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.568 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:45.568 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.568 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.568 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.568 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:45.568 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:45.568 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.568 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.568 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.568 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:45.568 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.568 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:16:45.568 [2024-12-05 12:00:19.738180] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:45.568 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.568 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:45.568 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.568 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.568 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.568 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:45.568 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.568 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.568 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.568 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:45.568 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.568 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.827 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.827 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:45.827 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.827 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:16:45.827 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.827 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:45.827 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:45.827 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.828 [2024-12-05 12:00:19.786342] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.828 [2024-12-05 12:00:19.834496] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:16:45.828 "tick_rate": 2100000000, 00:16:45.828 "poll_groups": [ 00:16:45.828 { 00:16:45.828 "name": "nvmf_tgt_poll_group_000", 00:16:45.828 "admin_qpairs": 2, 00:16:45.828 "io_qpairs": 168, 00:16:45.828 "current_admin_qpairs": 0, 00:16:45.828 "current_io_qpairs": 0, 00:16:45.828 "pending_bdev_io": 0, 00:16:45.828 "completed_nvme_io": 296, 00:16:45.828 "transports": [ 00:16:45.828 { 00:16:45.828 "trtype": "TCP" 00:16:45.828 } 00:16:45.828 ] 00:16:45.828 }, 00:16:45.828 { 00:16:45.828 "name": "nvmf_tgt_poll_group_001", 00:16:45.828 "admin_qpairs": 2, 00:16:45.828 "io_qpairs": 168, 00:16:45.828 "current_admin_qpairs": 0, 00:16:45.828 "current_io_qpairs": 0, 00:16:45.828 "pending_bdev_io": 0, 00:16:45.828 "completed_nvme_io": 288, 00:16:45.828 "transports": [ 00:16:45.828 { 00:16:45.828 "trtype": "TCP" 00:16:45.828 } 00:16:45.828 ] 00:16:45.828 }, 00:16:45.828 { 00:16:45.828 "name": "nvmf_tgt_poll_group_002", 00:16:45.828 "admin_qpairs": 1, 00:16:45.828 "io_qpairs": 168, 00:16:45.828 "current_admin_qpairs": 0, 00:16:45.828 "current_io_qpairs": 0, 00:16:45.828 "pending_bdev_io": 0, 
00:16:45.828 "completed_nvme_io": 168, 00:16:45.828 "transports": [ 00:16:45.828 { 00:16:45.828 "trtype": "TCP" 00:16:45.828 } 00:16:45.828 ] 00:16:45.828 }, 00:16:45.828 { 00:16:45.828 "name": "nvmf_tgt_poll_group_003", 00:16:45.828 "admin_qpairs": 2, 00:16:45.828 "io_qpairs": 168, 00:16:45.828 "current_admin_qpairs": 0, 00:16:45.828 "current_io_qpairs": 0, 00:16:45.828 "pending_bdev_io": 0, 00:16:45.828 "completed_nvme_io": 270, 00:16:45.828 "transports": [ 00:16:45.828 { 00:16:45.828 "trtype": "TCP" 00:16:45.828 } 00:16:45.828 ] 00:16:45.828 } 00:16:45.828 ] 00:16:45.828 }' 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # nvmfcleanup 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@99 -- # sync 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # set +e 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # for i in {1..20} 00:16:45.828 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:16:45.828 rmmod nvme_tcp 00:16:45.828 rmmod nvme_fabrics 00:16:45.828 rmmod nvme_keyring 00:16:46.088 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:16:46.088 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # set -e 00:16:46.088 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # return 0 00:16:46.088 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # '[' -n 17087 ']' 00:16:46.088 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@337 -- # killprocess 17087 00:16:46.088 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 17087 ']' 00:16:46.088 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 17087 00:16:46.088 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:16:46.088 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:46.088 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 17087 00:16:46.088 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:46.088 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:46.088 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 17087' 00:16:46.088 killing process with pid 17087 00:16:46.088 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 17087 00:16:46.088 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 17087 00:16:46.088 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:16:46.088 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # nvmf_fini 00:16:46.088 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@264 -- # local dev 00:16:46.088 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@267 -- # remove_target_ns 00:16:46.088 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:16:46.088 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:16:46.088 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:16:48.644 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@268 -- # delete_main_bridge 00:16:48.644 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:16:48.644 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@130 -- # return 0 00:16:48.644 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:16:48.644 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:16:48.644 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:16:48.644 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:16:48.644 12:00:22 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:16:48.644 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:16:48.644 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:16:48.644 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:16:48.644 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:16:48.644 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:16:48.644 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:16:48.644 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:16:48.644 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:16:48.644 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:16:48.644 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:16:48.644 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:16:48.644 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:16:48.644 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@41 -- # _dev=0 00:16:48.644 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@41 -- # dev_map=() 00:16:48.644 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@284 -- # iptr 00:16:48.644 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@542 -- # iptables-save 00:16:48.644 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:16:48.644 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@542 -- # iptables-restore 00:16:48.644 00:16:48.644 real 
0m33.831s 00:16:48.644 user 1m42.545s 00:16:48.644 sys 0m6.585s 00:16:48.644 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:48.644 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:48.644 ************************************ 00:16:48.644 END TEST nvmf_rpc 00:16:48.644 ************************************ 00:16:48.644 12:00:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:48.644 12:00:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:48.644 12:00:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:48.644 12:00:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:48.644 ************************************ 00:16:48.644 START TEST nvmf_invalid 00:16:48.644 ************************************ 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:48.645 * Looking for test storage... 
00:16:48.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:48.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.645 --rc genhtml_branch_coverage=1 00:16:48.645 --rc 
genhtml_function_coverage=1 00:16:48.645 --rc genhtml_legend=1 00:16:48.645 --rc geninfo_all_blocks=1 00:16:48.645 --rc geninfo_unexecuted_blocks=1 00:16:48.645 00:16:48.645 ' 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:48.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.645 --rc genhtml_branch_coverage=1 00:16:48.645 --rc genhtml_function_coverage=1 00:16:48.645 --rc genhtml_legend=1 00:16:48.645 --rc geninfo_all_blocks=1 00:16:48.645 --rc geninfo_unexecuted_blocks=1 00:16:48.645 00:16:48.645 ' 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:48.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.645 --rc genhtml_branch_coverage=1 00:16:48.645 --rc genhtml_function_coverage=1 00:16:48.645 --rc genhtml_legend=1 00:16:48.645 --rc geninfo_all_blocks=1 00:16:48.645 --rc geninfo_unexecuted_blocks=1 00:16:48.645 00:16:48.645 ' 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:48.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.645 --rc genhtml_branch_coverage=1 00:16:48.645 --rc genhtml_function_coverage=1 00:16:48.645 --rc genhtml_legend=1 00:16:48.645 --rc geninfo_all_blocks=1 00:16:48.645 --rc geninfo_unexecuted_blocks=1 00:16:48.645 00:16:48.645 ' 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:48.645 12:00:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@50 -- # : 0 
00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:48.645 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:16:48.646 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:16:48.646 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:16:48.646 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:16:48.646 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@54 -- # have_pci_nics=0 00:16:48.646 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:48.646 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:48.646 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:48.646 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:16:48.646 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:16:48.646 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:16:48.646 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # '[' -z tcp ']' 
00:16:48.646 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:48.646 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # prepare_net_devs 00:16:48.646 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # local -g is_hw=no 00:16:48.646 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # remove_target_ns 00:16:48.646 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:16:48.646 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:16:48.646 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:16:48.646 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:16:48.646 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:16:48.646 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # xtrace_disable 00:16:48.646 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@131 -- # pci_devs=() 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@131 -- # local -a pci_devs 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@132 -- # pci_net_devs=() 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@133 -- # pci_drivers=() 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@133 -- # local -A pci_drivers 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@135 -- # net_devs=() 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@135 -- # local -ga net_devs 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@136 -- # e810=() 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@136 -- # local -ga e810 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@137 -- # x722=() 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@137 -- # local -ga x722 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@138 -- # mlx=() 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@138 -- # local -ga mlx 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@156 -- 
# mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:55.216 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:16:55.216 12:00:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:55.216 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # [[ up == up ]] 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:55.216 
12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:55.216 Found net devices under 0000:86:00.0: cvl_0_0 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:16:55.216 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # [[ up == up ]] 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:55.217 Found net devices under 0000:86:00.1: cvl_0_1 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # is_hw=yes 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:16:55.217 12:00:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@257 -- # create_target_ns 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@27 -- # local -gA dev_map 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@28 
-- # local -g _dev 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@44 -- # ips=() 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:16:55.217 12:00:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@11 -- # local val=167772161 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:16:55.217 10.0.0.1 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 
00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@11 -- # local val=167772162 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:16:55.217 10.0.0.2 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 
up' 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@38 -- # ping_ips 1 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:16:55.217 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@107 -- # local dev=initiator0 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # 
ip=10.0.0.1 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:16:55.218 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:55.218 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.394 ms 00:16:55.218 00:16:55.218 --- 10.0.0.1 ping statistics --- 00:16:55.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:55.218 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # get_net_dev target0 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@107 -- # local dev=target0 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:16:55.218 12:00:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:16:55.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:55.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:16:55.218 00:16:55.218 --- 10.0.0.2 ping statistics --- 00:16:55.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:55.218 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # (( pair++ )) 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # return 0 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@107 -- # local dev=initiator0 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@107 -- # local dev=initiator1 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # return 1 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # dev= 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@169 -- # return 0 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:55.218 12:00:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # get_net_dev target0 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@107 -- # local dev=target0 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:16:55.218 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:16:55.219 12:00:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:55.219 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:55.219 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # get_net_dev target1 00:16:55.219 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@107 -- # local dev=target1 00:16:55.219 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:16:55.219 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:16:55.219 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # return 1 00:16:55.219 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # dev= 00:16:55.219 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@169 -- # return 0 00:16:55.219 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:16:55.219 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:55.219 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:16:55.219 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:16:55.219 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:55.219 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:16:55.219 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:16:55.219 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:16:55.219 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:16:55.219 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:16:55.219 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:55.219 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # nvmfpid=25479 00:16:55.219 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:55.219 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # waitforlisten 25479 00:16:55.219 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 25479 ']' 00:16:55.219 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.219 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:55.219 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.219 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:55.219 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:55.219 [2024-12-05 12:00:28.784680] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:16:55.219 [2024-12-05 12:00:28.784726] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:55.219 [2024-12-05 12:00:28.865170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:55.219 [2024-12-05 12:00:28.907267] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:55.219 [2024-12-05 12:00:28.907305] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:55.219 [2024-12-05 12:00:28.907312] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:55.219 [2024-12-05 12:00:28.907319] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:55.219 [2024-12-05 12:00:28.907323] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:55.219 [2024-12-05 12:00:28.908923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:55.219 [2024-12-05 12:00:28.909031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:55.219 [2024-12-05 12:00:28.909141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.219 [2024-12-05 12:00:28.909142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:55.477 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:55.477 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:16:55.478 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:16:55.478 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:55.478 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:55.478 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:55.478 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:55.478 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode25105 00:16:55.737 [2024-12-05 12:00:29.830080] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:16:55.737 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:16:55.737 { 00:16:55.737 "nqn": "nqn.2016-06.io.spdk:cnode25105", 00:16:55.737 "tgt_name": "foobar", 00:16:55.737 "method": "nvmf_create_subsystem", 00:16:55.737 "req_id": 1 00:16:55.737 } 00:16:55.737 Got JSON-RPC error 
response 00:16:55.737 response: 00:16:55.737 { 00:16:55.737 "code": -32603, 00:16:55.737 "message": "Unable to find target foobar" 00:16:55.737 }' 00:16:55.737 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:16:55.737 { 00:16:55.737 "nqn": "nqn.2016-06.io.spdk:cnode25105", 00:16:55.737 "tgt_name": "foobar", 00:16:55.737 "method": "nvmf_create_subsystem", 00:16:55.737 "req_id": 1 00:16:55.737 } 00:16:55.737 Got JSON-RPC error response 00:16:55.737 response: 00:16:55.737 { 00:16:55.737 "code": -32603, 00:16:55.737 "message": "Unable to find target foobar" 00:16:55.737 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:16:55.737 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:16:55.737 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode27080 00:16:56.009 [2024-12-05 12:00:30.042855] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27080: invalid serial number 'SPDKISFASTANDAWESOME' 00:16:56.009 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:16:56.009 { 00:16:56.009 "nqn": "nqn.2016-06.io.spdk:cnode27080", 00:16:56.009 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:56.009 "method": "nvmf_create_subsystem", 00:16:56.009 "req_id": 1 00:16:56.009 } 00:16:56.009 Got JSON-RPC error response 00:16:56.009 response: 00:16:56.009 { 00:16:56.009 "code": -32602, 00:16:56.009 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:56.009 }' 00:16:56.009 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:16:56.009 { 00:16:56.009 "nqn": "nqn.2016-06.io.spdk:cnode27080", 00:16:56.009 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:56.009 "method": "nvmf_create_subsystem", 
00:16:56.009 "req_id": 1 00:16:56.009 } 00:16:56.009 Got JSON-RPC error response 00:16:56.009 response: 00:16:56.009 { 00:16:56.009 "code": -32602, 00:16:56.009 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:56.009 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:56.009 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:16:56.009 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode17342 00:16:56.269 [2024-12-05 12:00:30.267521] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17342: invalid model number 'SPDK_Controller' 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:16:56.269 { 00:16:56.269 "nqn": "nqn.2016-06.io.spdk:cnode17342", 00:16:56.269 "model_number": "SPDK_Controller\u001f", 00:16:56.269 "method": "nvmf_create_subsystem", 00:16:56.269 "req_id": 1 00:16:56.269 } 00:16:56.269 Got JSON-RPC error response 00:16:56.269 response: 00:16:56.269 { 00:16:56.269 "code": -32602, 00:16:56.269 "message": "Invalid MN SPDK_Controller\u001f" 00:16:56.269 }' 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:16:56.269 { 00:16:56.269 "nqn": "nqn.2016-06.io.spdk:cnode17342", 00:16:56.269 "model_number": "SPDK_Controller\u001f", 00:16:56.269 "method": "nvmf_create_subsystem", 00:16:56.269 "req_id": 1 00:16:56.269 } 00:16:56.269 Got JSON-RPC error response 00:16:56.269 response: 00:16:56.269 { 00:16:56.269 "code": -32602, 00:16:56.269 "message": "Invalid MN SPDK_Controller\u001f" 00:16:56.269 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.269 12:00:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:16:56.269 12:00:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:16:56.269 12:00:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:16:56.269 12:00:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.269 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:16:56.270 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:16:56.270 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:16:56.270 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.270 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.270 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:16:56.270 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:16:56.270 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:16:56.270 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.270 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.270 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:16:56.270 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:16:56.270 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:16:56.270 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.270 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.270 12:00:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:16:56.270 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:16:56.270 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:16:56.270 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.270 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.270 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:16:56.270 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:16:56.270 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:16:56.270 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.270 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.270 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:16:56.270 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:16:56.270 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:16:56.270 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.270 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.270 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:16:56.270 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:16:56.270 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:16:56.270 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.270 12:00:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.270 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ x == \- ]] 00:16:56.270 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'xj%V|E[{ rADUhk"A>G-Y' 00:16:56.270 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'xj%V|E[{ rADUhk"A>G-Y' nqn.2016-06.io.spdk:cnode5682 00:16:56.529 [2024-12-05 12:00:30.612663] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5682: invalid serial number 'xj%V|E[{ rADUhk"A>G-Y' 00:16:56.529 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:16:56.529 { 00:16:56.529 "nqn": "nqn.2016-06.io.spdk:cnode5682", 00:16:56.529 "serial_number": "xj%V|E[{ rADUhk\"A>G-Y", 00:16:56.529 "method": "nvmf_create_subsystem", 00:16:56.529 "req_id": 1 00:16:56.529 } 00:16:56.529 Got JSON-RPC error response 00:16:56.529 response: 00:16:56.529 { 00:16:56.529 "code": -32602, 00:16:56.529 "message": "Invalid SN xj%V|E[{ rADUhk\"A>G-Y" 00:16:56.529 }' 00:16:56.529 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:16:56.529 { 00:16:56.529 "nqn": "nqn.2016-06.io.spdk:cnode5682", 00:16:56.529 "serial_number": "xj%V|E[{ rADUhk\"A>G-Y", 00:16:56.529 "method": "nvmf_create_subsystem", 00:16:56.529 "req_id": 1 00:16:56.529 } 00:16:56.529 Got JSON-RPC error response 00:16:56.529 response: 00:16:56.529 { 00:16:56.529 "code": -32602, 00:16:56.529 "message": "Invalid SN xj%V|E[{ rADUhk\"A>G-Y" 00:16:56.530 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:16:56.530 12:00:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.530 12:00:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 
00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.530 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:16:56.790 
12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.790 12:00:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.790 12:00:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:16:56.790 12:00:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:16:56.790 12:00:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:16:56.790 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:16:56.791 12:00:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.791 12:00:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.791 12:00:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 5 == \- ]] 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '5l4F.:x_~Sa3XgDL?*M|0q0I_0mbkj@Q|HI "}[#6' 00:16:56.791 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '5l4F.:x_~Sa3XgDL?*M|0q0I_0mbkj@Q|HI "}[#6' nqn.2016-06.io.spdk:cnode5007 00:16:57.049 [2024-12-05 12:00:31.078192] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5007: invalid model number '5l4F.:x_~Sa3XgDL?*M|0q0I_0mbkj@Q|HI "}[#6' 00:16:57.049 12:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:16:57.049 { 00:16:57.049 "nqn": 
"nqn.2016-06.io.spdk:cnode5007", 00:16:57.049 "model_number": "5l4F.:x_~Sa3XgDL?*M|0q0I_0mbkj@Q|HI \"}[#6", 00:16:57.049 "method": "nvmf_create_subsystem", 00:16:57.049 "req_id": 1 00:16:57.049 } 00:16:57.049 Got JSON-RPC error response 00:16:57.049 response: 00:16:57.049 { 00:16:57.049 "code": -32602, 00:16:57.049 "message": "Invalid MN 5l4F.:x_~Sa3XgDL?*M|0q0I_0mbkj@Q|HI \"}[#6" 00:16:57.049 }' 00:16:57.049 12:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:16:57.049 { 00:16:57.049 "nqn": "nqn.2016-06.io.spdk:cnode5007", 00:16:57.049 "model_number": "5l4F.:x_~Sa3XgDL?*M|0q0I_0mbkj@Q|HI \"}[#6", 00:16:57.049 "method": "nvmf_create_subsystem", 00:16:57.049 "req_id": 1 00:16:57.049 } 00:16:57.049 Got JSON-RPC error response 00:16:57.049 response: 00:16:57.049 { 00:16:57.049 "code": -32602, 00:16:57.049 "message": "Invalid MN 5l4F.:x_~Sa3XgDL?*M|0q0I_0mbkj@Q|HI \"}[#6" 00:16:57.049 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:57.049 12:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:16:57.308 [2024-12-05 12:00:31.278947] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:57.308 12:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:16:57.566 12:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a 10.0.0.1 -s 4421 00:16:57.566 [2024-12-05 12:00:31.704311] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:16:57.566 12:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # out='request: 00:16:57.566 { 00:16:57.566 "nqn": 
"nqn.2016-06.io.spdk:cnode", 00:16:57.566 "listen_address": { 00:16:57.566 "trtype": "tcp", 00:16:57.566 "traddr": "10.0.0.1", 00:16:57.566 "trsvcid": "4421" 00:16:57.566 }, 00:16:57.566 "method": "nvmf_subsystem_remove_listener", 00:16:57.566 "req_id": 1 00:16:57.566 } 00:16:57.566 Got JSON-RPC error response 00:16:57.566 response: 00:16:57.566 { 00:16:57.566 "code": -32602, 00:16:57.566 "message": "Invalid parameters" 00:16:57.566 }' 00:16:57.566 12:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@65 -- # [[ request: 00:16:57.566 { 00:16:57.566 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:57.566 "listen_address": { 00:16:57.566 "trtype": "tcp", 00:16:57.566 "traddr": "10.0.0.1", 00:16:57.566 "trsvcid": "4421" 00:16:57.566 }, 00:16:57.566 "method": "nvmf_subsystem_remove_listener", 00:16:57.566 "req_id": 1 00:16:57.566 } 00:16:57.566 Got JSON-RPC error response 00:16:57.566 response: 00:16:57.566 { 00:16:57.566 "code": -32602, 00:16:57.566 "message": "Invalid parameters" 00:16:57.566 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:16:57.566 12:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23309 -i 0 00:16:57.825 [2024-12-05 12:00:31.908970] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23309: invalid cntlid range [0-65519] 00:16:57.825 12:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@68 -- # out='request: 00:16:57.825 { 00:16:57.825 "nqn": "nqn.2016-06.io.spdk:cnode23309", 00:16:57.825 "min_cntlid": 0, 00:16:57.825 "method": "nvmf_create_subsystem", 00:16:57.825 "req_id": 1 00:16:57.825 } 00:16:57.825 Got JSON-RPC error response 00:16:57.825 response: 00:16:57.825 { 00:16:57.825 "code": -32602, 00:16:57.825 "message": "Invalid cntlid range [0-65519]" 00:16:57.825 }' 00:16:57.825 12:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@69 -- # [[ request: 00:16:57.825 { 00:16:57.825 "nqn": "nqn.2016-06.io.spdk:cnode23309", 00:16:57.825 "min_cntlid": 0, 00:16:57.825 "method": "nvmf_create_subsystem", 00:16:57.825 "req_id": 1 00:16:57.825 } 00:16:57.825 Got JSON-RPC error response 00:16:57.825 response: 00:16:57.825 { 00:16:57.825 "code": -32602, 00:16:57.825 "message": "Invalid cntlid range [0-65519]" 00:16:57.825 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:57.825 12:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31534 -i 65520 00:16:58.084 [2024-12-05 12:00:32.109664] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31534: invalid cntlid range [65520-65519] 00:16:58.084 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # out='request: 00:16:58.084 { 00:16:58.084 "nqn": "nqn.2016-06.io.spdk:cnode31534", 00:16:58.084 "min_cntlid": 65520, 00:16:58.084 "method": "nvmf_create_subsystem", 00:16:58.084 "req_id": 1 00:16:58.084 } 00:16:58.084 Got JSON-RPC error response 00:16:58.084 response: 00:16:58.084 { 00:16:58.084 "code": -32602, 00:16:58.084 "message": "Invalid cntlid range [65520-65519]" 00:16:58.084 }' 00:16:58.084 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@71 -- # [[ request: 00:16:58.084 { 00:16:58.084 "nqn": "nqn.2016-06.io.spdk:cnode31534", 00:16:58.084 "min_cntlid": 65520, 00:16:58.084 "method": "nvmf_create_subsystem", 00:16:58.084 "req_id": 1 00:16:58.084 } 00:16:58.084 Got JSON-RPC error response 00:16:58.084 response: 00:16:58.084 { 00:16:58.084 "code": -32602, 00:16:58.084 "message": "Invalid cntlid range [65520-65519]" 00:16:58.084 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:58.084 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@72 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4041 -I 0 00:16:58.342 [2024-12-05 12:00:32.310315] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4041: invalid cntlid range [1-0] 00:16:58.342 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@72 -- # out='request: 00:16:58.342 { 00:16:58.342 "nqn": "nqn.2016-06.io.spdk:cnode4041", 00:16:58.342 "max_cntlid": 0, 00:16:58.342 "method": "nvmf_create_subsystem", 00:16:58.342 "req_id": 1 00:16:58.342 } 00:16:58.342 Got JSON-RPC error response 00:16:58.342 response: 00:16:58.342 { 00:16:58.342 "code": -32602, 00:16:58.342 "message": "Invalid cntlid range [1-0]" 00:16:58.342 }' 00:16:58.342 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # [[ request: 00:16:58.342 { 00:16:58.342 "nqn": "nqn.2016-06.io.spdk:cnode4041", 00:16:58.342 "max_cntlid": 0, 00:16:58.342 "method": "nvmf_create_subsystem", 00:16:58.342 "req_id": 1 00:16:58.342 } 00:16:58.342 Got JSON-RPC error response 00:16:58.342 response: 00:16:58.342 { 00:16:58.342 "code": -32602, 00:16:58.342 "message": "Invalid cntlid range [1-0]" 00:16:58.342 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:58.342 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10783 -I 65520 00:16:58.342 [2024-12-05 12:00:32.510955] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10783: invalid cntlid range [1-65520] 00:16:58.600 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # out='request: 00:16:58.600 { 00:16:58.600 "nqn": "nqn.2016-06.io.spdk:cnode10783", 00:16:58.600 "max_cntlid": 65520, 00:16:58.600 "method": "nvmf_create_subsystem", 00:16:58.600 "req_id": 1 00:16:58.600 } 00:16:58.600 Got JSON-RPC error response 
00:16:58.600 response: 00:16:58.600 { 00:16:58.600 "code": -32602, 00:16:58.600 "message": "Invalid cntlid range [1-65520]" 00:16:58.600 }' 00:16:58.600 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # [[ request: 00:16:58.600 { 00:16:58.600 "nqn": "nqn.2016-06.io.spdk:cnode10783", 00:16:58.600 "max_cntlid": 65520, 00:16:58.600 "method": "nvmf_create_subsystem", 00:16:58.600 "req_id": 1 00:16:58.600 } 00:16:58.600 Got JSON-RPC error response 00:16:58.600 response: 00:16:58.600 { 00:16:58.600 "code": -32602, 00:16:58.600 "message": "Invalid cntlid range [1-65520]" 00:16:58.600 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:58.600 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22525 -i 6 -I 5 00:16:58.600 [2024-12-05 12:00:32.707634] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22525: invalid cntlid range [6-5] 00:16:58.600 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # out='request: 00:16:58.600 { 00:16:58.600 "nqn": "nqn.2016-06.io.spdk:cnode22525", 00:16:58.600 "min_cntlid": 6, 00:16:58.600 "max_cntlid": 5, 00:16:58.600 "method": "nvmf_create_subsystem", 00:16:58.600 "req_id": 1 00:16:58.600 } 00:16:58.600 Got JSON-RPC error response 00:16:58.600 response: 00:16:58.600 { 00:16:58.600 "code": -32602, 00:16:58.600 "message": "Invalid cntlid range [6-5]" 00:16:58.600 }' 00:16:58.600 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # [[ request: 00:16:58.600 { 00:16:58.600 "nqn": "nqn.2016-06.io.spdk:cnode22525", 00:16:58.600 "min_cntlid": 6, 00:16:58.600 "max_cntlid": 5, 00:16:58.600 "method": "nvmf_create_subsystem", 00:16:58.600 "req_id": 1 00:16:58.600 } 00:16:58.600 Got JSON-RPC error response 00:16:58.600 response: 00:16:58.600 { 00:16:58.600 "code": -32602, 00:16:58.600 "message": 
"Invalid cntlid range [6-5]" 00:16:58.600 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:58.600 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:16:58.859 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@82 -- # out='request: 00:16:58.859 { 00:16:58.859 "name": "foobar", 00:16:58.859 "method": "nvmf_delete_target", 00:16:58.859 "req_id": 1 00:16:58.859 } 00:16:58.859 Got JSON-RPC error response 00:16:58.859 response: 00:16:58.859 { 00:16:58.859 "code": -32602, 00:16:58.859 "message": "The specified target doesn'\''t exist, cannot delete it." 00:16:58.859 }' 00:16:58.859 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # [[ request: 00:16:58.859 { 00:16:58.859 "name": "foobar", 00:16:58.859 "method": "nvmf_delete_target", 00:16:58.859 "req_id": 1 00:16:58.859 } 00:16:58.859 Got JSON-RPC error response 00:16:58.859 response: 00:16:58.859 { 00:16:58.859 "code": -32602, 00:16:58.859 "message": "The specified target doesn't exist, cannot delete it." 
00:16:58.859 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:16:58.859 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:16:58.859 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@86 -- # nvmftestfini 00:16:58.859 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # nvmfcleanup 00:16:58.859 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@99 -- # sync 00:16:58.859 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:16:58.859 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@102 -- # set +e 00:16:58.859 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@103 -- # for i in {1..20} 00:16:58.859 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:16:58.859 rmmod nvme_tcp 00:16:58.859 rmmod nvme_fabrics 00:16:58.859 rmmod nvme_keyring 00:16:58.859 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:16:58.859 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # set -e 00:16:58.859 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # return 0 00:16:58.859 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # '[' -n 25479 ']' 00:16:58.859 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@337 -- # killprocess 25479 00:16:58.859 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 25479 ']' 00:16:58.859 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 25479 00:16:58.859 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:16:58.859 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:58.859 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 25479 00:16:58.859 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:58.859 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:58.859 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 25479' 00:16:58.859 killing process with pid 25479 00:16:58.859 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 25479 00:16:58.859 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 25479 00:16:59.139 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:16:59.139 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # nvmf_fini 00:16:59.139 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@264 -- # local dev 00:16:59.139 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@267 -- # remove_target_ns 00:16:59.139 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:16:59.139 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:16:59.139 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:01.044 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@268 -- # delete_main_bridge 00:17:01.044 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:17:01.044 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@130 -- # return 0 00:17:01.045 12:00:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:17:01.045 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:17:01.045 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:17:01.045 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:17:01.045 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:17:01.045 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:17:01.045 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:17:01.045 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:17:01.045 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:17:01.045 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:17:01.045 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:17:01.045 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:17:01.045 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:17:01.045 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:17:01.045 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:17:01.045 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:17:01.046 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:17:01.046 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@41 -- # _dev=0 00:17:01.046 
12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@41 -- # dev_map=() 00:17:01.046 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@284 -- # iptr 00:17:01.046 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@542 -- # iptables-save 00:17:01.046 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:17:01.046 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@542 -- # iptables-restore 00:17:01.046 00:17:01.046 real 0m12.793s 00:17:01.046 user 0m21.382s 00:17:01.046 sys 0m5.489s 00:17:01.046 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:01.046 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:01.046 ************************************ 00:17:01.046 END TEST nvmf_invalid 00:17:01.046 ************************************ 00:17:01.309 12:00:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:01.309 12:00:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:01.309 12:00:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:01.309 12:00:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:01.309 ************************************ 00:17:01.309 START TEST nvmf_connect_stress 00:17:01.309 ************************************ 00:17:01.309 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:01.309 * Looking for test storage... 
00:17:01.309 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:01.309 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:01.309 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:17:01.309 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:01.309 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:01.309 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:01.309 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:01.309 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:01.309 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:01.309 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:01.309 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:01.309 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:01.309 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:01.309 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:01.309 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:01.309 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:01.309 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:01.309 12:00:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:01.309 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:01.309 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:01.309 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:01.309 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:01.309 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:01.309 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:01.309 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:01.309 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:01.309 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:01.309 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:01.309 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:01.309 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:01.309 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:01.309 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:01.309 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:01.309 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:01.309 12:00:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:01.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.309 --rc genhtml_branch_coverage=1 00:17:01.309 --rc genhtml_function_coverage=1 00:17:01.309 --rc genhtml_legend=1 00:17:01.309 --rc geninfo_all_blocks=1 00:17:01.309 --rc geninfo_unexecuted_blocks=1 00:17:01.309 00:17:01.309 ' 00:17:01.309 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:01.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.309 --rc genhtml_branch_coverage=1 00:17:01.309 --rc genhtml_function_coverage=1 00:17:01.309 --rc genhtml_legend=1 00:17:01.309 --rc geninfo_all_blocks=1 00:17:01.309 --rc geninfo_unexecuted_blocks=1 00:17:01.309 00:17:01.309 ' 00:17:01.309 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:01.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.309 --rc genhtml_branch_coverage=1 00:17:01.309 --rc genhtml_function_coverage=1 00:17:01.309 --rc genhtml_legend=1 00:17:01.309 --rc geninfo_all_blocks=1 00:17:01.309 --rc geninfo_unexecuted_blocks=1 00:17:01.309 00:17:01.309 ' 00:17:01.309 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:01.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.309 --rc genhtml_branch_coverage=1 00:17:01.309 --rc genhtml_function_coverage=1 00:17:01.310 --rc genhtml_legend=1 00:17:01.310 --rc geninfo_all_blocks=1 00:17:01.310 --rc geninfo_unexecuted_blocks=1 00:17:01.310 00:17:01.310 ' 00:17:01.310 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:01.310 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:17:01.310 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:01.310 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:01.310 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:01.310 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:01.310 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:01.310 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:17:01.310 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:01.310 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:17:01.310 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:01.310 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:17:01.310 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:01.310 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:17:01.310 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:17:01.310 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:01.310 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:01.310 12:00:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:01.310 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:01.310 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:01.310 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:01.310 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.310 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.310 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.310 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:01.310 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.310 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:17:01.310 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:17:01.310 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:01.310 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:17:01.310 12:00:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@50 -- # : 0 00:17:01.310 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:17:01.310 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:17:01.310 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:17:01.310 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:01.310 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:01.310 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:17:01.310 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:17:01.310 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:17:01.310 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:17:01.310 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@54 -- # have_pci_nics=0 00:17:01.568 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:01.568 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:17:01.568 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:01.568 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # prepare_net_devs 00:17:01.568 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # local -g is_hw=no 00:17:01.568 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # remove_target_ns 00:17:01.568 12:00:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:01.568 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:01.568 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:01.568 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:17:01.568 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:17:01.568 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # xtrace_disable 00:17:01.568 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@131 -- # pci_devs=() 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@131 -- # local -a pci_devs 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@132 -- # pci_net_devs=() 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@133 -- # pci_drivers=() 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@133 -- # local -A pci_drivers 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@135 -- # net_devs=() 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@135 -- # local -ga net_devs 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@136 -- # e810=() 
00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@136 -- # local -ga e810 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@137 -- # x722=() 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@137 -- # local -ga x722 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@138 -- # mlx=() 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@138 -- # local -ga mlx 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:08.139 12:00:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:08.139 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 
00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:08.139 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:08.139 Found net devices under 0000:86:00.0: cvl_0_0 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:08.139 Found net devices under 0000:86:00.1: cvl_0_1 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:17:08.139 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # is_hw=yes 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:17:08.140 
12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@257 -- # create_target_ns 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@25 -- # local no=1 
type=phy transport=tcp ip_pool=0x0a000001 max 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@27 -- # local -gA dev_map 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@28 -- # local -g _dev 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@44 -- # ips=() 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 
00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@11 -- # local val=167772161 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@210 
-- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:17:08.140 10.0.0.1 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@11 -- # local val=167772162 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:17:08.140 12:00:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:17:08.140 10.0.0.2 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i 
cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@38 -- # ping_ips 1 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/setup.sh@107 -- # local dev=initiator0 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:17:08.140 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:17:08.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:08.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.363 ms 00:17:08.141 00:17:08.141 --- 10.0.0.1 ping statistics --- 00:17:08.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.141 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # get_net_dev target0 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@107 -- # local dev=target0 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # ip 
netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:17:08.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:08.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:17:08.141 00:17:08.141 --- 10.0.0.2 ping statistics --- 00:17:08.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.141 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # (( pair++ )) 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # return 0 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/setup.sh@107 -- # local dev=initiator0 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/setup.sh@107 -- # local dev=initiator1 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # return 1 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # dev= 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@169 -- # return 0 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # get_net_dev target0 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@107 -- # local dev=target0 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:17:08.141 12:00:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # get_net_dev target1 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@107 -- # local dev=target1 00:17:08.141 
12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # return 1 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # dev= 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@169 -- # return 0 00:17:08.141 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # nvmfpid=29991 00:17:08.142 12:00:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # waitforlisten 29991 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 29991 ']' 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:08.142 [2024-12-05 12:00:41.687711] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:17:08.142 [2024-12-05 12:00:41.687764] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:08.142 [2024-12-05 12:00:41.765815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:08.142 [2024-12-05 12:00:41.808238] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:08.142 [2024-12-05 12:00:41.808274] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:08.142 [2024-12-05 12:00:41.808281] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:08.142 [2024-12-05 12:00:41.808287] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:08.142 [2024-12-05 12:00:41.808292] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:08.142 [2024-12-05 12:00:41.809748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:08.142 [2024-12-05 12:00:41.809852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:08.142 [2024-12-05 12:00:41.809854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:17:08.142 [2024-12-05 12:00:41.946493] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:08.142 [2024-12-05 12:00:41.966722] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:08.142 NULL1 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=30073 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.142 12:00:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.142 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.142 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.142 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # 
for i in $(seq 1 20) 00:17:08.142 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.142 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.142 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.142 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.142 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.142 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.142 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.142 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.142 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.142 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.142 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.142 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.142 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.142 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.142 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.142 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.142 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:17:08.142 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.142 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.142 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.142 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.142 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.142 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.142 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.142 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.142 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.142 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.142 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.142 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.142 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.142 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.142 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 30073 00:17:08.142 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:08.142 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.142 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:08.401 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.401 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 30073 00:17:08.401 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:08.401 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.401 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:08.661 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.661 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 30073 00:17:08.661 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:08.661 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.661 12:00:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:08.919 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.919 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 30073 00:17:08.919 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:08.919 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.919 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:09.179 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.179 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 30073 00:17:09.179 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:09.179 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.179 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:09.747 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.747 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 30073 00:17:09.747 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:09.747 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.747 12:00:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:10.006 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.006 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 30073 00:17:10.006 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:10.006 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.006 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:10.265 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.265 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 30073 00:17:10.265 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:17:10.265 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.265 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:10.524 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.524 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 30073 00:17:10.524 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:10.524 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.524 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:11.090 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.090 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 30073 00:17:11.090 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:11.090 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.090 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:11.348 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.348 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 30073 00:17:11.348 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:11.348 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.348 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:17:11.606 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.606 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 30073 00:17:11.606 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:11.606 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.606 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:11.865 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.865 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 30073 00:17:11.865 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:11.865 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.865 12:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:12.123 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.123 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 30073 00:17:12.123 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:12.123 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.123 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:12.690 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.690 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 30073 00:17:12.690 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:12.690 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.690 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:12.948 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.948 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 30073 00:17:12.948 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:12.948 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.948 12:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:13.207 12:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.207 12:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 30073 00:17:13.207 12:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:13.207 12:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.207 12:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:13.465 12:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.465 12:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 30073 00:17:13.465 12:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:13.465 12:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.465 12:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.032 12:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.032 12:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 30073 00:17:14.032 12:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:14.032 12:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.032 12:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.290 12:00:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.290 12:00:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 30073 00:17:14.290 12:00:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:14.290 12:00:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.290 12:00:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.549 12:00:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.549 12:00:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 30073 00:17:14.549 12:00:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:14.549 12:00:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.549 12:00:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.808 12:00:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.808 12:00:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 30073 00:17:14.808 12:00:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:14.808 12:00:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.808 12:00:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.066 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.066 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 30073 00:17:15.066 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:15.066 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.066 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.633 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.633 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 30073 00:17:15.633 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:15.633 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.633 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.892 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.893 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 30073 00:17:15.893 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:17:15.893 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.893 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.152 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.152 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 30073 00:17:16.152 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:16.152 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.152 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.411 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.411 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 30073 00:17:16.411 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:16.411 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.411 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.670 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.670 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 30073 00:17:16.670 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:16.670 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.670 12:00:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:17:17.236 12:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.236 12:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 30073 00:17:17.236 12:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.236 12:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.236 12:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.493 12:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.493 12:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 30073 00:17:17.493 12:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.493 12:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.493 12:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.751 12:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.751 12:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 30073 00:17:17.751 12:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.751 12:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.751 12:00:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.009 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:18.009 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:18.009 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 30073 00:17:18.009 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (30073) - No such process 00:17:18.009 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 30073 00:17:18.009 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:18.009 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:18.009 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:18.009 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # nvmfcleanup 00:17:18.009 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@99 -- # sync 00:17:18.009 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:17:18.009 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # set +e 00:17:18.009 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # for i in {1..20} 00:17:18.009 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:17:18.009 rmmod nvme_tcp 00:17:18.009 rmmod nvme_fabrics 00:17:18.268 rmmod nvme_keyring 00:17:18.268 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:17:18.268 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # set -e 00:17:18.268 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # return 0 00:17:18.268 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@336 -- # '[' -n 29991 ']' 00:17:18.268 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@337 -- # killprocess 29991 00:17:18.268 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 29991 ']' 00:17:18.268 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 29991 00:17:18.268 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:17:18.268 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:18.268 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 29991 00:17:18.268 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:18.268 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:18.268 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 29991' 00:17:18.268 killing process with pid 29991 00:17:18.268 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 29991 00:17:18.268 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 29991 00:17:18.268 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:17:18.268 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # nvmf_fini 00:17:18.268 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@264 -- # local dev 00:17:18.268 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@267 -- # remove_target_ns 00:17:18.268 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@323 -- # 
xtrace_disable_per_cmd _remove_target_ns 00:17:18.268 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:18.268 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:20.852 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@268 -- # delete_main_bridge 00:17:20.852 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:17:20.852 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@130 -- # return 0 00:17:20.852 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:17:20.852 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:17:20.852 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:17:20.852 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:17:20.852 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:17:20.852 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:17:20.852 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:17:20.852 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:17:20.852 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:17:20.852 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:17:20.852 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:17:20.852 
12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:17:20.852 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:17:20.852 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:17:20.852 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:17:20.852 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:17:20.852 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:17:20.852 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@41 -- # _dev=0 00:17:20.852 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@41 -- # dev_map=() 00:17:20.852 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@284 -- # iptr 00:17:20.852 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@542 -- # iptables-save 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@542 -- # iptables-restore 00:17:20.853 00:17:20.853 real 0m19.237s 00:17:20.853 user 0m39.586s 00:17:20.853 sys 0m8.636s 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:20.853 ************************************ 00:17:20.853 END TEST nvmf_connect_stress 00:17:20.853 ************************************ 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:20.853 ************************************ 00:17:20.853 START TEST nvmf_fused_ordering 00:17:20.853 ************************************ 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:20.853 * Looking for test storage... 00:17:20.853 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:17:20.853 12:00:54 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:20.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.853 --rc genhtml_branch_coverage=1 00:17:20.853 --rc genhtml_function_coverage=1 00:17:20.853 --rc genhtml_legend=1 00:17:20.853 --rc 
geninfo_all_blocks=1 00:17:20.853 --rc geninfo_unexecuted_blocks=1 00:17:20.853 00:17:20.853 ' 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:20.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.853 --rc genhtml_branch_coverage=1 00:17:20.853 --rc genhtml_function_coverage=1 00:17:20.853 --rc genhtml_legend=1 00:17:20.853 --rc geninfo_all_blocks=1 00:17:20.853 --rc geninfo_unexecuted_blocks=1 00:17:20.853 00:17:20.853 ' 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:20.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.853 --rc genhtml_branch_coverage=1 00:17:20.853 --rc genhtml_function_coverage=1 00:17:20.853 --rc genhtml_legend=1 00:17:20.853 --rc geninfo_all_blocks=1 00:17:20.853 --rc geninfo_unexecuted_blocks=1 00:17:20.853 00:17:20.853 ' 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:20.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.853 --rc genhtml_branch_coverage=1 00:17:20.853 --rc genhtml_function_coverage=1 00:17:20.853 --rc genhtml_legend=1 00:17:20.853 --rc geninfo_all_blocks=1 00:17:20.853 --rc geninfo_unexecuted_blocks=1 00:17:20.853 00:17:20.853 ' 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.853 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:20.854 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.854 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:17:20.854 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:17:20.854 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:20.854 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:17:20.854 12:00:54 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@50 -- # : 0 00:17:20.854 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:17:20.854 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:17:20.854 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:17:20.854 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:20.854 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:20.854 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:17:20.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:17:20.854 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:17:20.854 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:17:20.854 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@54 -- # have_pci_nics=0 00:17:20.854 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:20.854 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:17:20.854 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:20.854 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # prepare_net_devs 00:17:20.854 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # local -g is_hw=no 00:17:20.854 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # remove_target_ns 00:17:20.854 12:00:54 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:20.854 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:20.854 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:20.854 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:17:20.854 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:17:20.854 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # xtrace_disable 00:17:20.854 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:27.421 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:27.421 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@131 -- # pci_devs=() 00:17:27.421 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@131 -- # local -a pci_devs 00:17:27.421 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@132 -- # pci_net_devs=() 00:17:27.421 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:17:27.421 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@133 -- # pci_drivers=() 00:17:27.421 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@133 -- # local -A pci_drivers 00:17:27.421 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@135 -- # net_devs=() 00:17:27.421 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@135 -- # local -ga net_devs 00:17:27.421 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@136 -- # e810=() 
00:17:27.421 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@136 -- # local -ga e810 00:17:27.421 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@137 -- # x722=() 00:17:27.421 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@137 -- # local -ga x722 00:17:27.421 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@138 -- # mlx=() 00:17:27.421 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@138 -- # local -ga mlx 00:17:27.421 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:27.421 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:27.421 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:27.421 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:27.421 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:27.421 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:27.421 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:27.421 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:27.421 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:27.421 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:27.421 12:01:00 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:27.421 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:27.421 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:17:27.421 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:27.422 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 
00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:27.422 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # [[ up == up ]] 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:27.422 Found net devices under 0000:86:00.0: cvl_0_0 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # [[ up == up ]] 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:27.422 Found net devices under 0000:86:00.1: cvl_0_1 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # is_hw=yes 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:17:27.422 
12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@257 -- # create_target_ns 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@25 -- # local no=1 
type=phy transport=tcp ip_pool=0x0a000001 max 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@27 -- # local -gA dev_map 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@28 -- # local -g _dev 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@44 -- # ips=() 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 
00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@11 -- # local val=167772161 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@210 
-- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:17:27.422 10.0.0.1 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@11 -- # local val=167772162 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:17:27.422 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:17:27.423 12:01:00 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:17:27.423 10.0.0.2 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i 
cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@38 -- # ping_ips 1 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/setup.sh@107 -- # local dev=initiator0 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:17:27.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:27.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:17:27.423 00:17:27.423 --- 10.0.0.1 ping statistics --- 00:17:27.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.423 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # get_net_dev target0 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@107 -- # local dev=target0 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # ip 
netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:17:27.423 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:27.423 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:17:27.423 00:17:27.423 --- 10.0.0.2 ping statistics --- 00:17:27.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.423 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # (( pair++ )) 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # return 0 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/setup.sh@107 -- # local dev=initiator0 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:27.423 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/setup.sh@107 -- # local dev=initiator1 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # return 1 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # dev= 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@169 -- # return 0 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # get_net_dev target0 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@107 -- # local dev=target0 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:17:27.424 12:01:00 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # get_net_dev target1 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@107 -- # local dev=target1 00:17:27.424 
12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # return 1 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # dev= 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@169 -- # return 0 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # nvmfpid=35256 00:17:27.424 12:01:00 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # waitforlisten 35256 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 35256 ']' 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:27.424 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:27.424 [2024-12-05 12:01:01.020319] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:17:27.424 [2024-12-05 12:01:01.020364] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:27.424 [2024-12-05 12:01:01.100653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.424 [2024-12-05 12:01:01.143098] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:27.424 [2024-12-05 12:01:01.143128] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:27.424 [2024-12-05 12:01:01.143135] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:27.424 [2024-12-05 12:01:01.143141] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:27.424 [2024-12-05 12:01:01.143146] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:27.424 [2024-12-05 12:01:01.143693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:27.424 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:27.424 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:17:27.424 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:17:27.424 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:27.424 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:27.424 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:27.424 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:27.424 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.424 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:27.424 [2024-12-05 12:01:01.280106] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:27.424 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.424 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:27.424 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.424 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:27.424 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.424 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:27.424 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.424 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:27.424 [2024-12-05 12:01:01.300289] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:27.424 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.424 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:27.424 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.424 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:27.424 NULL1 00:17:27.424 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.424 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:27.424 12:01:01 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.424 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:27.424 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.424 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:27.424 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.425 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:27.425 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.425 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:27.425 [2024-12-05 12:01:01.359704] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:17:27.425 [2024-12-05 12:01:01.359739] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid35408 ] 00:17:27.683 Attached to nqn.2016-06.io.spdk:cnode1 00:17:27.684 Namespace ID: 1 size: 1GB 00:17:27.684 fused_ordering(0) 00:17:27.684 fused_ordering(1) 00:17:27.684 fused_ordering(2) 00:17:27.684 fused_ordering(3) 00:17:27.684 fused_ordering(4) 00:17:27.684 fused_ordering(5) 00:17:27.684 fused_ordering(6) 00:17:27.684 fused_ordering(7) 00:17:27.684 fused_ordering(8) 00:17:27.684 fused_ordering(9) 00:17:27.684 fused_ordering(10) 00:17:27.684 fused_ordering(11) 00:17:27.684 fused_ordering(12) 00:17:27.684 fused_ordering(13) 00:17:27.684 fused_ordering(14) 00:17:27.684 fused_ordering(15) 00:17:27.684 fused_ordering(16) 00:17:27.684 fused_ordering(17) 00:17:27.684 fused_ordering(18) 00:17:27.684 fused_ordering(19) 00:17:27.684 fused_ordering(20) 00:17:27.684 fused_ordering(21) 00:17:27.684 fused_ordering(22) 00:17:27.684 fused_ordering(23) 00:17:27.684 fused_ordering(24) 00:17:27.684 fused_ordering(25) 00:17:27.684 fused_ordering(26) 00:17:27.684 fused_ordering(27) 00:17:27.684 fused_ordering(28) 00:17:27.684 fused_ordering(29) 00:17:27.684 fused_ordering(30) 00:17:27.684 fused_ordering(31) 00:17:27.684 fused_ordering(32) 00:17:27.684 fused_ordering(33) 00:17:27.684 fused_ordering(34) 00:17:27.684 fused_ordering(35) 00:17:27.684 fused_ordering(36) 00:17:27.684 fused_ordering(37) 00:17:27.684 fused_ordering(38) 00:17:27.684 fused_ordering(39) 00:17:27.684 fused_ordering(40) 00:17:27.684 fused_ordering(41) 00:17:27.684 fused_ordering(42) 00:17:27.684 fused_ordering(43) 00:17:27.684 fused_ordering(44) 00:17:27.684 fused_ordering(45) 00:17:27.684 fused_ordering(46) 00:17:27.684 fused_ordering(47) 00:17:27.684 fused_ordering(48) 00:17:27.684 fused_ordering(49) 00:17:27.684 
00:17:27.684 fused_ordering(50) ... 00:17:28.204 fused_ordering(535)
00:17:28.204 fused_ordering(536) 00:17:28.204 fused_ordering(537) 00:17:28.204 fused_ordering(538) 00:17:28.204 fused_ordering(539) 00:17:28.204 fused_ordering(540) 00:17:28.204 fused_ordering(541) 00:17:28.204 fused_ordering(542) 00:17:28.204 fused_ordering(543) 00:17:28.204 fused_ordering(544) 00:17:28.204 fused_ordering(545) 00:17:28.204 fused_ordering(546) 00:17:28.204 fused_ordering(547) 00:17:28.204 fused_ordering(548) 00:17:28.204 fused_ordering(549) 00:17:28.204 fused_ordering(550) 00:17:28.204 fused_ordering(551) 00:17:28.204 fused_ordering(552) 00:17:28.204 fused_ordering(553) 00:17:28.204 fused_ordering(554) 00:17:28.204 fused_ordering(555) 00:17:28.204 fused_ordering(556) 00:17:28.204 fused_ordering(557) 00:17:28.204 fused_ordering(558) 00:17:28.204 fused_ordering(559) 00:17:28.204 fused_ordering(560) 00:17:28.204 fused_ordering(561) 00:17:28.204 fused_ordering(562) 00:17:28.204 fused_ordering(563) 00:17:28.204 fused_ordering(564) 00:17:28.204 fused_ordering(565) 00:17:28.204 fused_ordering(566) 00:17:28.204 fused_ordering(567) 00:17:28.204 fused_ordering(568) 00:17:28.204 fused_ordering(569) 00:17:28.204 fused_ordering(570) 00:17:28.204 fused_ordering(571) 00:17:28.204 fused_ordering(572) 00:17:28.204 fused_ordering(573) 00:17:28.204 fused_ordering(574) 00:17:28.204 fused_ordering(575) 00:17:28.204 fused_ordering(576) 00:17:28.204 fused_ordering(577) 00:17:28.204 fused_ordering(578) 00:17:28.204 fused_ordering(579) 00:17:28.204 fused_ordering(580) 00:17:28.204 fused_ordering(581) 00:17:28.204 fused_ordering(582) 00:17:28.204 fused_ordering(583) 00:17:28.204 fused_ordering(584) 00:17:28.204 fused_ordering(585) 00:17:28.204 fused_ordering(586) 00:17:28.204 fused_ordering(587) 00:17:28.204 fused_ordering(588) 00:17:28.204 fused_ordering(589) 00:17:28.204 fused_ordering(590) 00:17:28.204 fused_ordering(591) 00:17:28.204 fused_ordering(592) 00:17:28.204 fused_ordering(593) 00:17:28.204 fused_ordering(594) 00:17:28.204 fused_ordering(595) 00:17:28.204 
fused_ordering(596) 00:17:28.204 fused_ordering(597) 00:17:28.204 fused_ordering(598) 00:17:28.204 fused_ordering(599) 00:17:28.204 fused_ordering(600) 00:17:28.204 fused_ordering(601) 00:17:28.204 fused_ordering(602) 00:17:28.204 fused_ordering(603) 00:17:28.204 fused_ordering(604) 00:17:28.204 fused_ordering(605) 00:17:28.204 fused_ordering(606) 00:17:28.204 fused_ordering(607) 00:17:28.204 fused_ordering(608) 00:17:28.204 fused_ordering(609) 00:17:28.204 fused_ordering(610) 00:17:28.204 fused_ordering(611) 00:17:28.204 fused_ordering(612) 00:17:28.204 fused_ordering(613) 00:17:28.204 fused_ordering(614) 00:17:28.204 fused_ordering(615) 00:17:28.463 fused_ordering(616) 00:17:28.463 fused_ordering(617) 00:17:28.463 fused_ordering(618) 00:17:28.463 fused_ordering(619) 00:17:28.463 fused_ordering(620) 00:17:28.463 fused_ordering(621) 00:17:28.463 fused_ordering(622) 00:17:28.463 fused_ordering(623) 00:17:28.463 fused_ordering(624) 00:17:28.463 fused_ordering(625) 00:17:28.463 fused_ordering(626) 00:17:28.463 fused_ordering(627) 00:17:28.463 fused_ordering(628) 00:17:28.463 fused_ordering(629) 00:17:28.463 fused_ordering(630) 00:17:28.463 fused_ordering(631) 00:17:28.463 fused_ordering(632) 00:17:28.463 fused_ordering(633) 00:17:28.463 fused_ordering(634) 00:17:28.463 fused_ordering(635) 00:17:28.463 fused_ordering(636) 00:17:28.463 fused_ordering(637) 00:17:28.463 fused_ordering(638) 00:17:28.463 fused_ordering(639) 00:17:28.463 fused_ordering(640) 00:17:28.463 fused_ordering(641) 00:17:28.463 fused_ordering(642) 00:17:28.463 fused_ordering(643) 00:17:28.463 fused_ordering(644) 00:17:28.463 fused_ordering(645) 00:17:28.463 fused_ordering(646) 00:17:28.463 fused_ordering(647) 00:17:28.463 fused_ordering(648) 00:17:28.463 fused_ordering(649) 00:17:28.463 fused_ordering(650) 00:17:28.463 fused_ordering(651) 00:17:28.463 fused_ordering(652) 00:17:28.463 fused_ordering(653) 00:17:28.463 fused_ordering(654) 00:17:28.463 fused_ordering(655) 00:17:28.463 fused_ordering(656) 
00:17:28.463 fused_ordering(657) 00:17:28.463 fused_ordering(658) 00:17:28.463 fused_ordering(659) 00:17:28.463 fused_ordering(660) 00:17:28.463 fused_ordering(661) 00:17:28.463 fused_ordering(662) 00:17:28.463 fused_ordering(663) 00:17:28.463 fused_ordering(664) 00:17:28.463 fused_ordering(665) 00:17:28.463 fused_ordering(666) 00:17:28.463 fused_ordering(667) 00:17:28.463 fused_ordering(668) 00:17:28.463 fused_ordering(669) 00:17:28.463 fused_ordering(670) 00:17:28.463 fused_ordering(671) 00:17:28.463 fused_ordering(672) 00:17:28.463 fused_ordering(673) 00:17:28.463 fused_ordering(674) 00:17:28.463 fused_ordering(675) 00:17:28.463 fused_ordering(676) 00:17:28.463 fused_ordering(677) 00:17:28.463 fused_ordering(678) 00:17:28.463 fused_ordering(679) 00:17:28.463 fused_ordering(680) 00:17:28.463 fused_ordering(681) 00:17:28.463 fused_ordering(682) 00:17:28.463 fused_ordering(683) 00:17:28.463 fused_ordering(684) 00:17:28.463 fused_ordering(685) 00:17:28.463 fused_ordering(686) 00:17:28.463 fused_ordering(687) 00:17:28.463 fused_ordering(688) 00:17:28.463 fused_ordering(689) 00:17:28.463 fused_ordering(690) 00:17:28.463 fused_ordering(691) 00:17:28.463 fused_ordering(692) 00:17:28.463 fused_ordering(693) 00:17:28.463 fused_ordering(694) 00:17:28.463 fused_ordering(695) 00:17:28.463 fused_ordering(696) 00:17:28.463 fused_ordering(697) 00:17:28.463 fused_ordering(698) 00:17:28.463 fused_ordering(699) 00:17:28.463 fused_ordering(700) 00:17:28.463 fused_ordering(701) 00:17:28.463 fused_ordering(702) 00:17:28.464 fused_ordering(703) 00:17:28.464 fused_ordering(704) 00:17:28.464 fused_ordering(705) 00:17:28.464 fused_ordering(706) 00:17:28.464 fused_ordering(707) 00:17:28.464 fused_ordering(708) 00:17:28.464 fused_ordering(709) 00:17:28.464 fused_ordering(710) 00:17:28.464 fused_ordering(711) 00:17:28.464 fused_ordering(712) 00:17:28.464 fused_ordering(713) 00:17:28.464 fused_ordering(714) 00:17:28.464 fused_ordering(715) 00:17:28.464 fused_ordering(716) 00:17:28.464 
fused_ordering(717) 00:17:28.464 fused_ordering(718) 00:17:28.464 fused_ordering(719) 00:17:28.464 fused_ordering(720) 00:17:28.464 fused_ordering(721) 00:17:28.464 fused_ordering(722) 00:17:28.464 fused_ordering(723) 00:17:28.464 fused_ordering(724) 00:17:28.464 fused_ordering(725) 00:17:28.464 fused_ordering(726) 00:17:28.464 fused_ordering(727) 00:17:28.464 fused_ordering(728) 00:17:28.464 fused_ordering(729) 00:17:28.464 fused_ordering(730) 00:17:28.464 fused_ordering(731) 00:17:28.464 fused_ordering(732) 00:17:28.464 fused_ordering(733) 00:17:28.464 fused_ordering(734) 00:17:28.464 fused_ordering(735) 00:17:28.464 fused_ordering(736) 00:17:28.464 fused_ordering(737) 00:17:28.464 fused_ordering(738) 00:17:28.464 fused_ordering(739) 00:17:28.464 fused_ordering(740) 00:17:28.464 fused_ordering(741) 00:17:28.464 fused_ordering(742) 00:17:28.464 fused_ordering(743) 00:17:28.464 fused_ordering(744) 00:17:28.464 fused_ordering(745) 00:17:28.464 fused_ordering(746) 00:17:28.464 fused_ordering(747) 00:17:28.464 fused_ordering(748) 00:17:28.464 fused_ordering(749) 00:17:28.464 fused_ordering(750) 00:17:28.464 fused_ordering(751) 00:17:28.464 fused_ordering(752) 00:17:28.464 fused_ordering(753) 00:17:28.464 fused_ordering(754) 00:17:28.464 fused_ordering(755) 00:17:28.464 fused_ordering(756) 00:17:28.464 fused_ordering(757) 00:17:28.464 fused_ordering(758) 00:17:28.464 fused_ordering(759) 00:17:28.464 fused_ordering(760) 00:17:28.464 fused_ordering(761) 00:17:28.464 fused_ordering(762) 00:17:28.464 fused_ordering(763) 00:17:28.464 fused_ordering(764) 00:17:28.464 fused_ordering(765) 00:17:28.464 fused_ordering(766) 00:17:28.464 fused_ordering(767) 00:17:28.464 fused_ordering(768) 00:17:28.464 fused_ordering(769) 00:17:28.464 fused_ordering(770) 00:17:28.464 fused_ordering(771) 00:17:28.464 fused_ordering(772) 00:17:28.464 fused_ordering(773) 00:17:28.464 fused_ordering(774) 00:17:28.464 fused_ordering(775) 00:17:28.464 fused_ordering(776) 00:17:28.464 fused_ordering(777) 
00:17:28.464 fused_ordering(778) 00:17:28.464 fused_ordering(779) 00:17:28.464 fused_ordering(780) 00:17:28.464 fused_ordering(781) 00:17:28.464 fused_ordering(782) 00:17:28.464 fused_ordering(783) 00:17:28.464 fused_ordering(784) 00:17:28.464 fused_ordering(785) 00:17:28.464 fused_ordering(786) 00:17:28.464 fused_ordering(787) 00:17:28.464 fused_ordering(788) 00:17:28.464 fused_ordering(789) 00:17:28.464 fused_ordering(790) 00:17:28.464 fused_ordering(791) 00:17:28.464 fused_ordering(792) 00:17:28.464 fused_ordering(793) 00:17:28.464 fused_ordering(794) 00:17:28.464 fused_ordering(795) 00:17:28.464 fused_ordering(796) 00:17:28.464 fused_ordering(797) 00:17:28.464 fused_ordering(798) 00:17:28.464 fused_ordering(799) 00:17:28.464 fused_ordering(800) 00:17:28.464 fused_ordering(801) 00:17:28.464 fused_ordering(802) 00:17:28.464 fused_ordering(803) 00:17:28.464 fused_ordering(804) 00:17:28.464 fused_ordering(805) 00:17:28.464 fused_ordering(806) 00:17:28.464 fused_ordering(807) 00:17:28.464 fused_ordering(808) 00:17:28.464 fused_ordering(809) 00:17:28.464 fused_ordering(810) 00:17:28.464 fused_ordering(811) 00:17:28.464 fused_ordering(812) 00:17:28.464 fused_ordering(813) 00:17:28.464 fused_ordering(814) 00:17:28.464 fused_ordering(815) 00:17:28.464 fused_ordering(816) 00:17:28.464 fused_ordering(817) 00:17:28.464 fused_ordering(818) 00:17:28.464 fused_ordering(819) 00:17:28.464 fused_ordering(820) 00:17:29.032 fused_ordering(821) 00:17:29.032 fused_ordering(822) 00:17:29.032 fused_ordering(823) 00:17:29.032 fused_ordering(824) 00:17:29.032 fused_ordering(825) 00:17:29.032 fused_ordering(826) 00:17:29.032 fused_ordering(827) 00:17:29.032 fused_ordering(828) 00:17:29.032 fused_ordering(829) 00:17:29.032 fused_ordering(830) 00:17:29.032 fused_ordering(831) 00:17:29.032 fused_ordering(832) 00:17:29.032 fused_ordering(833) 00:17:29.032 fused_ordering(834) 00:17:29.032 fused_ordering(835) 00:17:29.032 fused_ordering(836) 00:17:29.032 fused_ordering(837) 00:17:29.032 
fused_ordering(838) 00:17:29.032 fused_ordering(839) 00:17:29.032 fused_ordering(840) 00:17:29.032 fused_ordering(841) 00:17:29.032 fused_ordering(842) 00:17:29.032 fused_ordering(843) 00:17:29.032 fused_ordering(844) 00:17:29.032 fused_ordering(845) 00:17:29.032 fused_ordering(846) 00:17:29.032 fused_ordering(847) 00:17:29.032 fused_ordering(848) 00:17:29.032 fused_ordering(849) 00:17:29.032 fused_ordering(850) 00:17:29.032 fused_ordering(851) 00:17:29.032 fused_ordering(852) 00:17:29.032 fused_ordering(853) 00:17:29.032 fused_ordering(854) 00:17:29.032 fused_ordering(855) 00:17:29.032 fused_ordering(856) 00:17:29.032 fused_ordering(857) 00:17:29.032 fused_ordering(858) 00:17:29.032 fused_ordering(859) 00:17:29.032 fused_ordering(860) 00:17:29.032 fused_ordering(861) 00:17:29.032 fused_ordering(862) 00:17:29.032 fused_ordering(863) 00:17:29.032 fused_ordering(864) 00:17:29.032 fused_ordering(865) 00:17:29.032 fused_ordering(866) 00:17:29.032 fused_ordering(867) 00:17:29.032 fused_ordering(868) 00:17:29.032 fused_ordering(869) 00:17:29.032 fused_ordering(870) 00:17:29.032 fused_ordering(871) 00:17:29.032 fused_ordering(872) 00:17:29.032 fused_ordering(873) 00:17:29.032 fused_ordering(874) 00:17:29.032 fused_ordering(875) 00:17:29.032 fused_ordering(876) 00:17:29.032 fused_ordering(877) 00:17:29.032 fused_ordering(878) 00:17:29.032 fused_ordering(879) 00:17:29.032 fused_ordering(880) 00:17:29.032 fused_ordering(881) 00:17:29.032 fused_ordering(882) 00:17:29.032 fused_ordering(883) 00:17:29.032 fused_ordering(884) 00:17:29.032 fused_ordering(885) 00:17:29.032 fused_ordering(886) 00:17:29.032 fused_ordering(887) 00:17:29.032 fused_ordering(888) 00:17:29.032 fused_ordering(889) 00:17:29.032 fused_ordering(890) 00:17:29.032 fused_ordering(891) 00:17:29.032 fused_ordering(892) 00:17:29.032 fused_ordering(893) 00:17:29.032 fused_ordering(894) 00:17:29.032 fused_ordering(895) 00:17:29.032 fused_ordering(896) 00:17:29.032 fused_ordering(897) 00:17:29.032 fused_ordering(898) 
00:17:29.032 fused_ordering(899) 00:17:29.032 fused_ordering(900) 00:17:29.032 fused_ordering(901) 00:17:29.032 fused_ordering(902) 00:17:29.032 fused_ordering(903) 00:17:29.032 fused_ordering(904) 00:17:29.032 fused_ordering(905) 00:17:29.032 fused_ordering(906) 00:17:29.032 fused_ordering(907) 00:17:29.032 fused_ordering(908) 00:17:29.032 fused_ordering(909) 00:17:29.032 fused_ordering(910) 00:17:29.032 fused_ordering(911) 00:17:29.032 fused_ordering(912) 00:17:29.032 fused_ordering(913) 00:17:29.032 fused_ordering(914) 00:17:29.032 fused_ordering(915) 00:17:29.032 fused_ordering(916) 00:17:29.032 fused_ordering(917) 00:17:29.032 fused_ordering(918) 00:17:29.032 fused_ordering(919) 00:17:29.032 fused_ordering(920) 00:17:29.032 fused_ordering(921) 00:17:29.032 fused_ordering(922) 00:17:29.032 fused_ordering(923) 00:17:29.032 fused_ordering(924) 00:17:29.032 fused_ordering(925) 00:17:29.032 fused_ordering(926) 00:17:29.032 fused_ordering(927) 00:17:29.032 fused_ordering(928) 00:17:29.032 fused_ordering(929) 00:17:29.032 fused_ordering(930) 00:17:29.032 fused_ordering(931) 00:17:29.032 fused_ordering(932) 00:17:29.032 fused_ordering(933) 00:17:29.032 fused_ordering(934) 00:17:29.032 fused_ordering(935) 00:17:29.032 fused_ordering(936) 00:17:29.032 fused_ordering(937) 00:17:29.032 fused_ordering(938) 00:17:29.032 fused_ordering(939) 00:17:29.032 fused_ordering(940) 00:17:29.032 fused_ordering(941) 00:17:29.032 fused_ordering(942) 00:17:29.032 fused_ordering(943) 00:17:29.032 fused_ordering(944) 00:17:29.032 fused_ordering(945) 00:17:29.032 fused_ordering(946) 00:17:29.032 fused_ordering(947) 00:17:29.032 fused_ordering(948) 00:17:29.032 fused_ordering(949) 00:17:29.032 fused_ordering(950) 00:17:29.032 fused_ordering(951) 00:17:29.032 fused_ordering(952) 00:17:29.032 fused_ordering(953) 00:17:29.032 fused_ordering(954) 00:17:29.032 fused_ordering(955) 00:17:29.032 fused_ordering(956) 00:17:29.032 fused_ordering(957) 00:17:29.032 fused_ordering(958) 00:17:29.032 
fused_ordering(959) 00:17:29.032 fused_ordering(960) 00:17:29.032 fused_ordering(961) 00:17:29.032 fused_ordering(962) 00:17:29.032 fused_ordering(963) 00:17:29.032 fused_ordering(964) 00:17:29.032 fused_ordering(965) 00:17:29.032 fused_ordering(966) 00:17:29.032 fused_ordering(967) 00:17:29.032 fused_ordering(968) 00:17:29.032 fused_ordering(969) 00:17:29.032 fused_ordering(970) 00:17:29.032 fused_ordering(971) 00:17:29.032 fused_ordering(972) 00:17:29.032 fused_ordering(973) 00:17:29.032 fused_ordering(974) 00:17:29.032 fused_ordering(975) 00:17:29.032 fused_ordering(976) 00:17:29.032 fused_ordering(977) 00:17:29.032 fused_ordering(978) 00:17:29.032 fused_ordering(979) 00:17:29.032 fused_ordering(980) 00:17:29.032 fused_ordering(981) 00:17:29.032 fused_ordering(982) 00:17:29.032 fused_ordering(983) 00:17:29.032 fused_ordering(984) 00:17:29.032 fused_ordering(985) 00:17:29.032 fused_ordering(986) 00:17:29.032 fused_ordering(987) 00:17:29.032 fused_ordering(988) 00:17:29.032 fused_ordering(989) 00:17:29.032 fused_ordering(990) 00:17:29.032 fused_ordering(991) 00:17:29.032 fused_ordering(992) 00:17:29.032 fused_ordering(993) 00:17:29.032 fused_ordering(994) 00:17:29.032 fused_ordering(995) 00:17:29.032 fused_ordering(996) 00:17:29.032 fused_ordering(997) 00:17:29.032 fused_ordering(998) 00:17:29.032 fused_ordering(999) 00:17:29.032 fused_ordering(1000) 00:17:29.032 fused_ordering(1001) 00:17:29.032 fused_ordering(1002) 00:17:29.032 fused_ordering(1003) 00:17:29.032 fused_ordering(1004) 00:17:29.032 fused_ordering(1005) 00:17:29.032 fused_ordering(1006) 00:17:29.032 fused_ordering(1007) 00:17:29.032 fused_ordering(1008) 00:17:29.032 fused_ordering(1009) 00:17:29.032 fused_ordering(1010) 00:17:29.032 fused_ordering(1011) 00:17:29.032 fused_ordering(1012) 00:17:29.032 fused_ordering(1013) 00:17:29.032 fused_ordering(1014) 00:17:29.032 fused_ordering(1015) 00:17:29.032 fused_ordering(1016) 00:17:29.032 fused_ordering(1017) 00:17:29.032 fused_ordering(1018) 00:17:29.032 
fused_ordering(1019) 00:17:29.032 fused_ordering(1020) 00:17:29.032 fused_ordering(1021) 00:17:29.032 fused_ordering(1022) 00:17:29.032 fused_ordering(1023) 00:17:29.032 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:29.032 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:29.032 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # nvmfcleanup 00:17:29.032 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@99 -- # sync 00:17:29.032 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:17:29.032 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # set +e 00:17:29.032 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # for i in {1..20} 00:17:29.033 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:17:29.033 rmmod nvme_tcp 00:17:29.033 rmmod nvme_fabrics 00:17:29.033 rmmod nvme_keyring 00:17:29.033 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:17:29.033 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # set -e 00:17:29.033 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # return 0 00:17:29.033 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # '[' -n 35256 ']' 00:17:29.033 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@337 -- # killprocess 35256 00:17:29.033 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 35256 ']' 00:17:29.033 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 35256 00:17:29.033 12:01:03 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:17:29.033 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:29.033 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 35256 00:17:29.292 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:29.292 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:29.292 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 35256' 00:17:29.292 killing process with pid 35256 00:17:29.292 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 35256 00:17:29.292 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 35256 00:17:29.292 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:17:29.292 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # nvmf_fini 00:17:29.292 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@264 -- # local dev 00:17:29.292 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@267 -- # remove_target_ns 00:17:29.292 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:29.292 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:29.292 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@268 -- # 
delete_main_bridge 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@130 -- # return 0 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@41 -- # _dev=0 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@41 -- # dev_map=() 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@284 -- # iptr 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@542 -- # iptables-save 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@542 -- # iptables-restore 00:17:31.830 00:17:31.830 real 0m10.864s 00:17:31.830 user 0m5.043s 00:17:31.830 sys 0m5.913s 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:31.830 ************************************ 00:17:31.830 END TEST nvmf_fused_ordering 00:17:31.830 ************************************ 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:31.830 ************************************ 00:17:31.830 START TEST nvmf_ns_masking 00:17:31.830 
************************************ 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:31.830 * Looking for test storage... 00:17:31.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 
gt=0 eq=0 v 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:31.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.830 --rc genhtml_branch_coverage=1 00:17:31.830 --rc genhtml_function_coverage=1 00:17:31.830 --rc genhtml_legend=1 00:17:31.830 --rc geninfo_all_blocks=1 00:17:31.830 --rc geninfo_unexecuted_blocks=1 00:17:31.830 00:17:31.830 ' 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:31.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.830 --rc genhtml_branch_coverage=1 00:17:31.830 --rc genhtml_function_coverage=1 00:17:31.830 --rc genhtml_legend=1 00:17:31.830 --rc geninfo_all_blocks=1 00:17:31.830 --rc geninfo_unexecuted_blocks=1 00:17:31.830 00:17:31.830 ' 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:31.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.830 --rc genhtml_branch_coverage=1 00:17:31.830 --rc genhtml_function_coverage=1 00:17:31.830 --rc genhtml_legend=1 00:17:31.830 --rc geninfo_all_blocks=1 00:17:31.830 --rc geninfo_unexecuted_blocks=1 00:17:31.830 00:17:31.830 ' 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:31.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.830 --rc genhtml_branch_coverage=1 00:17:31.830 --rc genhtml_function_coverage=1 00:17:31.830 --rc genhtml_legend=1 00:17:31.830 --rc geninfo_all_blocks=1 00:17:31.830 --rc geninfo_unexecuted_blocks=1 00:17:31.830 00:17:31.830 ' 00:17:31.830 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:31.830 12:01:05 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:31.831 12:01:05 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@50 -- # : 0 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:17:31.831 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@54 -- # have_pci_nics=0 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=fec49080-c7a9-4f85-910a-2310b7e6cdff 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@14 -- # ns2uuid=a12ef2a3-037e-4b7a-bc86-ea579dbad825 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=872d7222-0257-40df-b581-477183fc23fc 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # prepare_net_devs 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # local -g is_hw=no 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # remove_target_ns 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # 
gather_supported_nvmf_pci_devs 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # xtrace_disable 00:17:31.831 12:01:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:38.405 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:38.405 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@131 -- # pci_devs=() 00:17:38.405 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@131 -- # local -a pci_devs 00:17:38.405 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@132 -- # pci_net_devs=() 00:17:38.405 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:17:38.405 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@133 -- # pci_drivers=() 00:17:38.405 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@133 -- # local -A pci_drivers 00:17:38.405 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@135 -- # net_devs=() 00:17:38.405 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@135 -- # local -ga net_devs 00:17:38.405 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@136 -- # e810=() 00:17:38.405 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@136 -- # local -ga e810 00:17:38.405 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@137 -- # x722=() 00:17:38.405 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@137 -- # local -ga x722 00:17:38.405 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@138 -- # mlx=() 00:17:38.405 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@138 -- # local -ga mlx 00:17:38.405 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@141 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:38.405 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:38.405 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:38.405 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:38.405 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:38.405 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:38.405 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:38.405 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:38.405 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:38.405 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:38.405 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:38.405 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:38.405 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:17:38.405 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:17:38.406 
12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:38.406 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:38.406 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:17:38.406 12:01:11 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # [[ up == up ]] 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:38.406 Found net devices under 0000:86:00.0: cvl_0_0 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:38.406 12:01:11 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # [[ up == up ]] 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:38.406 Found net devices under 0000:86:00.1: cvl_0_1 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # is_hw=yes 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@257 -- # create_target_ns 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@27 -- # local -gA dev_map 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@28 -- # local -g _dev 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@44 -- # ips=() 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@44 -- # local id=0 type=phy 
ip=167772161 transport=tcp ips 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:17:38.406 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 
00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@11 -- # local val=167772161 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:17:38.407 10.0.0.1 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@11 -- # local val=167772162 00:17:38.407 12:01:11 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:17:38.407 10.0.0.2 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@215 -- 
# local -n ns=NVMF_TARGET_NS_CMD 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@38 -- # ping_ips 1 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@107 -- # local dev=initiator0 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 
in_ns=NVMF_TARGET_NS_CMD count=1
00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD
00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1'
00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1
00:17:38.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:38.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.395 ms
00:17:38.407
00:17:38.407 --- 10.0.0.1 ping statistics ---
00:17:38.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:38.407 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms
00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD
00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD
00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:17:38.407 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # get_net_dev target0
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@107 -- # local dev=target0
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n target0 ]]
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]]
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@110 -- # echo cvl_0_1
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # dev=cvl_0_1
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias'
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # ip=10.0.0.2
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]]
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@175 -- # echo 10.0.0.2
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@90 -- # [[ -n '' ]]
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2'
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2
00:17:38.408 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:38.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms
00:17:38.408
00:17:38.408 --- 10.0.0.2 ping statistics ---
00:17:38.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:38.408 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # (( pair++ ))
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # (( pair < pairs ))
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # return 0
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # '[' '' == iso ']'
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # nvmf_legacy_env
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@187 -- # get_initiator_ip_address ''
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@183 -- # get_ip_address initiator0
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # [[ -n '' ]]
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # get_net_dev initiator0
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@107 -- # local dev=initiator0
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]]
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]]
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@110 -- # echo cvl_0_0
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # dev=cvl_0_0
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # ip=10.0.0.1
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]]
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@175 -- # echo 10.0.0.1
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@183 -- # get_ip_address initiator1
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # [[ -n '' ]]
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # get_net_dev initiator1
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@107 -- # local dev=initiator1
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]]
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n '' ]]
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # return 1
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # dev=
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@169 -- # return 0
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # get_net_dev target0
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@107 -- # local dev=target0
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n target0 ]]
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]]
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@110 -- # echo cvl_0_1
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # dev=cvl_0_1
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias'
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # ip=10.0.0.2
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]]
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@175 -- # echo 10.0.0.2
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:17:38.408 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD
00:17:38.409 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # get_net_dev target1
00:17:38.409 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@107 -- # local dev=target1
00:17:38.409 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n target1 ]]
00:17:38.409 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n '' ]]
00:17:38.409 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # return 1
00:17:38.409 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # dev=
00:17:38.409 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@169 -- # return 0
00:17:38.409 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=
00:17:38.409 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:38.409 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]]
00:17:38.409 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]]
00:17:38.409 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:38.409 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # '[' tcp == tcp ']'
00:17:38.409 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # modprobe nvme-tcp
00:17:38.409 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart
00:17:38.409 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt
00:17:38.409 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable
00:17:38.409 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:17:38.409 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # nvmfpid=39281
00:17:38.409 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # waitforlisten 39281
00:17:38.409 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:17:38.409 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 39281 ']'
00:17:38.409 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:38.409 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:38.409 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:38.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:38.409 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:38.409 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:17:38.409 [2024-12-05 12:01:11.925509] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization...
00:17:38.409 [2024-12-05 12:01:11.925553] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:38.409 [2024-12-05 12:01:12.003995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:38.409 [2024-12-05 12:01:12.043856] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:17:38.409 [2024-12-05 12:01:12.043892] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:17:38.409 [2024-12-05 12:01:12.043899] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:17:38.409 [2024-12-05 12:01:12.043905] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:17:38.409 [2024-12-05 12:01:12.043910] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:17:38.409 [2024-12-05 12:01:12.044463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:38.409 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:38.409 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0
00:17:38.409 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt
00:17:38.409 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable
00:17:38.409 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:17:38.409 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:17:38.409 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:17:38.409 [2024-12-05 12:01:12.340113] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:17:38.409 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64
00:17:38.409 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512
00:17:38.409 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:17:38.409 Malloc1
00:17:38.409 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
00:17:38.667 Malloc2
00:17:38.667 12:01:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:17:38.925 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
00:17:39.183 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:39.183 [2024-12-05 12:01:13.368949] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:39.441 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect
00:17:39.441 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 872d7222-0257-40df-b581-477183fc23fc -a 10.0.0.2 -s 4420 -i 4
00:17:39.441 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME
00:17:39.441 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0
00:17:39.441 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:17:39.441 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:17:39.441 12:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2
00:17:41.465 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:17:41.465 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:17:41.465 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:17:41.465 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:17:41.465 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:17:41.465 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0
00:17:41.465 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:17:41.465 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:17:41.722 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:17:41.722 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:17:41.722 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1
00:17:41.722 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:17:41.722 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:17:41.722 [ 0]:0x1
00:17:41.722 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:17:41.722 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:17:41.722 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1c0190c353674cdc8264c091fa9e8bf6
00:17:41.722 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1c0190c353674cdc8264c091fa9e8bf6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:17:41.722 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2
00:17:41.722 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1
00:17:41.722 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:17:41.722 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:17:41.979 [ 0]:0x1
00:17:41.979 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:17:41.979 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:17:41.979 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1c0190c353674cdc8264c091fa9e8bf6
00:17:41.979 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1c0190c353674cdc8264c091fa9e8bf6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:17:41.979 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2
00:17:41.979 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:17:41.979 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:17:41.979 [ 1]:0x2
00:17:41.979 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:17:41.979 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:17:41.979 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4e1e6fa577864f0dbf814ba7de8c2803
00:17:41.979 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4e1e6fa577864f0dbf814ba7de8c2803 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:17:41.979 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect
00:17:41.979 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:17:42.237 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:42.237 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:17:42.495 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
00:17:42.753 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1
00:17:42.753 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 872d7222-0257-40df-b581-477183fc23fc -a 10.0.0.2 -s 4420 -i 4
00:17:42.753 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1
00:17:42.753 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0
00:17:42.753 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:17:42.753 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]]
00:17:42.753 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1
00:17:42.753 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2
00:17:44.657 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:17:44.657 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:17:44.657 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:17:44.657 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:17:44.657 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:17:44.657 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0
00:17:44.657 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:17:44.657 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:17:44.916 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:17:44.916 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:17:44.916 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1
00:17:44.916 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:17:44.916 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:17:44.916 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:17:44.916 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:44.916 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:17:44.916 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:44.916 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:17:44.916 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:17:44.916 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:17:44.916 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:17:44.916 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:17:44.916 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:17:44.916 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:17:44.916 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:17:44.916 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:44.916 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:44.916 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:44.916 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2
00:17:44.916 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:17:44.916 12:01:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:17:44.916 [ 0]:0x2
00:17:44.916 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:17:44.916 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:17:44.916 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4e1e6fa577864f0dbf814ba7de8c2803
00:17:44.916 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4e1e6fa577864f0dbf814ba7de8c2803 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:17:44.916 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:17:45.175 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1
00:17:45.175 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:17:45.175 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:17:45.175 [ 0]:0x1
00:17:45.175 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:17:45.175 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:17:45.434 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1c0190c353674cdc8264c091fa9e8bf6
00:17:45.434 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1c0190c353674cdc8264c091fa9e8bf6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:17:45.434 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2
00:17:45.434 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:17:45.434 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:17:45.434 [ 1]:0x2
00:17:45.434 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:17:45.434 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:17:45.434 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4e1e6fa577864f0dbf814ba7de8c2803
00:17:45.434 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4e1e6fa577864f0dbf814ba7de8c2803 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:17:45.434 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:17:45.693 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1
00:17:45.693 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:17:45.693 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:17:45.693 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:17:45.693 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:45.693 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:17:45.693 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:45.693 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:17:45.693 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:17:45.693 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:17:45.693 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:17:45.693 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:17:45.693 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:17:45.693 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:17:45.693 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:17:45.693 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:45.693 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:45.693 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:45.693 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2
00:17:45.693 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:17:45.693 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:17:45.693 [ 0]:0x2
00:17:45.693 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:17:45.693 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:17:45.693 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4e1e6fa577864f0dbf814ba7de8c2803
00:17:45.693 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4e1e6fa577864f0dbf814ba7de8c2803 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:17:45.693 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect
00:17:45.693 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:17:45.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:45.693 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:17:45.951 12:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2
00:17:45.951 12:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 872d7222-0257-40df-b581-477183fc23fc -a 10.0.0.2 -s 4420 -i 4
00:17:45.951 12:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2
00:17:45.951 12:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0
00:17:45.951 12:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:17:45.951 12:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]]
00:17:45.951 12:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2
00:17:45.951 12:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2
00:17:48.485 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:17:48.485 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:17:48.485 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:17:48.485 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2
00:17:48.485 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:17:48.485 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0
00:17:48.485 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:17:48.485 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:17:48.485 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:17:48.485 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:17:48.485 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1
00:17:48.485 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:17:48.485 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:17:48.485 [ 0]:0x1
00:17:48.485 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:17:48.485 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:17:48.485 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1c0190c353674cdc8264c091fa9e8bf6
00:17:48.485 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking
-- target/ns_masking.sh@45 -- # [[ 1c0190c353674cdc8264c091fa9e8bf6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:48.485 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:48.485 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:48.485 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:48.485 [ 1]:0x2 00:17:48.485 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:48.485 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:48.485 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4e1e6fa577864f0dbf814ba7de8c2803 00:17:48.486 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4e1e6fa577864f0dbf814ba7de8c2803 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:48.486 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:48.744 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:48.744 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:48.744 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:48.744 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:48.744 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:48.744 12:01:22 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:48.744 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:48.744 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:48.744 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:48.744 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:48.744 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:48.744 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:48.744 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:48.744 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:48.744 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:48.744 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:48.744 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:48.744 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:48.744 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:17:48.744 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:48.744 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:48.744 [ 0]:0x2 
00:17:48.744 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:48.744 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:48.744 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4e1e6fa577864f0dbf814ba7de8c2803 00:17:48.744 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4e1e6fa577864f0dbf814ba7de8c2803 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:48.744 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:48.744 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:48.744 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:48.745 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:48.745 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:48.745 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:48.745 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:48.745 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:17:48.745 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:48.745 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:48.745 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:48.745 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:49.004 [2024-12-05 12:01:22.976615] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:49.004 request: 00:17:49.004 { 00:17:49.004 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:49.004 "nsid": 2, 00:17:49.004 "host": "nqn.2016-06.io.spdk:host1", 00:17:49.004 "method": "nvmf_ns_remove_host", 00:17:49.004 "req_id": 1 00:17:49.004 } 00:17:49.004 Got JSON-RPC error response 00:17:49.004 response: 00:17:49.004 { 00:17:49.004 "code": -32602, 00:17:49.004 "message": "Invalid parameters" 00:17:49.004 } 00:17:49.004 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:49.004 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:49.004 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:49.004 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:49.004 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:49.004 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 
00:17:49.004 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:49.004 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:49.004 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:49.004 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:49.004 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:49.004 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:49.004 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:49.004 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:49.004 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:49.004 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:49.004 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:49.004 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:49.004 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:49.004 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:49.004 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:49.004 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:49.004 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:49.004 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:49.004 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:49.004 [ 0]:0x2 00:17:49.004 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:49.004 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:49.004 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4e1e6fa577864f0dbf814ba7de8c2803 00:17:49.004 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4e1e6fa577864f0dbf814ba7de8c2803 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:49.004 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:49.004 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:49.004 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:49.004 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=41288 00:17:49.004 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:49.004 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 41288 /var/tmp/host.sock 00:17:49.004 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:49.004 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@835 -- # '[' -z 41288 ']' 00:17:49.004 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:49.004 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:49.004 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:49.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:49.004 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:49.004 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:49.263 [2024-12-05 12:01:23.218337] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:17:49.263 [2024-12-05 12:01:23.218387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid41288 ] 00:17:49.263 [2024-12-05 12:01:23.294670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.263 [2024-12-05 12:01:23.335142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:49.522 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:49.522 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:49.522 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:49.779 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:49.779 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid fec49080-c7a9-4f85-910a-2310b7e6cdff 00:17:49.779 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@538 -- # tr -d - 00:17:49.779 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g FEC49080C7A94F85910A2310B7E6CDFF -i 00:17:50.037 12:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid a12ef2a3-037e-4b7a-bc86-ea579dbad825 00:17:50.037 12:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@538 -- # tr -d - 00:17:50.037 12:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g A12EF2A3037E4B7ABC86EA579DBAD825 -i 00:17:50.295 12:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:50.553 12:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:50.810 12:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:50.810 12:01:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:51.067 nvme0n1 00:17:51.067 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:51.067 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:51.325 nvme1n2 00:17:51.325 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:51.325 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:17:51.325 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:51.325 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:51.325 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:51.583 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:51.583 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:51.583 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:51.583 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:51.841 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ fec49080-c7a9-4f85-910a-2310b7e6cdff == \f\e\c\4\9\0\8\0\-\c\7\a\9\-\4\f\8\5\-\9\1\0\a\-\2\3\1\0\b\7\e\6\c\d\f\f ]] 00:17:51.841 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:51.841 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:51.841 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:52.099 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ a12ef2a3-037e-4b7a-bc86-ea579dbad825 == \a\1\2\e\f\2\a\3\-\0\3\7\e\-\4\b\7\a\-\b\c\8\6\-\e\a\5\7\9\d\b\a\d\8\2\5 ]] 00:17:52.099 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:52.099 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:52.358 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid fec49080-c7a9-4f85-910a-2310b7e6cdff 00:17:52.358 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@538 -- # tr -d - 00:17:52.358 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g FEC49080C7A94F85910A2310B7E6CDFF 00:17:52.358 12:01:26 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:52.358 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g FEC49080C7A94F85910A2310B7E6CDFF 00:17:52.358 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:52.358 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:52.358 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:52.358 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:52.358 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:52.358 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:52.358 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:52.358 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:52.358 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g FEC49080C7A94F85910A2310B7E6CDFF 00:17:52.616 [2024-12-05 12:01:26.638614] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: invalid 00:17:52.616 [2024-12-05 12:01:26.638643] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:17:52.616 [2024-12-05 12:01:26.638652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.616 request: 00:17:52.616 { 00:17:52.616 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:52.616 "namespace": { 00:17:52.616 "bdev_name": "invalid", 00:17:52.616 "nsid": 1, 00:17:52.616 "nguid": "FEC49080C7A94F85910A2310B7E6CDFF", 00:17:52.616 "no_auto_visible": false, 00:17:52.616 "hide_metadata": false 00:17:52.616 }, 00:17:52.616 "method": "nvmf_subsystem_add_ns", 00:17:52.616 "req_id": 1 00:17:52.616 } 00:17:52.616 Got JSON-RPC error response 00:17:52.616 response: 00:17:52.616 { 00:17:52.616 "code": -32602, 00:17:52.616 "message": "Invalid parameters" 00:17:52.616 } 00:17:52.616 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:52.616 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:52.616 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:52.616 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:52.616 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid fec49080-c7a9-4f85-910a-2310b7e6cdff 00:17:52.616 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@538 -- # tr -d - 00:17:52.616 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g FEC49080C7A94F85910A2310B7E6CDFF -i 00:17:52.875 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:17:54.779 
12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:17:54.779 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:17:54.779 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:55.039 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:17:55.039 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 41288 00:17:55.039 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 41288 ']' 00:17:55.039 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 41288 00:17:55.039 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:17:55.039 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:55.039 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 41288 00:17:55.039 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:55.039 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:55.039 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 41288' 00:17:55.039 killing process with pid 41288 00:17:55.039 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 41288 00:17:55.039 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 41288 00:17:55.298 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:55.556 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:17:55.556 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:17:55.556 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # nvmfcleanup 00:17:55.556 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@99 -- # sync 00:17:55.556 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:17:55.556 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # set +e 00:17:55.556 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # for i in {1..20} 00:17:55.556 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:17:55.556 rmmod nvme_tcp 00:17:55.556 rmmod nvme_fabrics 00:17:55.556 rmmod nvme_keyring 00:17:55.556 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:17:55.556 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # set -e 00:17:55.556 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # return 0 00:17:55.556 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # '[' -n 39281 ']' 00:17:55.556 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@337 -- # killprocess 39281 00:17:55.556 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 39281 ']' 00:17:55.556 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 39281 00:17:55.556 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 
00:17:55.556 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:55.556 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 39281 00:17:55.816 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:55.816 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:55.816 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 39281' 00:17:55.816 killing process with pid 39281 00:17:55.816 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 39281 00:17:55.816 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 39281 00:17:55.816 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:17:55.816 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # nvmf_fini 00:17:55.816 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@264 -- # local dev 00:17:55.816 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@267 -- # remove_target_ns 00:17:55.816 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:55.816 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:55.816 12:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@268 -- # delete_main_bridge 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:17:58.353 
12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@130 -- # return 0 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/setup.sh@283 -- # reset_setup_interfaces 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@41 -- # _dev=0 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@41 -- # dev_map=() 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@284 -- # iptr 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@542 -- # iptables-save 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@542 -- # iptables-restore 00:17:58.353 00:17:58.353 real 0m26.501s 00:17:58.353 user 0m31.509s 00:17:58.353 sys 0m7.191s 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:58.353 ************************************ 00:17:58.353 END TEST nvmf_ns_masking 00:17:58.353 ************************************ 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:58.353 ************************************ 00:17:58.353 START TEST nvmf_nvme_cli 00:17:58.353 ************************************ 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:58.353 * Looking for test storage... 00:17:58.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 
00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:17:58.353 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:17:58.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.354 --rc genhtml_branch_coverage=1 00:17:58.354 --rc genhtml_function_coverage=1 00:17:58.354 --rc genhtml_legend=1 00:17:58.354 --rc geninfo_all_blocks=1 00:17:58.354 --rc geninfo_unexecuted_blocks=1 00:17:58.354 00:17:58.354 ' 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:58.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.354 --rc genhtml_branch_coverage=1 00:17:58.354 --rc genhtml_function_coverage=1 00:17:58.354 --rc genhtml_legend=1 00:17:58.354 --rc geninfo_all_blocks=1 00:17:58.354 --rc geninfo_unexecuted_blocks=1 00:17:58.354 00:17:58.354 ' 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:58.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.354 --rc genhtml_branch_coverage=1 00:17:58.354 --rc genhtml_function_coverage=1 00:17:58.354 --rc genhtml_legend=1 00:17:58.354 --rc geninfo_all_blocks=1 00:17:58.354 --rc geninfo_unexecuted_blocks=1 00:17:58.354 00:17:58.354 ' 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:58.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.354 --rc genhtml_branch_coverage=1 00:17:58.354 --rc genhtml_function_coverage=1 00:17:58.354 --rc genhtml_legend=1 00:17:58.354 --rc geninfo_all_blocks=1 00:17:58.354 --rc geninfo_unexecuted_blocks=1 00:17:58.354 00:17:58.354 ' 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:58.354 
12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:58.354 12:01:32 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@50 -- # : 0 
00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:17:58.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@54 -- # have_pci_nics=0 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # prepare_net_devs 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # local -g 
is_hw=no 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # remove_target_ns 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # xtrace_disable 00:17:58.354 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:04.923 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:04.923 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@131 -- # pci_devs=() 00:18:04.923 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@131 -- # local -a pci_devs 00:18:04.923 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@132 -- # pci_net_devs=() 00:18:04.923 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:18:04.923 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@133 -- # pci_drivers=() 00:18:04.923 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@133 -- # local -A pci_drivers 00:18:04.923 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@135 -- # net_devs=() 00:18:04.923 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@135 -- # local -ga net_devs 00:18:04.923 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli 
-- nvmf/common.sh@136 -- # e810=() 00:18:04.923 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@136 -- # local -ga e810 00:18:04.923 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@137 -- # x722=() 00:18:04.923 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@137 -- # local -ga x722 00:18:04.923 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@138 -- # mlx=() 00:18:04.923 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@138 -- # local -ga mlx 00:18:04.923 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@159 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:04.924 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:04.924 Found 0000:86:00.1 (0x8086 - 
0x159b) 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # [[ up == up ]] 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:04.924 Found net devices under 0000:86:00.0: cvl_0_0 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # [[ up == up ]] 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:04.924 Found net devices under 0000:86:00.1: cvl_0_1 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # is_hw=yes 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:18:04.924 12:01:37 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@257 -- # create_target_ns 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@27 -- # local -gA dev_map 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@28 -- # local -g _dev 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:18:04.924 12:01:37 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@44 -- # ips=() 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@70 -- # 
add_to_ns cvl_0_1 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:18:04.924 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:18:04.924 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:18:04.924 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:18:04.924 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:18:04.924 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:18:04.924 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@11 -- # local val=167772161 00:18:04.924 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:18:04.924 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:18:04.924 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:18:04.924 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:18:04.924 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:18:04.925 10.0.0.1 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:18:04.925 
12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@11 -- # local val=167772162 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:18:04.925 10.0.0.2 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:18:04.925 12:01:38 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/setup.sh@38 -- # ping_ips 1 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@107 -- # local dev=initiator0 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:18:04.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:04.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.379 ms 00:18:04.925 00:18:04.925 --- 10.0.0.1 ping statistics --- 00:18:04.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:04.925 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # get_net_dev target0 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@107 -- # local dev=target0 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:18:04.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:04.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:18:04.925 00:18:04.925 --- 10.0.0.2 ping statistics --- 00:18:04.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:04.925 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@98 -- # (( pair++ )) 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # return 0 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@107 -- # local dev=initiator0 00:18:04.925 12:01:38 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:18:04.925 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@107 -- # local dev=initiator1 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 
00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # return 1 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # dev= 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@169 -- # return 0 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # get_net_dev target0 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@107 -- # local dev=target0 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # eval 'ip netns exec 
nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # get_net_dev target1 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@107 -- # local dev=target1 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # return 1 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # dev= 
00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@169 -- # return 0 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # nvmfpid=46028 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # waitforlisten 46028 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 46028 ']' 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:04.926 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:04.926 [2024-12-05 12:01:38.460952] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:18:04.926 [2024-12-05 12:01:38.460996] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:04.926 [2024-12-05 12:01:38.540393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:04.926 [2024-12-05 12:01:38.583030] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:04.926 [2024-12-05 12:01:38.583066] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:04.926 [2024-12-05 12:01:38.583073] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:04.926 [2024-12-05 12:01:38.583079] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:04.926 [2024-12-05 12:01:38.583084] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:04.926 [2024-12-05 12:01:38.584542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:04.926 [2024-12-05 12:01:38.584663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:04.926 [2024-12-05 12:01:38.584769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.926 [2024-12-05 12:01:38.584771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:05.186 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:05.186 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:18:05.186 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:18:05.186 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:05.186 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:05.186 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:05.186 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:05.186 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.186 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:05.186 [2024-12-05 12:01:39.351715] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:05.186 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.186 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:05.186 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:05.186 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:05.444 Malloc0 00:18:05.444 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.444 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:05.444 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.444 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:05.444 Malloc1 00:18:05.444 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.444 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:05.445 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.445 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:05.445 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.445 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:05.445 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.445 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:05.445 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.445 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:05.445 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.445 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:05.445 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.445 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:05.445 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.445 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:05.445 [2024-12-05 12:01:39.449202] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:05.445 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.445 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:05.445 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.445 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:05.445 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.445 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:18:05.445 00:18:05.445 Discovery Log Number of Records 2, Generation counter 2 00:18:05.445 =====Discovery Log Entry 0====== 00:18:05.445 trtype: tcp 00:18:05.445 adrfam: ipv4 00:18:05.445 subtype: current discovery subsystem 00:18:05.445 treq: not required 00:18:05.445 portid: 0 00:18:05.445 trsvcid: 4420 
00:18:05.445 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:05.445 traddr: 10.0.0.2 00:18:05.445 eflags: explicit discovery connections, duplicate discovery information 00:18:05.445 sectype: none 00:18:05.445 =====Discovery Log Entry 1====== 00:18:05.445 trtype: tcp 00:18:05.445 adrfam: ipv4 00:18:05.445 subtype: nvme subsystem 00:18:05.445 treq: not required 00:18:05.445 portid: 0 00:18:05.445 trsvcid: 4420 00:18:05.445 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:05.445 traddr: 10.0.0.2 00:18:05.445 eflags: none 00:18:05.445 sectype: none 00:18:05.445 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:18:05.445 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:18:05.445 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@362 -- # local dev _ 00:18:05.445 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:18:05.445 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # nvme list 00:18:05.445 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ Node == /dev/nvme* ]] 00:18:05.445 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:18:05.445 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ --------------------- == /dev/nvme* ]] 00:18:05.445 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:18:05.445 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:18:05.445 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:06.823 12:01:40 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:06.823 12:01:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:18:06.823 12:01:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:06.823 12:01:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:18:06.823 12:01:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:18:06.823 12:01:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:18:08.724 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:08.724 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:08.724 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:08.724 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:18:08.724 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:08.724 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:18:08.724 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:08.724 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@362 -- # local dev _ 00:18:08.724 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:18:08.724 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # nvme list 00:18:08.724 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ Node == /dev/nvme* ]] 00:18:08.724 
12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:18:08.724 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ --------------------- == /dev/nvme* ]] 00:18:08.724 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:18:08.724 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:08.724 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # echo /dev/nvme0n1 00:18:08.724 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:18:08.724 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:08.724 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # echo /dev/nvme0n2 00:18:08.724 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:18:08.724 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:18:08.724 /dev/nvme0n2 ]] 00:18:08.724 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:08.724 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:08.724 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@362 -- # local dev _ 00:18:08.724 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:18:08.724 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # nvme list 00:18:08.724 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ Node == /dev/nvme* ]] 00:18:08.724 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:18:08.724 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ 
--------------------- == /dev/nvme* ]] 00:18:08.724 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:18:08.724 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:08.724 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # echo /dev/nvme0n1 00:18:08.724 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:18:08.724 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:08.724 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # echo /dev/nvme0n2 00:18:08.724 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:18:08.724 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:08.724 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:08.983 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:08.983 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:08.983 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:18:08.983 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:08.983 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:08.983 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:08.983 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:08.983 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:18:08.983 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:08.983 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:08.983 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.983 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:08.983 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.983 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:08.983 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:08.983 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # nvmfcleanup 00:18:08.983 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@99 -- # sync 00:18:08.983 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:18:08.983 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@102 -- # set +e 00:18:08.983 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@103 -- # for i in {1..20} 00:18:08.983 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:18:08.983 rmmod nvme_tcp 00:18:08.983 rmmod nvme_fabrics 00:18:08.983 rmmod nvme_keyring 00:18:08.983 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:18:08.983 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # set -e 00:18:08.983 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # return 0 00:18:08.983 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # '[' -n 46028 ']' 
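The xtrace above walks a device listing with `read -r dev _`, counts the entries matching `/dev/nvme*` (arriving at `nvme_num=2`), disconnects the controller, and then polls `lsblk -o NAME,SERIAL` until the serial disappears. A minimal, self-contained sketch of the counting step — `count_nvme_devs` is a hypothetical wrapper name, but the body mirrors the traced loop from nvmf/common.sh:

```shell
# Sketch of the device-counting loop traced above; count_nvme_devs is a
# hypothetical name, but the body mirrors the traced
# `read -r dev _` / `[[ $dev == /dev/nvme* ]]` sequence.
count_nvme_devs() {
    local count=0 dev rest
    while read -r dev rest; do
        [[ $dev == /dev/nvme* ]] && ((++count))
    done
    echo "$count"
}

# Stand-in for an "nvme list"-style listing: device name, then serial.
listing=$'/dev/nvme0n1 SPDKISFASTANDAWESOME\n/dev/nvme0n2 SPDKISFASTANDAWESOME\n/dev/sda OTHERDISK'
nvme_num=$(count_nvme_devs <<< "$listing")
echo "nvme_num=$nvme_num"
```

The log then asserts `(( nvme_num <= nvme_num_before_connection ))` with this count before tearing down the subsystem.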
00:18:08.983 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@337 -- # killprocess 46028 00:18:08.983 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 46028 ']' 00:18:08.983 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 46028 00:18:08.983 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:18:08.983 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:08.983 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 46028 00:18:08.983 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:08.983 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:08.983 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 46028' 00:18:08.983 killing process with pid 46028 00:18:08.983 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 46028 00:18:08.983 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 46028 00:18:09.242 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:18:09.242 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # nvmf_fini 00:18:09.242 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@264 -- # local dev 00:18:09.242 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@267 -- # remove_target_ns 00:18:09.242 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:18:09.242 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 
-- # eval '_remove_target_ns 15> /dev/null' 00:18:09.242 12:01:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_target_ns 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@268 -- # delete_main_bridge 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@130 -- # return 0 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:18:11.777 12:01:45 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@41 -- # _dev=0 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@41 -- # dev_map=() 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@284 -- # iptr 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@542 -- # iptables-save 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@542 -- # iptables-restore 00:18:11.777 00:18:11.777 real 0m13.274s 00:18:11.777 user 0m20.794s 00:18:11.777 sys 0m5.175s 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:11.777 ************************************ 00:18:11.777 END TEST nvmf_nvme_cli 00:18:11.777 ************************************ 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:11.777 ************************************ 00:18:11.777 START TEST nvmf_vfio_user 00:18:11.777 ************************************ 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:11.777 * Looking for test storage... 00:18:11.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user 
-- scripts/common.sh@340 -- # ver1_l=2 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] 
)) 00:18:11.777 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:11.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.778 --rc genhtml_branch_coverage=1 00:18:11.778 --rc genhtml_function_coverage=1 00:18:11.778 --rc genhtml_legend=1 00:18:11.778 --rc geninfo_all_blocks=1 00:18:11.778 --rc geninfo_unexecuted_blocks=1 00:18:11.778 00:18:11.778 ' 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:11.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.778 --rc genhtml_branch_coverage=1 00:18:11.778 --rc genhtml_function_coverage=1 00:18:11.778 --rc genhtml_legend=1 00:18:11.778 --rc geninfo_all_blocks=1 00:18:11.778 --rc geninfo_unexecuted_blocks=1 00:18:11.778 00:18:11.778 ' 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:11.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.778 --rc genhtml_branch_coverage=1 00:18:11.778 --rc genhtml_function_coverage=1 00:18:11.778 --rc genhtml_legend=1 00:18:11.778 --rc geninfo_all_blocks=1 00:18:11.778 --rc geninfo_unexecuted_blocks=1 00:18:11.778 00:18:11.778 ' 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:11.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.778 --rc genhtml_branch_coverage=1 00:18:11.778 --rc genhtml_function_coverage=1 00:18:11.778 --rc genhtml_legend=1 00:18:11.778 --rc geninfo_all_blocks=1 00:18:11.778 --rc geninfo_unexecuted_blocks=1 00:18:11.778 00:18:11.778 ' 
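The `cmp_versions` trace above (each version split into components on `IFS=.-:`, then compared index by index, with `lt 1.15 2` as the call) can be reconstructed as one standalone function. `lt_version` is a hypothetical name for what scripts/common.sh exposes as `lt` over `cmp_versions`; missing components default to 0, as the traced `ver1_l`/`ver2_l` length handling suggests:

```shell
# Reconstruction of the traced lcov version check ("lt 1.15 2").
# lt_version is a hypothetical standalone name; scripts/common.sh
# implements this via cmp_versions with an op argument.
lt_version() {
    local IFS=.-: v
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && { echo true; return 0; }
        (( a > b )) && { echo false; return 0; }
    done
    echo false   # equal versions are not "less than"
}

lt_version 1.15 2   # the traced check: is the installed lcov pre-2.x?
```

In the log, the result only selects lcov option sets — the pre-2.0 branch exports `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` into `LCOV_OPTS`.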
00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@50 -- # : 0 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:18:11.778 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer 
expression expected 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@54 -- # have_pci_nics=0 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=47322 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 47322' 00:18:11.778 Process pid: 47322 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 
'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 47322 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:18:11.778 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 47322 ']' 00:18:11.779 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.779 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:11.779 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.779 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:11.779 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:11.779 [2024-12-05 12:01:45.733035] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:18:11.779 [2024-12-05 12:01:45.733081] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:11.779 [2024-12-05 12:01:45.805942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:11.779 [2024-12-05 12:01:45.847990] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
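The startup handshake traced above — launch `nvmf_tgt`, record `nvmfpid`, arm a `killprocess` trap, then `waitforlisten` until the target answers on /var/tmp/spdk.sock — can be sketched as below. This is a simplified stand-in, not the autotest_common.sh implementation: `sleep` plays the role of nvmf_tgt and liveness is checked with `kill -0` rather than probing the RPC socket:

```shell
# Sketch of the traced start-up handshake. "sleep 60" stands in for
# build/bin/nvmf_tgt, and liveness is checked with kill -0 instead of
# probing the /var/tmp/spdk.sock UNIX domain socket.
waitforlisten() {
    local pid=$1 retries=100
    while (( retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null && return 0
        sleep 0.1
    done
    return 1
}

sleep 60 &
nvmfpid=$!
trap 'kill "$nvmfpid" 2>/dev/null; exit 1' SIGINT SIGTERM EXIT
waitforlisten "$nvmfpid" && echo "pid $nvmfpid is listening"

# Mirror the log's teardown: disarm the trap, kill, and reap the target.
trap - SIGINT SIGTERM EXIT
kill "$nvmfpid" 2>/dev/null
wait "$nvmfpid" 2>/dev/null || true
```

The trap guarantees the target dies even if a test step aborts early; the log disarms it (`trap - SIGINT SIGTERM EXIT`) only once cleanup has run.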
00:18:11.779 [2024-12-05 12:01:45.848027] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:11.779 [2024-12-05 12:01:45.848034] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:11.779 [2024-12-05 12:01:45.848041] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:11.779 [2024-12-05 12:01:45.848046] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:11.779 [2024-12-05 12:01:45.849631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.779 [2024-12-05 12:01:45.849736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:11.779 [2024-12-05 12:01:45.849846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.779 [2024-12-05 12:01:45.849846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:11.779 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:11.779 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:18:11.779 12:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:13.153 12:01:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:18:13.153 12:01:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:13.153 12:01:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:13.153 12:01:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:13.153 12:01:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p 
/var/run/vfio-user/domain/vfio-user1/1 00:18:13.153 12:01:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:13.411 Malloc1 00:18:13.411 12:01:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:13.411 12:01:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:13.669 12:01:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:13.927 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:13.927 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:13.927 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:14.185 Malloc2 00:18:14.185 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:14.443 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:14.443 12:01:48 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:14.702 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:18:14.702 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:18:14.702 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:14.702 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:14.702 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:18:14.702 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:14.702 [2024-12-05 12:01:48.816148] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:18:14.702 [2024-12-05 12:01:48.816184] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid47804 ] 00:18:14.702 [2024-12-05 12:01:48.853366] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:18:14.702 [2024-12-05 12:01:48.865707] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:14.702 [2024-12-05 12:01:48.865731] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fc915191000 00:18:14.702 [2024-12-05 12:01:48.866704] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:14.702 [2024-12-05 12:01:48.867702] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:14.702 [2024-12-05 12:01:48.868713] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:14.702 [2024-12-05 12:01:48.869717] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:14.702 [2024-12-05 12:01:48.870726] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:14.702 [2024-12-05 12:01:48.871733] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:14.702 [2024-12-05 12:01:48.872738] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:14.702 
[2024-12-05 12:01:48.873744] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:14.702 [2024-12-05 12:01:48.874756] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:14.702 [2024-12-05 12:01:48.874765] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fc915186000 00:18:14.702 [2024-12-05 12:01:48.875682] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:14.702 [2024-12-05 12:01:48.889632] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:18:14.702 [2024-12-05 12:01:48.889660] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:18:14.702 [2024-12-05 12:01:48.891858] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:14.702 [2024-12-05 12:01:48.891894] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:14.702 [2024-12-05 12:01:48.891965] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:18:14.702 [2024-12-05 12:01:48.891987] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:18:14.702 [2024-12-05 12:01:48.891992] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:18:14.702 [2024-12-05 12:01:48.892861] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:18:14.702 [2024-12-05 12:01:48.892872] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:18:14.702 [2024-12-05 12:01:48.892879] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:18:14.702 [2024-12-05 12:01:48.893867] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:14.702 [2024-12-05 12:01:48.893875] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:18:14.702 [2024-12-05 12:01:48.893882] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:14.702 [2024-12-05 12:01:48.894872] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:18:14.702 [2024-12-05 12:01:48.894880] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:14.702 [2024-12-05 12:01:48.895885] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:18:14.702 [2024-12-05 12:01:48.895898] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:18:14.702 [2024-12-05 12:01:48.895903] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:14.702 [2024-12-05 12:01:48.895910] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:14.702 [2024-12-05 12:01:48.896018] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:18:14.702 [2024-12-05 12:01:48.896022] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:14.702 [2024-12-05 12:01:48.896027] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:18:14.703 [2024-12-05 12:01:48.896894] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:18:14.703 [2024-12-05 12:01:48.897902] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:18:14.703 [2024-12-05 12:01:48.898909] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:14.962 [2024-12-05 12:01:48.899900] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:14.962 [2024-12-05 12:01:48.899965] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:14.962 [2024-12-05 12:01:48.900915] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:18:14.963 [2024-12-05 12:01:48.900925] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:14.963 [2024-12-05 12:01:48.900932] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:14.963 [2024-12-05 12:01:48.900950] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:18:14.963 [2024-12-05 12:01:48.900958] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:14.963 [2024-12-05 12:01:48.900978] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:14.963 [2024-12-05 12:01:48.900983] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:14.963 [2024-12-05 12:01:48.900989] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:14.963 [2024-12-05 12:01:48.901003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:14.963 [2024-12-05 12:01:48.901045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:14.963 [2024-12-05 12:01:48.901055] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:18:14.963 [2024-12-05 12:01:48.901060] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:18:14.963 [2024-12-05 12:01:48.901064] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:18:14.963 [2024-12-05 12:01:48.901068] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:14.963 [2024-12-05 12:01:48.901073] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:18:14.963 [2024-12-05 12:01:48.901077] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:18:14.963 [2024-12-05 12:01:48.901081] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:18:14.963 [2024-12-05 12:01:48.901088] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:14.963 [2024-12-05 12:01:48.901097] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:14.963 [2024-12-05 12:01:48.901105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:14.963 [2024-12-05 12:01:48.901116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.963 [2024-12-05 12:01:48.901123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.963 [2024-12-05 12:01:48.901131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.963 [2024-12-05 12:01:48.901138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.963 [2024-12-05 12:01:48.901142] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:14.963 [2024-12-05 12:01:48.901150] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:14.963 [2024-12-05 12:01:48.901158] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:14.963 [2024-12-05 12:01:48.901167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:14.963 [2024-12-05 12:01:48.901172] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:18:14.963 [2024-12-05 12:01:48.901177] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:14.963 [2024-12-05 12:01:48.901184] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:18:14.963 [2024-12-05 12:01:48.901191] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:18:14.963 [2024-12-05 12:01:48.901200] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:14.963 [2024-12-05 12:01:48.901214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:14.963 [2024-12-05 12:01:48.901264] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:18:14.963 [2024-12-05 12:01:48.901271] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:14.963 
[2024-12-05 12:01:48.901278] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:14.963 [2024-12-05 12:01:48.901282] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:14.963 [2024-12-05 12:01:48.901285] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:14.963 [2024-12-05 12:01:48.901291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:14.963 [2024-12-05 12:01:48.901304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:14.963 [2024-12-05 12:01:48.901315] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:18:14.963 [2024-12-05 12:01:48.901323] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:18:14.963 [2024-12-05 12:01:48.901330] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:18:14.963 [2024-12-05 12:01:48.901336] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:14.963 [2024-12-05 12:01:48.901340] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:14.963 [2024-12-05 12:01:48.901343] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:14.963 [2024-12-05 12:01:48.901348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:14.963 [2024-12-05 12:01:48.901372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:14.963 [2024-12-05 12:01:48.901383] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:14.963 [2024-12-05 12:01:48.901390] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:14.963 [2024-12-05 12:01:48.901396] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:14.963 [2024-12-05 12:01:48.901401] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:14.963 [2024-12-05 12:01:48.901404] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:14.963 [2024-12-05 12:01:48.901410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:14.963 [2024-12-05 12:01:48.901421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:14.963 [2024-12-05 12:01:48.901432] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:14.963 [2024-12-05 12:01:48.901438] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:14.963 [2024-12-05 12:01:48.901445] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:18:14.963 [2024-12-05 12:01:48.901452] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:18:14.963 [2024-12-05 12:01:48.901458] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:14.963 [2024-12-05 12:01:48.901463] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:18:14.963 [2024-12-05 12:01:48.901468] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:14.963 [2024-12-05 12:01:48.901472] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:18:14.963 [2024-12-05 12:01:48.901477] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:18:14.963 [2024-12-05 12:01:48.901493] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:14.963 [2024-12-05 12:01:48.901503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:14.963 [2024-12-05 12:01:48.901513] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:14.963 [2024-12-05 12:01:48.901528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:14.963 [2024-12-05 12:01:48.901538] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:14.963 [2024-12-05 12:01:48.901551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:14.963 [2024-12-05 
12:01:48.901562] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:14.963 [2024-12-05 12:01:48.901571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:14.963 [2024-12-05 12:01:48.901583] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:14.963 [2024-12-05 12:01:48.901589] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:14.964 [2024-12-05 12:01:48.901592] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:14.964 [2024-12-05 12:01:48.901595] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:14.964 [2024-12-05 12:01:48.901598] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:14.964 [2024-12-05 12:01:48.901604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:14.964 [2024-12-05 12:01:48.901611] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:14.964 [2024-12-05 12:01:48.901616] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:14.964 [2024-12-05 12:01:48.901623] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:14.964 [2024-12-05 12:01:48.901631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:14.964 [2024-12-05 12:01:48.901639] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:14.964 [2024-12-05 12:01:48.901646] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:14.964 [2024-12-05 12:01:48.901653] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:14.964 [2024-12-05 12:01:48.901663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:14.964 [2024-12-05 12:01:48.901673] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:14.964 [2024-12-05 12:01:48.901677] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:14.964 [2024-12-05 12:01:48.901680] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:14.964 [2024-12-05 12:01:48.901686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:14.964 [2024-12-05 12:01:48.901693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:14.964 [2024-12-05 12:01:48.901727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:14.964 [2024-12-05 12:01:48.901738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:14.964 [2024-12-05 12:01:48.901745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:14.964 ===================================================== 00:18:14.964 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:14.964 ===================================================== 00:18:14.964 Controller Capabilities/Features 00:18:14.964 
================================ 00:18:14.964 Vendor ID: 4e58 00:18:14.964 Subsystem Vendor ID: 4e58 00:18:14.964 Serial Number: SPDK1 00:18:14.964 Model Number: SPDK bdev Controller 00:18:14.964 Firmware Version: 25.01 00:18:14.964 Recommended Arb Burst: 6 00:18:14.964 IEEE OUI Identifier: 8d 6b 50 00:18:14.964 Multi-path I/O 00:18:14.964 May have multiple subsystem ports: Yes 00:18:14.964 May have multiple controllers: Yes 00:18:14.964 Associated with SR-IOV VF: No 00:18:14.964 Max Data Transfer Size: 131072 00:18:14.964 Max Number of Namespaces: 32 00:18:14.964 Max Number of I/O Queues: 127 00:18:14.964 NVMe Specification Version (VS): 1.3 00:18:14.964 NVMe Specification Version (Identify): 1.3 00:18:14.964 Maximum Queue Entries: 256 00:18:14.964 Contiguous Queues Required: Yes 00:18:14.964 Arbitration Mechanisms Supported 00:18:14.964 Weighted Round Robin: Not Supported 00:18:14.964 Vendor Specific: Not Supported 00:18:14.964 Reset Timeout: 15000 ms 00:18:14.964 Doorbell Stride: 4 bytes 00:18:14.964 NVM Subsystem Reset: Not Supported 00:18:14.964 Command Sets Supported 00:18:14.964 NVM Command Set: Supported 00:18:14.964 Boot Partition: Not Supported 00:18:14.964 Memory Page Size Minimum: 4096 bytes 00:18:14.964 Memory Page Size Maximum: 4096 bytes 00:18:14.964 Persistent Memory Region: Not Supported 00:18:14.964 Optional Asynchronous Events Supported 00:18:14.964 Namespace Attribute Notices: Supported 00:18:14.964 Firmware Activation Notices: Not Supported 00:18:14.964 ANA Change Notices: Not Supported 00:18:14.964 PLE Aggregate Log Change Notices: Not Supported 00:18:14.964 LBA Status Info Alert Notices: Not Supported 00:18:14.964 EGE Aggregate Log Change Notices: Not Supported 00:18:14.964 Normal NVM Subsystem Shutdown event: Not Supported 00:18:14.964 Zone Descriptor Change Notices: Not Supported 00:18:14.964 Discovery Log Change Notices: Not Supported 00:18:14.964 Controller Attributes 00:18:14.964 128-bit Host Identifier: Supported 00:18:14.964 
Non-Operational Permissive Mode: Not Supported 00:18:14.964 NVM Sets: Not Supported 00:18:14.964 Read Recovery Levels: Not Supported 00:18:14.964 Endurance Groups: Not Supported 00:18:14.964 Predictable Latency Mode: Not Supported 00:18:14.964 Traffic Based Keep ALive: Not Supported 00:18:14.964 Namespace Granularity: Not Supported 00:18:14.964 SQ Associations: Not Supported 00:18:14.964 UUID List: Not Supported 00:18:14.964 Multi-Domain Subsystem: Not Supported 00:18:14.964 Fixed Capacity Management: Not Supported 00:18:14.964 Variable Capacity Management: Not Supported 00:18:14.964 Delete Endurance Group: Not Supported 00:18:14.964 Delete NVM Set: Not Supported 00:18:14.964 Extended LBA Formats Supported: Not Supported 00:18:14.964 Flexible Data Placement Supported: Not Supported 00:18:14.964 00:18:14.964 Controller Memory Buffer Support 00:18:14.964 ================================ 00:18:14.964 Supported: No 00:18:14.964 00:18:14.964 Persistent Memory Region Support 00:18:14.964 ================================ 00:18:14.964 Supported: No 00:18:14.964 00:18:14.964 Admin Command Set Attributes 00:18:14.964 ============================ 00:18:14.964 Security Send/Receive: Not Supported 00:18:14.964 Format NVM: Not Supported 00:18:14.964 Firmware Activate/Download: Not Supported 00:18:14.964 Namespace Management: Not Supported 00:18:14.964 Device Self-Test: Not Supported 00:18:14.964 Directives: Not Supported 00:18:14.964 NVMe-MI: Not Supported 00:18:14.964 Virtualization Management: Not Supported 00:18:14.964 Doorbell Buffer Config: Not Supported 00:18:14.964 Get LBA Status Capability: Not Supported 00:18:14.964 Command & Feature Lockdown Capability: Not Supported 00:18:14.964 Abort Command Limit: 4 00:18:14.964 Async Event Request Limit: 4 00:18:14.964 Number of Firmware Slots: N/A 00:18:14.964 Firmware Slot 1 Read-Only: N/A 00:18:14.964 Firmware Activation Without Reset: N/A 00:18:14.964 Multiple Update Detection Support: N/A 00:18:14.964 Firmware Update 
Granularity: No Information Provided 00:18:14.964 Per-Namespace SMART Log: No 00:18:14.964 Asymmetric Namespace Access Log Page: Not Supported 00:18:14.964 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:18:14.964 Command Effects Log Page: Supported 00:18:14.964 Get Log Page Extended Data: Supported 00:18:14.964 Telemetry Log Pages: Not Supported 00:18:14.964 Persistent Event Log Pages: Not Supported 00:18:14.964 Supported Log Pages Log Page: May Support 00:18:14.964 Commands Supported & Effects Log Page: Not Supported 00:18:14.964 Feature Identifiers & Effects Log Page:May Support 00:18:14.964 NVMe-MI Commands & Effects Log Page: May Support 00:18:14.964 Data Area 4 for Telemetry Log: Not Supported 00:18:14.964 Error Log Page Entries Supported: 128 00:18:14.964 Keep Alive: Supported 00:18:14.964 Keep Alive Granularity: 10000 ms 00:18:14.964 00:18:14.964 NVM Command Set Attributes 00:18:14.964 ========================== 00:18:14.964 Submission Queue Entry Size 00:18:14.964 Max: 64 00:18:14.964 Min: 64 00:18:14.964 Completion Queue Entry Size 00:18:14.964 Max: 16 00:18:14.964 Min: 16 00:18:14.964 Number of Namespaces: 32 00:18:14.964 Compare Command: Supported 00:18:14.964 Write Uncorrectable Command: Not Supported 00:18:14.964 Dataset Management Command: Supported 00:18:14.964 Write Zeroes Command: Supported 00:18:14.964 Set Features Save Field: Not Supported 00:18:14.964 Reservations: Not Supported 00:18:14.964 Timestamp: Not Supported 00:18:14.964 Copy: Supported 00:18:14.964 Volatile Write Cache: Present 00:18:14.964 Atomic Write Unit (Normal): 1 00:18:14.965 Atomic Write Unit (PFail): 1 00:18:14.965 Atomic Compare & Write Unit: 1 00:18:14.965 Fused Compare & Write: Supported 00:18:14.965 Scatter-Gather List 00:18:14.965 SGL Command Set: Supported (Dword aligned) 00:18:14.965 SGL Keyed: Not Supported 00:18:14.965 SGL Bit Bucket Descriptor: Not Supported 00:18:14.965 SGL Metadata Pointer: Not Supported 00:18:14.965 Oversized SGL: Not Supported 00:18:14.965 SGL 
Metadata Address: Not Supported 00:18:14.965 SGL Offset: Not Supported 00:18:14.965 Transport SGL Data Block: Not Supported 00:18:14.965 Replay Protected Memory Block: Not Supported 00:18:14.965 00:18:14.965 Firmware Slot Information 00:18:14.965 ========================= 00:18:14.965 Active slot: 1 00:18:14.965 Slot 1 Firmware Revision: 25.01 00:18:14.965 00:18:14.965 00:18:14.965 Commands Supported and Effects 00:18:14.965 ============================== 00:18:14.965 Admin Commands 00:18:14.965 -------------- 00:18:14.965 Get Log Page (02h): Supported 00:18:14.965 Identify (06h): Supported 00:18:14.965 Abort (08h): Supported 00:18:14.965 Set Features (09h): Supported 00:18:14.965 Get Features (0Ah): Supported 00:18:14.965 Asynchronous Event Request (0Ch): Supported 00:18:14.965 Keep Alive (18h): Supported 00:18:14.965 I/O Commands 00:18:14.965 ------------ 00:18:14.965 Flush (00h): Supported LBA-Change 00:18:14.965 Write (01h): Supported LBA-Change 00:18:14.965 Read (02h): Supported 00:18:14.965 Compare (05h): Supported 00:18:14.965 Write Zeroes (08h): Supported LBA-Change 00:18:14.965 Dataset Management (09h): Supported LBA-Change 00:18:14.965 Copy (19h): Supported LBA-Change 00:18:14.965 00:18:14.965 Error Log 00:18:14.965 ========= 00:18:14.965 00:18:14.965 Arbitration 00:18:14.965 =========== 00:18:14.965 Arbitration Burst: 1 00:18:14.965 00:18:14.965 Power Management 00:18:14.965 ================ 00:18:14.965 Number of Power States: 1 00:18:14.965 Current Power State: Power State #0 00:18:14.965 Power State #0: 00:18:14.965 Max Power: 0.00 W 00:18:14.965 Non-Operational State: Operational 00:18:14.965 Entry Latency: Not Reported 00:18:14.965 Exit Latency: Not Reported 00:18:14.965 Relative Read Throughput: 0 00:18:14.965 Relative Read Latency: 0 00:18:14.965 Relative Write Throughput: 0 00:18:14.965 Relative Write Latency: 0 00:18:14.965 Idle Power: Not Reported 00:18:14.965 Active Power: Not Reported 00:18:14.965 Non-Operational Permissive Mode: Not 
Supported 00:18:14.965 00:18:14.965 Health Information 00:18:14.965 ================== 00:18:14.965 Critical Warnings: 00:18:14.965 Available Spare Space: OK 00:18:14.965 Temperature: OK 00:18:14.965 Device Reliability: OK 00:18:14.965 Read Only: No 00:18:14.965 Volatile Memory Backup: OK 00:18:14.965 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:14.965 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:14.965 Available Spare: 0% 00:18:14.965 Available Sp[2024-12-05 12:01:48.901836] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:14.965 [2024-12-05 12:01:48.901847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:14.965 [2024-12-05 12:01:48.901878] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:18:14.965 [2024-12-05 12:01:48.901888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.965 [2024-12-05 12:01:48.901894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.965 [2024-12-05 12:01:48.901899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.965 [2024-12-05 12:01:48.901905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.965 [2024-12-05 12:01:48.905377] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:14.965 [2024-12-05 12:01:48.905395] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:18:14.965 
[2024-12-05 12:01:48.905947] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:14.965 [2024-12-05 12:01:48.905998] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:18:14.965 [2024-12-05 12:01:48.906005] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:18:14.965 [2024-12-05 12:01:48.906949] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:18:14.965 [2024-12-05 12:01:48.906961] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:18:14.965 [2024-12-05 12:01:48.907012] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:18:14.965 [2024-12-05 12:01:48.907977] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:14.965 are Threshold: 0% 00:18:14.965 Life Percentage Used: 0% 00:18:14.965 Data Units Read: 0 00:18:14.965 Data Units Written: 0 00:18:14.965 Host Read Commands: 0 00:18:14.965 Host Write Commands: 0 00:18:14.965 Controller Busy Time: 0 minutes 00:18:14.965 Power Cycles: 0 00:18:14.965 Power On Hours: 0 hours 00:18:14.965 Unsafe Shutdowns: 0 00:18:14.965 Unrecoverable Media Errors: 0 00:18:14.965 Lifetime Error Log Entries: 0 00:18:14.965 Warning Temperature Time: 0 minutes 00:18:14.965 Critical Temperature Time: 0 minutes 00:18:14.965 00:18:14.965 Number of Queues 00:18:14.965 ================ 00:18:14.965 Number of I/O Submission Queues: 127 00:18:14.965 Number of I/O Completion Queues: 127 00:18:14.965 00:18:14.965 Active Namespaces 00:18:14.965 ================= 00:18:14.965 Namespace ID:1 00:18:14.965 Error Recovery Timeout: Unlimited 
00:18:14.965 Command Set Identifier: NVM (00h) 00:18:14.965 Deallocate: Supported 00:18:14.965 Deallocated/Unwritten Error: Not Supported 00:18:14.965 Deallocated Read Value: Unknown 00:18:14.965 Deallocate in Write Zeroes: Not Supported 00:18:14.965 Deallocated Guard Field: 0xFFFF 00:18:14.965 Flush: Supported 00:18:14.965 Reservation: Supported 00:18:14.965 Namespace Sharing Capabilities: Multiple Controllers 00:18:14.965 Size (in LBAs): 131072 (0GiB) 00:18:14.965 Capacity (in LBAs): 131072 (0GiB) 00:18:14.965 Utilization (in LBAs): 131072 (0GiB) 00:18:14.965 NGUID: D8EC2CCFEAE54C4DBFDF451EF7472E09 00:18:14.965 UUID: d8ec2ccf-eae5-4c4d-bfdf-451ef7472e09 00:18:14.965 Thin Provisioning: Not Supported 00:18:14.965 Per-NS Atomic Units: Yes 00:18:14.965 Atomic Boundary Size (Normal): 0 00:18:14.965 Atomic Boundary Size (PFail): 0 00:18:14.965 Atomic Boundary Offset: 0 00:18:14.965 Maximum Single Source Range Length: 65535 00:18:14.965 Maximum Copy Length: 65535 00:18:14.965 Maximum Source Range Count: 1 00:18:14.965 NGUID/EUI64 Never Reused: No 00:18:14.965 Namespace Write Protected: No 00:18:14.965 Number of LBA Formats: 1 00:18:14.965 Current LBA Format: LBA Format #00 00:18:14.965 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:14.965 00:18:14.965 12:01:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:14.965 [2024-12-05 12:01:49.139220] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:20.229 Initializing NVMe Controllers 00:18:20.229 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:20.229 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 
00:18:20.229 Initialization complete. Launching workers. 00:18:20.229 ======================================================== 00:18:20.229 Latency(us) 00:18:20.229 Device Information : IOPS MiB/s Average min max 00:18:20.229 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39956.54 156.08 3203.79 975.82 7594.84 00:18:20.229 ======================================================== 00:18:20.229 Total : 39956.54 156.08 3203.79 975.82 7594.84 00:18:20.229 00:18:20.229 [2024-12-05 12:01:54.161228] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:20.229 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:20.229 [2024-12-05 12:01:54.390353] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:25.493 Initializing NVMe Controllers 00:18:25.493 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:25.493 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:25.493 Initialization complete. Launching workers. 
00:18:25.493 ======================================================== 00:18:25.493 Latency(us) 00:18:25.493 Device Information : IOPS MiB/s Average min max 00:18:25.493 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16042.36 62.67 7978.20 6983.48 7996.98 00:18:25.493 ======================================================== 00:18:25.493 Total : 16042.36 62.67 7978.20 6983.48 7996.98 00:18:25.493 00:18:25.493 [2024-12-05 12:01:59.424478] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:25.493 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:25.493 [2024-12-05 12:01:59.623442] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:30.769 [2024-12-05 12:02:04.723905] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:30.769 Initializing NVMe Controllers 00:18:30.769 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:30.769 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:30.769 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:18:30.769 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:18:30.769 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:18:30.769 Initialization complete. Launching workers. 
00:18:30.769 Starting thread on core 2 00:18:30.769 Starting thread on core 3 00:18:30.769 Starting thread on core 1 00:18:30.769 12:02:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:18:31.028 [2024-12-05 12:02:05.013647] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:34.316 [2024-12-05 12:02:08.074305] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:34.316 Initializing NVMe Controllers 00:18:34.316 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:34.316 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:34.316 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:18:34.316 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:18:34.316 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:18:34.316 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:18:34.316 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:34.316 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:34.316 Initialization complete. Launching workers. 
00:18:34.316 Starting thread on core 1 with urgent priority queue 00:18:34.316 Starting thread on core 2 with urgent priority queue 00:18:34.316 Starting thread on core 3 with urgent priority queue 00:18:34.316 Starting thread on core 0 with urgent priority queue 00:18:34.316 SPDK bdev Controller (SPDK1 ) core 0: 8322.33 IO/s 12.02 secs/100000 ios 00:18:34.316 SPDK bdev Controller (SPDK1 ) core 1: 8428.67 IO/s 11.86 secs/100000 ios 00:18:34.316 SPDK bdev Controller (SPDK1 ) core 2: 8047.00 IO/s 12.43 secs/100000 ios 00:18:34.316 SPDK bdev Controller (SPDK1 ) core 3: 9716.00 IO/s 10.29 secs/100000 ios 00:18:34.316 ======================================================== 00:18:34.316 00:18:34.316 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:34.316 [2024-12-05 12:02:08.361877] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:34.316 Initializing NVMe Controllers 00:18:34.316 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:34.316 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:34.316 Namespace ID: 1 size: 0GB 00:18:34.316 Initialization complete. 00:18:34.316 INFO: using host memory buffer for IO 00:18:34.316 Hello world! 
00:18:34.316 [2024-12-05 12:02:08.398114] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:34.316 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:34.575 [2024-12-05 12:02:08.673547] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:35.508 Initializing NVMe Controllers 00:18:35.508 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:35.508 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:35.508 Initialization complete. Launching workers. 00:18:35.508 submit (in ns) avg, min, max = 6278.0, 3166.7, 3999782.9 00:18:35.508 complete (in ns) avg, min, max = 20652.9, 1727.6, 4135934.3 00:18:35.508 00:18:35.508 Submit histogram 00:18:35.508 ================ 00:18:35.508 Range in us Cumulative Count 00:18:35.508 3.154 - 3.170: 0.0060% ( 1) 00:18:35.508 3.170 - 3.185: 0.0181% ( 2) 00:18:35.508 3.185 - 3.200: 0.0483% ( 5) 00:18:35.508 3.200 - 3.215: 0.1208% ( 12) 00:18:35.508 3.215 - 3.230: 1.2076% ( 180) 00:18:35.508 3.230 - 3.246: 5.2590% ( 671) 00:18:35.508 3.246 - 3.261: 10.9226% ( 938) 00:18:35.508 3.261 - 3.276: 16.3809% ( 904) 00:18:35.508 3.276 - 3.291: 23.2218% ( 1133) 00:18:35.508 3.291 - 3.307: 29.4832% ( 1037) 00:18:35.508 3.307 - 3.322: 35.2011% ( 947) 00:18:35.508 3.322 - 3.337: 42.0722% ( 1138) 00:18:35.508 3.337 - 3.352: 47.4641% ( 893) 00:18:35.508 3.352 - 3.368: 52.7050% ( 868) 00:18:35.508 3.368 - 3.383: 59.0931% ( 1058) 00:18:35.508 3.383 - 3.398: 67.9326% ( 1464) 00:18:35.508 3.398 - 3.413: 72.9743% ( 835) 00:18:35.508 3.413 - 3.429: 77.6959% ( 782) 00:18:35.508 3.429 - 3.444: 82.0251% ( 717) 00:18:35.508 3.444 - 3.459: 84.5007% ( 410) 00:18:35.508 3.459 - 3.474: 86.2637% ( 292) 
00:18:35.508 3.474 - 3.490: 87.2781% ( 168) 00:18:35.508 3.490 - 3.505: 87.8215% ( 90) 00:18:35.508 3.505 - 3.520: 88.2079% ( 64) 00:18:35.508 3.520 - 3.535: 88.7514% ( 90) 00:18:35.508 3.535 - 3.550: 89.4336% ( 113) 00:18:35.508 3.550 - 3.566: 90.4299% ( 165) 00:18:35.508 3.566 - 3.581: 91.3597% ( 154) 00:18:35.508 3.581 - 3.596: 92.2835% ( 153) 00:18:35.508 3.596 - 3.611: 93.1530% ( 144) 00:18:35.508 3.611 - 3.627: 94.1493% ( 165) 00:18:35.508 3.627 - 3.642: 95.2119% ( 176) 00:18:35.508 3.642 - 3.657: 96.1961% ( 163) 00:18:35.508 3.657 - 3.672: 96.9629% ( 127) 00:18:35.508 3.672 - 3.688: 97.6211% ( 109) 00:18:35.508 3.688 - 3.703: 98.1464% ( 87) 00:18:35.508 3.703 - 3.718: 98.5449% ( 66) 00:18:35.508 3.718 - 3.733: 98.8528% ( 51) 00:18:35.508 3.733 - 3.749: 99.0762% ( 37) 00:18:35.508 3.749 - 3.764: 99.2453% ( 28) 00:18:35.508 3.764 - 3.779: 99.4083% ( 27) 00:18:35.508 3.779 - 3.794: 99.4868% ( 13) 00:18:35.508 3.794 - 3.810: 99.5653% ( 13) 00:18:35.508 3.810 - 3.825: 99.6196% ( 9) 00:18:35.508 3.825 - 3.840: 99.6438% ( 4) 00:18:35.508 3.840 - 3.855: 99.6558% ( 2) 00:18:35.508 3.855 - 3.870: 99.6679% ( 2) 00:18:35.508 3.901 - 3.931: 99.6921% ( 4) 00:18:35.508 3.931 - 3.962: 99.6981% ( 1) 00:18:35.508 4.023 - 4.053: 99.7041% ( 1) 00:18:35.508 4.114 - 4.145: 99.7102% ( 1) 00:18:35.508 4.267 - 4.297: 99.7162% ( 1) 00:18:35.508 5.150 - 5.181: 99.7223% ( 1) 00:18:35.508 5.181 - 5.211: 99.7283% ( 1) 00:18:35.508 5.211 - 5.242: 99.7404% ( 2) 00:18:35.508 5.242 - 5.272: 99.7464% ( 1) 00:18:35.508 5.272 - 5.303: 99.7585% ( 2) 00:18:35.508 5.394 - 5.425: 99.7645% ( 1) 00:18:35.508 5.455 - 5.486: 99.7826% ( 3) 00:18:35.508 5.516 - 5.547: 99.7887% ( 1) 00:18:35.508 5.973 - 6.004: 99.7947% ( 1) 00:18:35.508 6.126 - 6.156: 99.8007% ( 1) 00:18:35.508 6.278 - 6.309: 99.8068% ( 1) 00:18:35.508 6.339 - 6.370: 99.8128% ( 1) 00:18:35.508 6.491 - 6.522: 99.8189% ( 1) 00:18:35.508 6.583 - 6.613: 99.8249% ( 1) 00:18:35.508 6.644 - 6.674: 99.8309% ( 1) 00:18:35.508 6.766 - 6.796: 
99.8370% ( 1) 00:18:35.508 6.827 - 6.857: 99.8430% ( 1) 00:18:35.508 6.857 - 6.888: 99.8491% ( 1) 00:18:35.508 7.131 - 7.162: 99.8551% ( 1) 00:18:35.508 7.314 - 7.345: 99.8611% ( 1) 00:18:35.508 7.406 - 7.436: 99.8672% ( 1) 00:18:35.508 7.467 - 7.497: 99.8732% ( 1) 00:18:35.508 7.619 - 7.650: 99.8792% ( 1) 00:18:35.508 7.710 - 7.741: 99.8853% ( 1) 00:18:35.508 7.771 - 7.802: 99.8974% ( 2) 00:18:35.508 7.924 - 7.985: 99.9094% ( 2) 00:18:35.509 8.290 - 8.350: 99.9155% ( 1) 00:18:35.509 [2024-12-05 12:02:09.695341] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:35.767 8.350 - 8.411: 99.9215% ( 1) 00:18:35.767 8.411 - 8.472: 99.9275% ( 1) 00:18:35.767 3994.575 - 4025.783: 100.0000% ( 12) 00:18:35.767 00:18:35.767 Complete histogram 00:18:35.767 ================== 00:18:35.767 Range in us Cumulative Count 00:18:35.767 1.722 - 1.730: 0.0060% ( 1) 00:18:35.767 1.730 - 1.737: 0.0121% ( 1) 00:18:35.767 1.745 - 1.752: 0.0181% ( 1) 00:18:35.767 1.752 - 1.760: 0.0604% ( 7) 00:18:35.767 1.760 - 1.768: 0.6461% ( 97) 00:18:35.767 1.768 - 1.775: 2.5118% ( 309) 00:18:35.767 1.775 - 1.783: 4.8545% ( 388) 00:18:35.767 1.783 - 1.790: 6.3760% ( 252) 00:18:35.767 1.790 - 1.798: 7.7406% ( 226) 00:18:35.767 1.798 - 1.806: 8.9905% ( 207) 00:18:35.767 1.806 - 1.813: 10.9528% ( 325) 00:18:35.767 1.813 - 1.821: 23.1313% ( 2017) 00:18:35.767 1.821 - 1.829: 51.4793% ( 4695) 00:18:35.767 1.829 - 1.836: 77.2612% ( 4270) 00:18:35.767 1.836 - 1.844: 88.8419% ( 1918) 00:18:35.767 1.844 - 1.851: 93.7689% ( 816) 00:18:35.767 1.851 - 1.859: 95.9908% ( 368) 00:18:35.767 1.859 - 1.867: 96.9327% ( 156) 00:18:35.767 1.867 - 1.874: 97.2467% ( 52) 00:18:35.767 1.874 - 1.882: 97.5003% ( 42) 00:18:35.767 1.882 - 1.890: 97.7116% ( 35) 00:18:35.767 1.890 - 1.897: 98.1524% ( 73) 00:18:35.767 1.897 - 1.905: 98.5751% ( 70) 00:18:35.767 1.905 - 1.912: 98.8468% ( 45) 00:18:35.767 1.912 - 1.920: 99.0037% ( 26) 00:18:35.767 1.920 - 1.928: 99.1004% ( 16) 
00:18:35.767 1.928 - 1.935: 99.1366% ( 6) 00:18:35.767 1.935 - 1.943: 99.1547% ( 3) 00:18:35.767 1.943 - 1.950: 99.1728% ( 3) 00:18:35.767 1.950 - 1.966: 99.1970% ( 4) 00:18:35.767 1.981 - 1.996: 99.2151% ( 3) 00:18:35.767 1.996 - 2.011: 99.2211% ( 1) 00:18:35.767 2.011 - 2.027: 99.2332% ( 2) 00:18:35.767 2.027 - 2.042: 99.2513% ( 3) 00:18:35.767 2.042 - 2.057: 99.2573% ( 1) 00:18:35.767 2.057 - 2.072: 99.2634% ( 1) 00:18:35.767 2.072 - 2.088: 99.2754% ( 2) 00:18:35.767 2.088 - 2.103: 99.2815% ( 1) 00:18:35.767 2.118 - 2.133: 99.2875% ( 1) 00:18:35.767 2.133 - 2.149: 99.3056% ( 3) 00:18:35.767 2.149 - 2.164: 99.3238% ( 3) 00:18:35.767 2.164 - 2.179: 99.3298% ( 1) 00:18:35.767 2.179 - 2.194: 99.3358% ( 1) 00:18:35.767 2.194 - 2.210: 99.3479% ( 2) 00:18:35.767 2.225 - 2.240: 99.3539% ( 1) 00:18:35.767 2.347 - 2.362: 99.3600% ( 1) 00:18:35.767 2.560 - 2.575: 99.3660% ( 1) 00:18:35.767 3.550 - 3.566: 99.3721% ( 1) 00:18:35.767 3.703 - 3.718: 99.3781% ( 1) 00:18:35.767 3.733 - 3.749: 99.3841% ( 1) 00:18:35.767 3.810 - 3.825: 99.3902% ( 1) 00:18:35.767 3.855 - 3.870: 99.3962% ( 1) 00:18:35.767 3.870 - 3.886: 99.4022% ( 1) 00:18:35.767 3.886 - 3.901: 99.4083% ( 1) 00:18:35.767 4.297 - 4.328: 99.4143% ( 1) 00:18:35.767 4.541 - 4.571: 99.4204% ( 1) 00:18:35.767 4.571 - 4.602: 99.4264% ( 1) 00:18:35.767 4.724 - 4.754: 99.4324% ( 1) 00:18:35.767 4.907 - 4.937: 99.4385% ( 1) 00:18:35.767 4.998 - 5.029: 99.4566% ( 3) 00:18:35.767 5.029 - 5.059: 99.4626% ( 1) 00:18:35.767 5.120 - 5.150: 99.4687% ( 1) 00:18:35.767 5.272 - 5.303: 99.4747% ( 1) 00:18:35.767 5.364 - 5.394: 99.4807% ( 1) 00:18:35.767 5.699 - 5.730: 99.4868% ( 1) 00:18:35.767 6.034 - 6.065: 99.4928% ( 1) 00:18:35.767 6.126 - 6.156: 99.4989% ( 1) 00:18:35.767 6.400 - 6.430: 99.5049% ( 1) 00:18:35.767 7.192 - 7.223: 99.5109% ( 1) 00:18:35.767 8.716 - 8.777: 99.5170% ( 1) 00:18:35.767 9.082 - 9.143: 99.5230% ( 1) 00:18:35.767 13.288 - 13.349: 99.5290% ( 1) 00:18:35.767 3994.575 - 4025.783: 99.9940% ( 77) 00:18:35.767 
4119.406 - 4150.613: 100.0000% ( 1) 00:18:35.767 00:18:35.767 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:18:35.767 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:35.767 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:18:35.767 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:18:35.767 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:35.767 [ 00:18:35.767 { 00:18:35.767 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:35.767 "subtype": "Discovery", 00:18:35.767 "listen_addresses": [], 00:18:35.767 "allow_any_host": true, 00:18:35.767 "hosts": [] 00:18:35.767 }, 00:18:35.767 { 00:18:35.767 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:35.767 "subtype": "NVMe", 00:18:35.767 "listen_addresses": [ 00:18:35.767 { 00:18:35.767 "trtype": "VFIOUSER", 00:18:35.767 "adrfam": "IPv4", 00:18:35.767 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:35.767 "trsvcid": "0" 00:18:35.767 } 00:18:35.767 ], 00:18:35.767 "allow_any_host": true, 00:18:35.767 "hosts": [], 00:18:35.767 "serial_number": "SPDK1", 00:18:35.767 "model_number": "SPDK bdev Controller", 00:18:35.767 "max_namespaces": 32, 00:18:35.767 "min_cntlid": 1, 00:18:35.767 "max_cntlid": 65519, 00:18:35.767 "namespaces": [ 00:18:35.767 { 00:18:35.767 "nsid": 1, 00:18:35.767 "bdev_name": "Malloc1", 00:18:35.767 "name": "Malloc1", 00:18:35.767 "nguid": "D8EC2CCFEAE54C4DBFDF451EF7472E09", 00:18:35.767 "uuid": "d8ec2ccf-eae5-4c4d-bfdf-451ef7472e09" 00:18:35.767 } 00:18:35.767 ] 00:18:35.767 }, 00:18:35.767 { 00:18:35.767 
"nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:35.767 "subtype": "NVMe", 00:18:35.767 "listen_addresses": [ 00:18:35.767 { 00:18:35.767 "trtype": "VFIOUSER", 00:18:35.767 "adrfam": "IPv4", 00:18:35.767 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:35.767 "trsvcid": "0" 00:18:35.767 } 00:18:35.767 ], 00:18:35.767 "allow_any_host": true, 00:18:35.767 "hosts": [], 00:18:35.767 "serial_number": "SPDK2", 00:18:35.767 "model_number": "SPDK bdev Controller", 00:18:35.767 "max_namespaces": 32, 00:18:35.767 "min_cntlid": 1, 00:18:35.767 "max_cntlid": 65519, 00:18:35.767 "namespaces": [ 00:18:35.767 { 00:18:35.767 "nsid": 1, 00:18:35.767 "bdev_name": "Malloc2", 00:18:35.767 "name": "Malloc2", 00:18:35.767 "nguid": "940026C9693F450CBC61C727BA707E57", 00:18:35.767 "uuid": "940026c9-693f-450c-bc61-c727ba707e57" 00:18:35.767 } 00:18:35.767 ] 00:18:35.767 } 00:18:35.767 ] 00:18:35.767 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:35.767 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:18:35.767 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=51278 00:18:35.767 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:35.767 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:18:35.767 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:35.767 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:35.767 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:18:35.767 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:35.767 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:18:36.026 [2024-12-05 12:02:10.096776] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:36.026 Malloc3 00:18:36.026 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:18:36.284 [2024-12-05 12:02:10.370906] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:36.284 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:36.284 Asynchronous Event Request test 00:18:36.284 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:36.284 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:36.284 Registering asynchronous event callbacks... 00:18:36.284 Starting namespace attribute notice tests for all controllers... 00:18:36.284 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:36.284 aer_cb - Changed Namespace 00:18:36.284 Cleaning up... 
00:18:36.544 [ 00:18:36.544 { 00:18:36.544 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:36.544 "subtype": "Discovery", 00:18:36.544 "listen_addresses": [], 00:18:36.544 "allow_any_host": true, 00:18:36.544 "hosts": [] 00:18:36.544 }, 00:18:36.544 { 00:18:36.544 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:36.544 "subtype": "NVMe", 00:18:36.544 "listen_addresses": [ 00:18:36.545 { 00:18:36.545 "trtype": "VFIOUSER", 00:18:36.545 "adrfam": "IPv4", 00:18:36.545 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:36.545 "trsvcid": "0" 00:18:36.545 } 00:18:36.545 ], 00:18:36.545 "allow_any_host": true, 00:18:36.545 "hosts": [], 00:18:36.545 "serial_number": "SPDK1", 00:18:36.545 "model_number": "SPDK bdev Controller", 00:18:36.545 "max_namespaces": 32, 00:18:36.545 "min_cntlid": 1, 00:18:36.545 "max_cntlid": 65519, 00:18:36.545 "namespaces": [ 00:18:36.545 { 00:18:36.545 "nsid": 1, 00:18:36.545 "bdev_name": "Malloc1", 00:18:36.545 "name": "Malloc1", 00:18:36.545 "nguid": "D8EC2CCFEAE54C4DBFDF451EF7472E09", 00:18:36.545 "uuid": "d8ec2ccf-eae5-4c4d-bfdf-451ef7472e09" 00:18:36.545 }, 00:18:36.545 { 00:18:36.545 "nsid": 2, 00:18:36.545 "bdev_name": "Malloc3", 00:18:36.545 "name": "Malloc3", 00:18:36.545 "nguid": "9BFF3728C6E240E9AAD7565D138407C4", 00:18:36.545 "uuid": "9bff3728-c6e2-40e9-aad7-565d138407c4" 00:18:36.545 } 00:18:36.545 ] 00:18:36.545 }, 00:18:36.545 { 00:18:36.545 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:36.545 "subtype": "NVMe", 00:18:36.545 "listen_addresses": [ 00:18:36.545 { 00:18:36.545 "trtype": "VFIOUSER", 00:18:36.545 "adrfam": "IPv4", 00:18:36.545 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:36.545 "trsvcid": "0" 00:18:36.545 } 00:18:36.545 ], 00:18:36.545 "allow_any_host": true, 00:18:36.545 "hosts": [], 00:18:36.545 "serial_number": "SPDK2", 00:18:36.545 "model_number": "SPDK bdev Controller", 00:18:36.545 "max_namespaces": 32, 00:18:36.545 "min_cntlid": 1, 00:18:36.545 "max_cntlid": 65519, 00:18:36.545 "namespaces": [ 
00:18:36.545 { 00:18:36.545 "nsid": 1, 00:18:36.545 "bdev_name": "Malloc2", 00:18:36.545 "name": "Malloc2", 00:18:36.545 "nguid": "940026C9693F450CBC61C727BA707E57", 00:18:36.545 "uuid": "940026c9-693f-450c-bc61-c727ba707e57" 00:18:36.545 } 00:18:36.545 ] 00:18:36.545 } 00:18:36.545 ] 00:18:36.545 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 51278 00:18:36.545 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:36.545 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:36.545 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:18:36.545 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:36.545 [2024-12-05 12:02:10.624693] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:18:36.545 [2024-12-05 12:02:10.624726] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid51484 ] 00:18:36.545 [2024-12-05 12:02:10.662086] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:18:36.545 [2024-12-05 12:02:10.671371] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:36.545 [2024-12-05 12:02:10.671395] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f0f42d2c000 00:18:36.545 [2024-12-05 12:02:10.672379] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:36.545 [2024-12-05 12:02:10.673395] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:36.545 [2024-12-05 12:02:10.674393] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:36.545 [2024-12-05 12:02:10.675397] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:36.545 [2024-12-05 12:02:10.676404] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:36.545 [2024-12-05 12:02:10.677410] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:36.545 [2024-12-05 12:02:10.678417] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:36.545 
[2024-12-05 12:02:10.679426] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:36.545 [2024-12-05 12:02:10.680431] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:36.545 [2024-12-05 12:02:10.680441] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f0f42d21000 00:18:36.545 [2024-12-05 12:02:10.681353] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:36.545 [2024-12-05 12:02:10.690717] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:18:36.545 [2024-12-05 12:02:10.690740] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:18:36.545 [2024-12-05 12:02:10.695828] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:36.545 [2024-12-05 12:02:10.695865] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:36.545 [2024-12-05 12:02:10.695935] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:18:36.545 [2024-12-05 12:02:10.695948] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:18:36.545 [2024-12-05 12:02:10.695953] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:18:36.545 [2024-12-05 12:02:10.696829] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:18:36.545 [2024-12-05 12:02:10.696840] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:18:36.545 [2024-12-05 12:02:10.696847] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:18:36.545 [2024-12-05 12:02:10.697839] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:36.545 [2024-12-05 12:02:10.697848] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:18:36.545 [2024-12-05 12:02:10.697857] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:36.545 [2024-12-05 12:02:10.698841] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:18:36.545 [2024-12-05 12:02:10.698850] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:36.545 [2024-12-05 12:02:10.699849] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:18:36.545 [2024-12-05 12:02:10.699858] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:18:36.545 [2024-12-05 12:02:10.699862] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:36.545 [2024-12-05 12:02:10.699869] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:36.545 [2024-12-05 12:02:10.699976] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:18:36.545 [2024-12-05 12:02:10.699980] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:36.545 [2024-12-05 12:02:10.699985] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:18:36.545 [2024-12-05 12:02:10.700860] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:18:36.545 [2024-12-05 12:02:10.701872] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:18:36.545 [2024-12-05 12:02:10.702876] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:36.545 [2024-12-05 12:02:10.703872] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:36.545 [2024-12-05 12:02:10.703911] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:36.545 [2024-12-05 12:02:10.704885] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:18:36.545 [2024-12-05 12:02:10.704894] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:36.545 [2024-12-05 12:02:10.704898] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:36.545 [2024-12-05 12:02:10.704915] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:18:36.545 [2024-12-05 12:02:10.704924] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:36.545 [2024-12-05 12:02:10.704939] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:36.545 [2024-12-05 12:02:10.704944] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:36.545 [2024-12-05 12:02:10.704948] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:36.545 [2024-12-05 12:02:10.704959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:36.545 [2024-12-05 12:02:10.712374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:36.546 [2024-12-05 12:02:10.712388] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:18:36.546 [2024-12-05 12:02:10.712393] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:18:36.546 [2024-12-05 12:02:10.712396] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:18:36.546 [2024-12-05 12:02:10.712401] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:36.546 [2024-12-05 12:02:10.712405] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:18:36.546 [2024-12-05 12:02:10.712409] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:18:36.546 [2024-12-05 12:02:10.712414] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:18:36.546 [2024-12-05 12:02:10.712421] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:36.546 [2024-12-05 12:02:10.712430] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:36.546 [2024-12-05 12:02:10.720372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:36.546 [2024-12-05 12:02:10.720384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:36.546 [2024-12-05 12:02:10.720392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:36.546 [2024-12-05 12:02:10.720399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:36.546 [2024-12-05 12:02:10.720406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:36.546 [2024-12-05 12:02:10.720410] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:36.546 [2024-12-05 12:02:10.720419] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:36.546 [2024-12-05 12:02:10.720427] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:36.546 [2024-12-05 12:02:10.728373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:36.546 [2024-12-05 12:02:10.728381] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:18:36.546 [2024-12-05 12:02:10.728386] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:36.546 [2024-12-05 12:02:10.728396] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:18:36.546 [2024-12-05 12:02:10.728401] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:18:36.546 [2024-12-05 12:02:10.728409] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:36.546 [2024-12-05 12:02:10.736375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:36.546 [2024-12-05 12:02:10.736436] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:18:36.546 [2024-12-05 12:02:10.736449] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:36.546 
[2024-12-05 12:02:10.736457] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:36.546 [2024-12-05 12:02:10.736461] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:36.546 [2024-12-05 12:02:10.736465] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:36.546 [2024-12-05 12:02:10.736472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:36.804 [2024-12-05 12:02:10.744374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:36.804 [2024-12-05 12:02:10.744394] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:18:36.804 [2024-12-05 12:02:10.744402] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:18:36.804 [2024-12-05 12:02:10.744410] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:18:36.804 [2024-12-05 12:02:10.744417] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:36.804 [2024-12-05 12:02:10.744421] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:36.804 [2024-12-05 12:02:10.744424] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:36.804 [2024-12-05 12:02:10.744431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:36.804 [2024-12-05 12:02:10.752374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:36.804 [2024-12-05 12:02:10.752389] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:36.804 [2024-12-05 12:02:10.752397] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:36.804 [2024-12-05 12:02:10.752404] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:36.804 [2024-12-05 12:02:10.752408] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:36.804 [2024-12-05 12:02:10.752411] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:36.804 [2024-12-05 12:02:10.752418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:36.804 [2024-12-05 12:02:10.760374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:36.804 [2024-12-05 12:02:10.760386] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:36.804 [2024-12-05 12:02:10.760393] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:36.804 [2024-12-05 12:02:10.760400] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:18:36.804 [2024-12-05 12:02:10.760405] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:18:36.804 [2024-12-05 12:02:10.760410] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:36.804 [2024-12-05 12:02:10.760417] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:18:36.804 [2024-12-05 12:02:10.760422] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:36.804 [2024-12-05 12:02:10.760426] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:18:36.804 [2024-12-05 12:02:10.760431] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:18:36.804 [2024-12-05 12:02:10.760447] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:36.804 [2024-12-05 12:02:10.768373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:36.804 [2024-12-05 12:02:10.768385] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:36.804 [2024-12-05 12:02:10.776371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:36.804 [2024-12-05 12:02:10.776383] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:36.804 [2024-12-05 12:02:10.784371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:36.804 [2024-12-05 
12:02:10.784383] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:36.804 [2024-12-05 12:02:10.792375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:36.804 [2024-12-05 12:02:10.792390] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:36.804 [2024-12-05 12:02:10.792395] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:36.804 [2024-12-05 12:02:10.792398] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:36.804 [2024-12-05 12:02:10.792401] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:36.804 [2024-12-05 12:02:10.792404] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:36.804 [2024-12-05 12:02:10.792410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:36.804 [2024-12-05 12:02:10.792416] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:36.804 [2024-12-05 12:02:10.792420] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:36.804 [2024-12-05 12:02:10.792423] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:36.804 [2024-12-05 12:02:10.792428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:36.804 [2024-12-05 12:02:10.792434] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:36.804 [2024-12-05 12:02:10.792438] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:36.804 [2024-12-05 12:02:10.792441] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:36.804 [2024-12-05 12:02:10.792446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:36.804 [2024-12-05 12:02:10.792453] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:36.804 [2024-12-05 12:02:10.792459] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:36.805 [2024-12-05 12:02:10.792462] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:36.805 [2024-12-05 12:02:10.792467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:36.805 [2024-12-05 12:02:10.800373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:36.805 [2024-12-05 12:02:10.800386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:36.805 [2024-12-05 12:02:10.800396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:36.805 [2024-12-05 12:02:10.800402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:36.805 ===================================================== 00:18:36.805 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:36.805 ===================================================== 00:18:36.805 Controller Capabilities/Features 00:18:36.805 
================================ 00:18:36.805 Vendor ID: 4e58 00:18:36.805 Subsystem Vendor ID: 4e58 00:18:36.805 Serial Number: SPDK2 00:18:36.805 Model Number: SPDK bdev Controller 00:18:36.805 Firmware Version: 25.01 00:18:36.805 Recommended Arb Burst: 6 00:18:36.805 IEEE OUI Identifier: 8d 6b 50 00:18:36.805 Multi-path I/O 00:18:36.805 May have multiple subsystem ports: Yes 00:18:36.805 May have multiple controllers: Yes 00:18:36.805 Associated with SR-IOV VF: No 00:18:36.805 Max Data Transfer Size: 131072 00:18:36.805 Max Number of Namespaces: 32 00:18:36.805 Max Number of I/O Queues: 127 00:18:36.805 NVMe Specification Version (VS): 1.3 00:18:36.805 NVMe Specification Version (Identify): 1.3 00:18:36.805 Maximum Queue Entries: 256 00:18:36.805 Contiguous Queues Required: Yes 00:18:36.805 Arbitration Mechanisms Supported 00:18:36.805 Weighted Round Robin: Not Supported 00:18:36.805 Vendor Specific: Not Supported 00:18:36.805 Reset Timeout: 15000 ms 00:18:36.805 Doorbell Stride: 4 bytes 00:18:36.805 NVM Subsystem Reset: Not Supported 00:18:36.805 Command Sets Supported 00:18:36.805 NVM Command Set: Supported 00:18:36.805 Boot Partition: Not Supported 00:18:36.805 Memory Page Size Minimum: 4096 bytes 00:18:36.805 Memory Page Size Maximum: 4096 bytes 00:18:36.805 Persistent Memory Region: Not Supported 00:18:36.805 Optional Asynchronous Events Supported 00:18:36.805 Namespace Attribute Notices: Supported 00:18:36.805 Firmware Activation Notices: Not Supported 00:18:36.805 ANA Change Notices: Not Supported 00:18:36.805 PLE Aggregate Log Change Notices: Not Supported 00:18:36.805 LBA Status Info Alert Notices: Not Supported 00:18:36.805 EGE Aggregate Log Change Notices: Not Supported 00:18:36.805 Normal NVM Subsystem Shutdown event: Not Supported 00:18:36.805 Zone Descriptor Change Notices: Not Supported 00:18:36.805 Discovery Log Change Notices: Not Supported 00:18:36.805 Controller Attributes 00:18:36.805 128-bit Host Identifier: Supported 00:18:36.805 
Non-Operational Permissive Mode: Not Supported 00:18:36.805 NVM Sets: Not Supported 00:18:36.805 Read Recovery Levels: Not Supported 00:18:36.805 Endurance Groups: Not Supported 00:18:36.805 Predictable Latency Mode: Not Supported 00:18:36.805 Traffic Based Keep ALive: Not Supported 00:18:36.805 Namespace Granularity: Not Supported 00:18:36.805 SQ Associations: Not Supported 00:18:36.805 UUID List: Not Supported 00:18:36.805 Multi-Domain Subsystem: Not Supported 00:18:36.805 Fixed Capacity Management: Not Supported 00:18:36.805 Variable Capacity Management: Not Supported 00:18:36.805 Delete Endurance Group: Not Supported 00:18:36.805 Delete NVM Set: Not Supported 00:18:36.805 Extended LBA Formats Supported: Not Supported 00:18:36.805 Flexible Data Placement Supported: Not Supported 00:18:36.805 00:18:36.805 Controller Memory Buffer Support 00:18:36.805 ================================ 00:18:36.805 Supported: No 00:18:36.805 00:18:36.805 Persistent Memory Region Support 00:18:36.805 ================================ 00:18:36.805 Supported: No 00:18:36.805 00:18:36.805 Admin Command Set Attributes 00:18:36.805 ============================ 00:18:36.805 Security Send/Receive: Not Supported 00:18:36.805 Format NVM: Not Supported 00:18:36.805 Firmware Activate/Download: Not Supported 00:18:36.805 Namespace Management: Not Supported 00:18:36.805 Device Self-Test: Not Supported 00:18:36.805 Directives: Not Supported 00:18:36.805 NVMe-MI: Not Supported 00:18:36.805 Virtualization Management: Not Supported 00:18:36.805 Doorbell Buffer Config: Not Supported 00:18:36.805 Get LBA Status Capability: Not Supported 00:18:36.805 Command & Feature Lockdown Capability: Not Supported 00:18:36.805 Abort Command Limit: 4 00:18:36.805 Async Event Request Limit: 4 00:18:36.805 Number of Firmware Slots: N/A 00:18:36.805 Firmware Slot 1 Read-Only: N/A 00:18:36.805 Firmware Activation Without Reset: N/A 00:18:36.805 Multiple Update Detection Support: N/A 00:18:36.805 Firmware Update 
Granularity: No Information Provided 00:18:36.805 Per-Namespace SMART Log: No 00:18:36.805 Asymmetric Namespace Access Log Page: Not Supported 00:18:36.805 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:18:36.805 Command Effects Log Page: Supported 00:18:36.805 Get Log Page Extended Data: Supported 00:18:36.805 Telemetry Log Pages: Not Supported 00:18:36.805 Persistent Event Log Pages: Not Supported 00:18:36.805 Supported Log Pages Log Page: May Support 00:18:36.805 Commands Supported & Effects Log Page: Not Supported 00:18:36.805 Feature Identifiers & Effects Log Page:May Support 00:18:36.805 NVMe-MI Commands & Effects Log Page: May Support 00:18:36.805 Data Area 4 for Telemetry Log: Not Supported 00:18:36.805 Error Log Page Entries Supported: 128 00:18:36.805 Keep Alive: Supported 00:18:36.805 Keep Alive Granularity: 10000 ms 00:18:36.805 00:18:36.805 NVM Command Set Attributes 00:18:36.805 ========================== 00:18:36.805 Submission Queue Entry Size 00:18:36.805 Max: 64 00:18:36.805 Min: 64 00:18:36.805 Completion Queue Entry Size 00:18:36.805 Max: 16 00:18:36.805 Min: 16 00:18:36.805 Number of Namespaces: 32 00:18:36.805 Compare Command: Supported 00:18:36.805 Write Uncorrectable Command: Not Supported 00:18:36.805 Dataset Management Command: Supported 00:18:36.805 Write Zeroes Command: Supported 00:18:36.805 Set Features Save Field: Not Supported 00:18:36.805 Reservations: Not Supported 00:18:36.805 Timestamp: Not Supported 00:18:36.805 Copy: Supported 00:18:36.805 Volatile Write Cache: Present 00:18:36.805 Atomic Write Unit (Normal): 1 00:18:36.805 Atomic Write Unit (PFail): 1 00:18:36.805 Atomic Compare & Write Unit: 1 00:18:36.805 Fused Compare & Write: Supported 00:18:36.805 Scatter-Gather List 00:18:36.805 SGL Command Set: Supported (Dword aligned) 00:18:36.805 SGL Keyed: Not Supported 00:18:36.805 SGL Bit Bucket Descriptor: Not Supported 00:18:36.805 SGL Metadata Pointer: Not Supported 00:18:36.805 Oversized SGL: Not Supported 00:18:36.805 SGL 
Metadata Address: Not Supported 00:18:36.805 SGL Offset: Not Supported 00:18:36.805 Transport SGL Data Block: Not Supported 00:18:36.805 Replay Protected Memory Block: Not Supported 00:18:36.805 00:18:36.805 Firmware Slot Information 00:18:36.805 ========================= 00:18:36.805 Active slot: 1 00:18:36.805 Slot 1 Firmware Revision: 25.01 00:18:36.805 00:18:36.805 00:18:36.805 Commands Supported and Effects 00:18:36.805 ============================== 00:18:36.805 Admin Commands 00:18:36.805 -------------- 00:18:36.805 Get Log Page (02h): Supported 00:18:36.805 Identify (06h): Supported 00:18:36.805 Abort (08h): Supported 00:18:36.805 Set Features (09h): Supported 00:18:36.805 Get Features (0Ah): Supported 00:18:36.805 Asynchronous Event Request (0Ch): Supported 00:18:36.805 Keep Alive (18h): Supported 00:18:36.805 I/O Commands 00:18:36.805 ------------ 00:18:36.805 Flush (00h): Supported LBA-Change 00:18:36.805 Write (01h): Supported LBA-Change 00:18:36.805 Read (02h): Supported 00:18:36.805 Compare (05h): Supported 00:18:36.805 Write Zeroes (08h): Supported LBA-Change 00:18:36.805 Dataset Management (09h): Supported LBA-Change 00:18:36.805 Copy (19h): Supported LBA-Change 00:18:36.805 00:18:36.805 Error Log 00:18:36.805 ========= 00:18:36.805 00:18:36.805 Arbitration 00:18:36.805 =========== 00:18:36.805 Arbitration Burst: 1 00:18:36.805 00:18:36.805 Power Management 00:18:36.805 ================ 00:18:36.805 Number of Power States: 1 00:18:36.805 Current Power State: Power State #0 00:18:36.805 Power State #0: 00:18:36.805 Max Power: 0.00 W 00:18:36.805 Non-Operational State: Operational 00:18:36.805 Entry Latency: Not Reported 00:18:36.805 Exit Latency: Not Reported 00:18:36.805 Relative Read Throughput: 0 00:18:36.805 Relative Read Latency: 0 00:18:36.805 Relative Write Throughput: 0 00:18:36.805 Relative Write Latency: 0 00:18:36.805 Idle Power: Not Reported 00:18:36.805 Active Power: Not Reported 00:18:36.805 Non-Operational Permissive Mode: Not 
Supported 00:18:36.805 00:18:36.805 Health Information 00:18:36.805 ================== 00:18:36.805 Critical Warnings: 00:18:36.805 Available Spare Space: OK 00:18:36.805 Temperature: OK 00:18:36.805 Device Reliability: OK 00:18:36.805 Read Only: No 00:18:36.805 Volatile Memory Backup: OK 00:18:36.805 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:36.805 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:36.805 Available Spare: 0% 00:18:36.805 [2024-12-05 12:02:10.800489] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:36.805 [2024-12-05 12:02:10.808373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:36.805 [2024-12-05 12:02:10.808403] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:18:36.805 [2024-12-05 12:02:10.808411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.805 [2024-12-05 12:02:10.808417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.805 [2024-12-05 12:02:10.808423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.805 [2024-12-05 12:02:10.808428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.805 [2024-12-05 12:02:10.808476] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:36.805 [2024-12-05 12:02:10.808486] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:18:36.805 
[2024-12-05 12:02:10.809478] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:36.805 [2024-12-05 12:02:10.809523] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:18:36.805 [2024-12-05 12:02:10.809530] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:18:36.805 [2024-12-05 12:02:10.810481] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:18:36.805 [2024-12-05 12:02:10.810492] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:18:36.805 [2024-12-05 12:02:10.810539] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:18:36.805 [2024-12-05 12:02:10.811501] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:36.805 Available Spare Threshold: 0% 00:18:36.805 Life Percentage Used: 0% 00:18:36.805 Data Units Read: 0 00:18:36.805 Data Units Written: 0 00:18:36.805 Host Read Commands: 0 00:18:36.805 Host Write Commands: 0 00:18:36.805 Controller Busy Time: 0 minutes 00:18:36.805 Power Cycles: 0 00:18:36.805 Power On Hours: 0 hours 00:18:36.805 Unsafe Shutdowns: 0 00:18:36.805 Unrecoverable Media Errors: 0 00:18:36.805 Lifetime Error Log Entries: 0 00:18:36.805 Warning Temperature Time: 0 minutes 00:18:36.805 Critical Temperature Time: 0 minutes 00:18:36.805 00:18:36.805 Number of Queues 00:18:36.805 ================ 00:18:36.805 Number of I/O Submission Queues: 127 00:18:36.806 Number of I/O Completion Queues: 127 00:18:36.806 00:18:36.806 Active Namespaces 00:18:36.806 ================= 00:18:36.806 Namespace ID:1 00:18:36.806 Error Recovery Timeout: Unlimited 
00:18:36.806 Command Set Identifier: NVM (00h) 00:18:36.806 Deallocate: Supported 00:18:36.806 Deallocated/Unwritten Error: Not Supported 00:18:36.806 Deallocated Read Value: Unknown 00:18:36.806 Deallocate in Write Zeroes: Not Supported 00:18:36.806 Deallocated Guard Field: 0xFFFF 00:18:36.806 Flush: Supported 00:18:36.806 Reservation: Supported 00:18:36.806 Namespace Sharing Capabilities: Multiple Controllers 00:18:36.806 Size (in LBAs): 131072 (0GiB) 00:18:36.806 Capacity (in LBAs): 131072 (0GiB) 00:18:36.806 Utilization (in LBAs): 131072 (0GiB) 00:18:36.806 NGUID: 940026C9693F450CBC61C727BA707E57 00:18:36.806 UUID: 940026c9-693f-450c-bc61-c727ba707e57 00:18:36.806 Thin Provisioning: Not Supported 00:18:36.806 Per-NS Atomic Units: Yes 00:18:36.806 Atomic Boundary Size (Normal): 0 00:18:36.806 Atomic Boundary Size (PFail): 0 00:18:36.806 Atomic Boundary Offset: 0 00:18:36.806 Maximum Single Source Range Length: 65535 00:18:36.806 Maximum Copy Length: 65535 00:18:36.806 Maximum Source Range Count: 1 00:18:36.806 NGUID/EUI64 Never Reused: No 00:18:36.806 Namespace Write Protected: No 00:18:36.806 Number of LBA Formats: 1 00:18:36.806 Current LBA Format: LBA Format #00 00:18:36.806 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:36.806 00:18:36.806 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:37.137 [2024-12-05 12:02:11.044680] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:42.475 Initializing NVMe Controllers 00:18:42.475 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:42.475 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:18:42.475 Initialization complete. Launching workers. 00:18:42.475 ======================================================== 00:18:42.475 Latency(us) 00:18:42.475 Device Information : IOPS MiB/s Average min max 00:18:42.475 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39945.64 156.04 3204.18 971.93 8592.12 00:18:42.475 ======================================================== 00:18:42.475 Total : 39945.64 156.04 3204.18 971.93 8592.12 00:18:42.475 00:18:42.475 [2024-12-05 12:02:16.147625] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:42.475 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:42.475 [2024-12-05 12:02:16.385330] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:47.760 Initializing NVMe Controllers 00:18:47.760 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:47.760 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:47.760 Initialization complete. Launching workers. 
00:18:47.760 ======================================================== 00:18:47.760 Latency(us) 00:18:47.760 Device Information : IOPS MiB/s Average min max 00:18:47.760 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39931.39 155.98 3205.05 964.89 7602.77 00:18:47.760 ======================================================== 00:18:47.760 Total : 39931.39 155.98 3205.05 964.89 7602.77 00:18:47.760 00:18:47.760 [2024-12-05 12:02:21.401467] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:47.760 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:47.760 [2024-12-05 12:02:21.607724] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:53.030 [2024-12-05 12:02:26.745460] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:53.030 Initializing NVMe Controllers 00:18:53.030 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:53.030 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:53.030 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:18:53.030 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:18:53.030 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:18:53.030 Initialization complete. Launching workers. 
00:18:53.030 Starting thread on core 2 00:18:53.030 Starting thread on core 3 00:18:53.030 Starting thread on core 1 00:18:53.030 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:18:53.030 [2024-12-05 12:02:27.047830] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:56.317 [2024-12-05 12:02:30.112575] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:56.317 Initializing NVMe Controllers 00:18:56.317 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:56.317 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:56.317 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:18:56.317 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:18:56.317 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:18:56.317 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:18:56.317 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:56.317 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:56.317 Initialization complete. Launching workers. 
00:18:56.317 Starting thread on core 1 with urgent priority queue 00:18:56.317 Starting thread on core 2 with urgent priority queue 00:18:56.317 Starting thread on core 3 with urgent priority queue 00:18:56.317 Starting thread on core 0 with urgent priority queue 00:18:56.317 SPDK bdev Controller (SPDK2 ) core 0: 5453.67 IO/s 18.34 secs/100000 ios 00:18:56.318 SPDK bdev Controller (SPDK2 ) core 1: 5054.67 IO/s 19.78 secs/100000 ios 00:18:56.318 SPDK bdev Controller (SPDK2 ) core 2: 6048.33 IO/s 16.53 secs/100000 ios 00:18:56.318 SPDK bdev Controller (SPDK2 ) core 3: 5799.33 IO/s 17.24 secs/100000 ios 00:18:56.318 ======================================================== 00:18:56.318 00:18:56.318 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:56.318 [2024-12-05 12:02:30.402778] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:56.318 Initializing NVMe Controllers 00:18:56.318 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:56.318 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:56.318 Namespace ID: 1 size: 0GB 00:18:56.318 Initialization complete. 00:18:56.318 INFO: using host memory buffer for IO 00:18:56.318 Hello world! 
00:18:56.318 [2024-12-05 12:02:30.412833] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:56.318 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:56.575 [2024-12-05 12:02:30.688764] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:57.950 Initializing NVMe Controllers 00:18:57.950 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:57.950 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:57.950 Initialization complete. Launching workers. 00:18:57.950 submit (in ns) avg, min, max = 7524.9, 3181.9, 4995300.0 00:18:57.950 complete (in ns) avg, min, max = 18859.8, 1761.0, 4994769.5 00:18:57.950 00:18:57.950 Submit histogram 00:18:57.950 ================ 00:18:57.950 Range in us Cumulative Count 00:18:57.950 3.170 - 3.185: 0.0060% ( 1) 00:18:57.950 3.185 - 3.200: 0.0359% ( 5) 00:18:57.950 3.200 - 3.215: 0.2331% ( 33) 00:18:57.950 3.215 - 3.230: 0.6095% ( 63) 00:18:57.950 3.230 - 3.246: 1.6015% ( 166) 00:18:57.950 3.246 - 3.261: 4.3684% ( 463) 00:18:57.950 3.261 - 3.276: 9.6092% ( 877) 00:18:57.950 3.276 - 3.291: 15.6866% ( 1017) 00:18:57.950 3.291 - 3.307: 21.9613% ( 1050) 00:18:57.950 3.307 - 3.322: 28.8216% ( 1148) 00:18:57.950 3.322 - 3.337: 34.5285% ( 955) 00:18:57.950 3.337 - 3.352: 39.3988% ( 815) 00:18:57.950 3.352 - 3.368: 45.1655% ( 965) 00:18:57.950 3.368 - 3.383: 50.9442% ( 967) 00:18:57.950 3.383 - 3.398: 56.0834% ( 860) 00:18:57.950 3.398 - 3.413: 62.6509% ( 1099) 00:18:57.950 3.413 - 3.429: 71.4474% ( 1472) 00:18:57.950 3.429 - 3.444: 76.2818% ( 809) 00:18:57.950 3.444 - 3.459: 80.4410% ( 696) 00:18:57.950 3.459 - 3.474: 83.7696% ( 557) 00:18:57.950 3.474 - 3.490: 85.8014% ( 340) 
00:18:57.950 3.490 - 3.505: 87.1101% ( 219) 00:18:57.950 3.505 - 3.520: 87.6061% ( 83) 00:18:57.950 3.520 - 3.535: 88.0124% ( 68) 00:18:57.950 3.535 - 3.550: 88.3710% ( 60) 00:18:57.950 3.550 - 3.566: 89.1000% ( 122) 00:18:57.950 3.566 - 3.581: 90.0621% ( 161) 00:18:57.950 3.581 - 3.596: 90.8988% ( 140) 00:18:57.950 3.596 - 3.611: 91.7772% ( 147) 00:18:57.950 3.611 - 3.627: 92.6497% ( 146) 00:18:57.950 3.627 - 3.642: 93.5520% ( 151) 00:18:57.950 3.642 - 3.657: 94.4365% ( 148) 00:18:57.950 3.657 - 3.672: 95.5659% ( 189) 00:18:57.950 3.672 - 3.688: 96.4862% ( 154) 00:18:57.950 3.688 - 3.703: 97.2989% ( 136) 00:18:57.950 3.703 - 3.718: 97.9802% ( 114) 00:18:57.950 3.718 - 3.733: 98.3626% ( 64) 00:18:57.950 3.733 - 3.749: 98.7092% ( 58) 00:18:57.950 3.749 - 3.764: 99.0558% ( 58) 00:18:57.950 3.764 - 3.779: 99.2769% ( 37) 00:18:57.950 3.779 - 3.794: 99.3964% ( 20) 00:18:57.950 3.794 - 3.810: 99.5040% ( 18) 00:18:57.950 3.810 - 3.825: 99.5638% ( 10) 00:18:57.950 3.825 - 3.840: 99.6056% ( 7) 00:18:57.950 3.840 - 3.855: 99.6175% ( 2) 00:18:57.951 3.855 - 3.870: 99.6295% ( 2) 00:18:57.951 3.886 - 3.901: 99.6355% ( 1) 00:18:57.951 5.181 - 5.211: 99.6414% ( 1) 00:18:57.951 5.242 - 5.272: 99.6474% ( 1) 00:18:57.951 5.333 - 5.364: 99.6594% ( 2) 00:18:57.951 5.364 - 5.394: 99.6654% ( 1) 00:18:57.951 5.455 - 5.486: 99.6713% ( 1) 00:18:57.951 5.516 - 5.547: 99.6773% ( 1) 00:18:57.951 5.577 - 5.608: 99.6833% ( 1) 00:18:57.951 5.638 - 5.669: 99.6952% ( 2) 00:18:57.951 5.760 - 5.790: 99.7012% ( 1) 00:18:57.951 5.821 - 5.851: 99.7072% ( 1) 00:18:57.951 6.004 - 6.034: 99.7132% ( 1) 00:18:57.951 6.156 - 6.187: 99.7191% ( 1) 00:18:57.951 6.278 - 6.309: 99.7251% ( 1) 00:18:57.951 6.339 - 6.370: 99.7371% ( 2) 00:18:57.951 6.430 - 6.461: 99.7430% ( 1) 00:18:57.951 6.491 - 6.522: 99.7490% ( 1) 00:18:57.951 6.552 - 6.583: 99.7550% ( 1) 00:18:57.951 6.644 - 6.674: 99.7610% ( 1) 00:18:57.951 6.735 - 6.766: 99.7669% ( 1) 00:18:57.951 6.766 - 6.796: 99.7729% ( 1) 00:18:57.951 6.918 - 6.949: 
99.7789% ( 1) 00:18:57.951 7.101 - 7.131: 99.7849% ( 1) 00:18:57.951 7.192 - 7.223: 99.7908% ( 1) 00:18:57.951 7.223 - 7.253: 99.7968% ( 1) 00:18:57.951 7.253 - 7.284: 99.8028% ( 1) 00:18:57.951 7.284 - 7.314: 99.8088% ( 1) 00:18:57.951 7.314 - 7.345: 99.8147% ( 1) 00:18:57.951 7.345 - 7.375: 99.8207% ( 1) 00:18:57.951 7.528 - 7.558: 99.8267% ( 1) 00:18:57.951 7.863 - 7.924: 99.8327% ( 1) 00:18:57.951 7.924 - 7.985: 99.8387% ( 1) 00:18:57.951 8.229 - 8.290: 99.8446% ( 1) 00:18:57.951 [2024-12-05 12:02:31.780347] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:57.951 8.290 - 8.350: 99.8566% ( 2) 00:18:57.951 8.350 - 8.411: 99.8626% ( 1) 00:18:57.951 8.533 - 8.594: 99.8685% ( 1) 00:18:57.951 8.594 - 8.655: 99.8745% ( 1) 00:18:57.951 9.204 - 9.265: 99.8805% ( 1) 00:18:57.951 9.265 - 9.326: 99.8865% ( 1) 00:18:57.951 10.362 - 10.423: 99.8924% ( 1) 00:18:57.951 19.261 - 19.383: 99.8984% ( 1) 00:18:57.951 3994.575 - 4025.783: 99.9940% ( 16) 00:18:57.951 4993.219 - 5024.427: 100.0000% ( 1) 00:18:57.951 00:18:57.951 Complete histogram 00:18:57.951 ================== 00:18:57.951 Range in us Cumulative Count 00:18:57.951 1.760 - 1.768: 0.0359% ( 6) 00:18:57.951 1.768 - 1.775: 0.1912% ( 26) 00:18:57.951 1.775 - 1.783: 0.5856% ( 66) 00:18:57.951 1.783 - 1.790: 1.5179% ( 156) 00:18:57.951 1.790 - 1.798: 2.3903% ( 146) 00:18:57.951 1.798 - 1.806: 3.1074% ( 120) 00:18:57.951 1.806 - 1.813: 4.8643% ( 294) 00:18:57.951 1.813 - 1.821: 17.2882% ( 2079) 00:18:57.951 1.821 - 1.829: 48.1774% ( 5169) 00:18:57.951 1.829 - 1.836: 75.3854% ( 4553) 00:18:57.951 1.836 - 1.844: 88.1260% ( 2132) 00:18:57.951 1.844 - 1.851: 92.6497% ( 757) 00:18:57.951 1.851 - 1.859: 95.2432% ( 434) 00:18:57.951 1.859 - 1.867: 96.5639% ( 221) 00:18:57.951 1.867 - 1.874: 97.0838% ( 87) 00:18:57.951 1.874 - 1.882: 97.3527% ( 45) 00:18:57.951 1.882 - 1.890: 97.6395% ( 48) 00:18:57.951 1.890 - 1.897: 97.9563% ( 53) 00:18:57.951 1.897 - 1.905: 98.3805% ( 
71) 00:18:57.951 1.905 - 1.912: 98.6674% ( 48) 00:18:57.951 1.912 - 1.920: 98.9363% ( 45) 00:18:57.951 1.920 - 1.928: 99.0259% ( 15) 00:18:57.951 1.928 - 1.935: 99.0976% ( 12) 00:18:57.951 1.935 - 1.943: 99.1992% ( 17) 00:18:57.951 1.943 - 1.950: 99.2650% ( 11) 00:18:57.951 1.950 - 1.966: 99.3247% ( 10) 00:18:57.951 1.966 - 1.981: 99.3546% ( 5) 00:18:57.951 1.981 - 1.996: 99.3606% ( 1) 00:18:57.951 2.149 - 2.164: 99.3666% ( 1) 00:18:57.951 2.194 - 2.210: 99.3725% ( 1) 00:18:57.951 2.270 - 2.286: 99.3785% ( 1) 00:18:57.951 3.581 - 3.596: 99.3845% ( 1) 00:18:57.951 3.672 - 3.688: 99.3905% ( 1) 00:18:57.951 3.688 - 3.703: 99.3964% ( 1) 00:18:57.951 3.825 - 3.840: 99.4024% ( 1) 00:18:57.951 3.901 - 3.931: 99.4084% ( 1) 00:18:57.951 3.962 - 3.992: 99.4144% ( 1) 00:18:57.951 4.206 - 4.236: 99.4203% ( 1) 00:18:57.951 4.541 - 4.571: 99.4263% ( 1) 00:18:57.951 4.663 - 4.693: 99.4323% ( 1) 00:18:57.951 4.693 - 4.724: 99.4383% ( 1) 00:18:57.951 4.968 - 4.998: 99.4442% ( 1) 00:18:57.951 5.029 - 5.059: 99.4502% ( 1) 00:18:57.951 5.242 - 5.272: 99.4622% ( 2) 00:18:57.951 5.394 - 5.425: 99.4681% ( 1) 00:18:57.951 5.486 - 5.516: 99.4741% ( 1) 00:18:57.951 5.547 - 5.577: 99.4801% ( 1) 00:18:57.951 5.730 - 5.760: 99.4861% ( 1) 00:18:57.951 6.034 - 6.065: 99.4921% ( 1) 00:18:57.951 6.217 - 6.248: 99.4980% ( 1) 00:18:57.951 6.339 - 6.370: 99.5040% ( 1) 00:18:57.951 6.979 - 7.010: 99.5100% ( 1) 00:18:57.951 7.162 - 7.192: 99.5160% ( 1) 00:18:57.951 7.314 - 7.345: 99.5219% ( 1) 00:18:57.951 7.345 - 7.375: 99.5279% ( 1) 00:18:57.951 7.680 - 7.710: 99.5339% ( 1) 00:18:57.951 7.985 - 8.046: 99.5399% ( 1) 00:18:57.951 8.168 - 8.229: 99.5458% ( 1) 00:18:57.951 8.594 - 8.655: 99.5518% ( 1) 00:18:57.951 8.716 - 8.777: 99.5578% ( 1) 00:18:57.951 9.509 - 9.570: 99.5638% ( 1) 00:18:57.951 13.044 - 13.105: 99.5697% ( 1) 00:18:57.951 38.522 - 38.766: 99.5757% ( 1) 00:18:57.951 3167.573 - 3183.177: 99.5817% ( 1) 00:18:57.951 3994.575 - 4025.783: 99.9880% ( 68) 00:18:57.951 4993.219 - 5024.427: 
100.0000% ( 2) 00:18:57.951 00:18:57.951 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:18:57.951 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:57.951 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:18:57.951 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:18:57.951 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:57.951 [ 00:18:57.951 { 00:18:57.951 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:57.951 "subtype": "Discovery", 00:18:57.951 "listen_addresses": [], 00:18:57.951 "allow_any_host": true, 00:18:57.951 "hosts": [] 00:18:57.951 }, 00:18:57.951 { 00:18:57.951 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:57.951 "subtype": "NVMe", 00:18:57.951 "listen_addresses": [ 00:18:57.951 { 00:18:57.951 "trtype": "VFIOUSER", 00:18:57.951 "adrfam": "IPv4", 00:18:57.951 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:57.951 "trsvcid": "0" 00:18:57.951 } 00:18:57.951 ], 00:18:57.951 "allow_any_host": true, 00:18:57.951 "hosts": [], 00:18:57.951 "serial_number": "SPDK1", 00:18:57.951 "model_number": "SPDK bdev Controller", 00:18:57.951 "max_namespaces": 32, 00:18:57.951 "min_cntlid": 1, 00:18:57.951 "max_cntlid": 65519, 00:18:57.951 "namespaces": [ 00:18:57.951 { 00:18:57.951 "nsid": 1, 00:18:57.951 "bdev_name": "Malloc1", 00:18:57.951 "name": "Malloc1", 00:18:57.951 "nguid": "D8EC2CCFEAE54C4DBFDF451EF7472E09", 00:18:57.951 "uuid": "d8ec2ccf-eae5-4c4d-bfdf-451ef7472e09" 00:18:57.951 }, 00:18:57.951 { 00:18:57.951 "nsid": 2, 00:18:57.951 "bdev_name": "Malloc3", 
00:18:57.951 "name": "Malloc3", 00:18:57.951 "nguid": "9BFF3728C6E240E9AAD7565D138407C4", 00:18:57.951 "uuid": "9bff3728-c6e2-40e9-aad7-565d138407c4" 00:18:57.951 } 00:18:57.951 ] 00:18:57.951 }, 00:18:57.951 { 00:18:57.951 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:57.951 "subtype": "NVMe", 00:18:57.951 "listen_addresses": [ 00:18:57.951 { 00:18:57.951 "trtype": "VFIOUSER", 00:18:57.951 "adrfam": "IPv4", 00:18:57.951 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:57.951 "trsvcid": "0" 00:18:57.951 } 00:18:57.951 ], 00:18:57.951 "allow_any_host": true, 00:18:57.951 "hosts": [], 00:18:57.952 "serial_number": "SPDK2", 00:18:57.952 "model_number": "SPDK bdev Controller", 00:18:57.952 "max_namespaces": 32, 00:18:57.952 "min_cntlid": 1, 00:18:57.952 "max_cntlid": 65519, 00:18:57.952 "namespaces": [ 00:18:57.952 { 00:18:57.952 "nsid": 1, 00:18:57.952 "bdev_name": "Malloc2", 00:18:57.952 "name": "Malloc2", 00:18:57.952 "nguid": "940026C9693F450CBC61C727BA707E57", 00:18:57.952 "uuid": "940026c9-693f-450c-bc61-c727ba707e57" 00:18:57.952 } 00:18:57.952 ] 00:18:57.952 } 00:18:57.952 ] 00:18:57.952 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:57.952 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:18:57.952 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=54940 00:18:57.952 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:57.952 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:18:57.952 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # 
'[' '!' -e /tmp/aer_touch_file ']' 00:18:57.952 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:57.952 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:18:57.952 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:57.952 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:18:58.210 [2024-12-05 12:02:32.177817] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:58.210 Malloc4 00:18:58.210 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:18:58.469 [2024-12-05 12:02:32.414602] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:58.469 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:58.469 Asynchronous Event Request test 00:18:58.469 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:58.469 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:58.469 Registering asynchronous event callbacks... 00:18:58.469 Starting namespace attribute notice tests for all controllers... 00:18:58.469 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:58.469 aer_cb - Changed Namespace 00:18:58.469 Cleaning up... 
00:18:58.469 [ 00:18:58.469 { 00:18:58.469 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:58.469 "subtype": "Discovery", 00:18:58.469 "listen_addresses": [], 00:18:58.469 "allow_any_host": true, 00:18:58.469 "hosts": [] 00:18:58.469 }, 00:18:58.469 { 00:18:58.469 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:58.469 "subtype": "NVMe", 00:18:58.469 "listen_addresses": [ 00:18:58.469 { 00:18:58.469 "trtype": "VFIOUSER", 00:18:58.469 "adrfam": "IPv4", 00:18:58.469 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:58.469 "trsvcid": "0" 00:18:58.469 } 00:18:58.469 ], 00:18:58.469 "allow_any_host": true, 00:18:58.469 "hosts": [], 00:18:58.469 "serial_number": "SPDK1", 00:18:58.469 "model_number": "SPDK bdev Controller", 00:18:58.469 "max_namespaces": 32, 00:18:58.469 "min_cntlid": 1, 00:18:58.469 "max_cntlid": 65519, 00:18:58.469 "namespaces": [ 00:18:58.469 { 00:18:58.469 "nsid": 1, 00:18:58.469 "bdev_name": "Malloc1", 00:18:58.469 "name": "Malloc1", 00:18:58.469 "nguid": "D8EC2CCFEAE54C4DBFDF451EF7472E09", 00:18:58.469 "uuid": "d8ec2ccf-eae5-4c4d-bfdf-451ef7472e09" 00:18:58.469 }, 00:18:58.469 { 00:18:58.469 "nsid": 2, 00:18:58.469 "bdev_name": "Malloc3", 00:18:58.469 "name": "Malloc3", 00:18:58.469 "nguid": "9BFF3728C6E240E9AAD7565D138407C4", 00:18:58.469 "uuid": "9bff3728-c6e2-40e9-aad7-565d138407c4" 00:18:58.469 } 00:18:58.469 ] 00:18:58.469 }, 00:18:58.469 { 00:18:58.469 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:58.469 "subtype": "NVMe", 00:18:58.469 "listen_addresses": [ 00:18:58.469 { 00:18:58.469 "trtype": "VFIOUSER", 00:18:58.469 "adrfam": "IPv4", 00:18:58.469 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:58.469 "trsvcid": "0" 00:18:58.469 } 00:18:58.469 ], 00:18:58.469 "allow_any_host": true, 00:18:58.469 "hosts": [], 00:18:58.469 "serial_number": "SPDK2", 00:18:58.470 "model_number": "SPDK bdev Controller", 00:18:58.470 "max_namespaces": 32, 00:18:58.470 "min_cntlid": 1, 00:18:58.470 "max_cntlid": 65519, 00:18:58.470 "namespaces": [ 
00:18:58.470 { 00:18:58.470 "nsid": 1, 00:18:58.470 "bdev_name": "Malloc2", 00:18:58.470 "name": "Malloc2", 00:18:58.470 "nguid": "940026C9693F450CBC61C727BA707E57", 00:18:58.470 "uuid": "940026c9-693f-450c-bc61-c727ba707e57" 00:18:58.470 }, 00:18:58.470 { 00:18:58.470 "nsid": 2, 00:18:58.470 "bdev_name": "Malloc4", 00:18:58.470 "name": "Malloc4", 00:18:58.470 "nguid": "C847CCDB8C8D48B7B675C444722BCE50", 00:18:58.470 "uuid": "c847ccdb-8c8d-48b7-b675-c444722bce50" 00:18:58.470 } 00:18:58.470 ] 00:18:58.470 } 00:18:58.470 ] 00:18:58.470 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 54940 00:18:58.470 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:18:58.470 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 47322 00:18:58.470 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 47322 ']' 00:18:58.470 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 47322 00:18:58.470 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:18:58.470 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:58.470 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 47322 00:18:58.728 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:58.728 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:58.728 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 47322' 00:18:58.728 killing process with pid 47322 00:18:58.728 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 47322 00:18:58.728 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 47322 00:18:58.986 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:58.986 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:58.986 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:18:58.986 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:18:58.986 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:18:58.986 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=55177 00:18:58.986 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 55177' 00:18:58.986 Process pid: 55177 00:18:58.986 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:18:58.986 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:58.986 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 55177 00:18:58.986 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 55177 ']' 00:18:58.986 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.986 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:58.986 12:02:32 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:58.986 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:58.986 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:58.986 [2024-12-05 12:02:32.996434] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:18:58.986 [2024-12-05 12:02:32.997272] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:18:58.986 [2024-12-05 12:02:32.997310] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:58.986 [2024-12-05 12:02:33.069333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:58.986 [2024-12-05 12:02:33.106421] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:58.986 [2024-12-05 12:02:33.106460] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:58.986 [2024-12-05 12:02:33.106468] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:58.986 [2024-12-05 12:02:33.106473] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:58.986 [2024-12-05 12:02:33.106478] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:58.986 [2024-12-05 12:02:33.108078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:58.986 [2024-12-05 12:02:33.108185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:58.986 [2024-12-05 12:02:33.108292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.986 [2024-12-05 12:02:33.108293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:58.986 [2024-12-05 12:02:33.176925] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:18:58.986 [2024-12-05 12:02:33.177276] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:18:58.986 [2024-12-05 12:02:33.177781] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:18:58.986 [2024-12-05 12:02:33.177975] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:18:58.986 [2024-12-05 12:02:33.178032] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:18:59.245 12:02:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:59.245 12:02:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:18:59.245 12:02:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:19:00.182 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:19:00.441 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:19:00.441 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:19:00.441 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:00.441 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:19:00.441 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:00.441 Malloc1 00:19:00.700 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:19:00.700 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:19:00.958 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:19:01.216 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:01.216 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:19:01.216 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:01.475 Malloc2 00:19:01.475 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:19:01.475 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:19:01.733 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:19:01.992 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:19:01.992 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 55177 00:19:01.992 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 55177 ']' 00:19:01.992 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 55177 00:19:01.992 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:19:01.992 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:01.992 12:02:36 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 55177 00:19:01.992 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:01.992 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:01.992 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 55177' 00:19:01.992 killing process with pid 55177 00:19:01.992 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 55177 00:19:01.992 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 55177 00:19:02.251 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:02.251 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:02.251 00:19:02.251 real 0m50.837s 00:19:02.251 user 3m16.699s 00:19:02.251 sys 0m3.250s 00:19:02.251 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:02.251 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:02.251 ************************************ 00:19:02.251 END TEST nvmf_vfio_user 00:19:02.251 ************************************ 00:19:02.251 12:02:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:02.251 12:02:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:02.251 12:02:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:02.251 12:02:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 
-- # set +x 00:19:02.251 ************************************ 00:19:02.251 START TEST nvmf_vfio_user_nvme_compliance 00:19:02.251 ************************************ 00:19:02.251 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:02.511 * Looking for test storage... 00:19:02.511 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:19:02.511 12:02:36 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:02.511 12:02:36 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:02.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.511 --rc genhtml_branch_coverage=1 00:19:02.511 --rc genhtml_function_coverage=1 00:19:02.511 --rc genhtml_legend=1 00:19:02.511 --rc geninfo_all_blocks=1 00:19:02.511 --rc geninfo_unexecuted_blocks=1 00:19:02.511 00:19:02.511 ' 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:02.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.511 --rc genhtml_branch_coverage=1 00:19:02.511 --rc genhtml_function_coverage=1 00:19:02.511 --rc genhtml_legend=1 00:19:02.511 --rc geninfo_all_blocks=1 00:19:02.511 --rc geninfo_unexecuted_blocks=1 00:19:02.511 00:19:02.511 ' 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:02.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.511 --rc genhtml_branch_coverage=1 00:19:02.511 --rc genhtml_function_coverage=1 00:19:02.511 --rc 
genhtml_legend=1 00:19:02.511 --rc geninfo_all_blocks=1 00:19:02.511 --rc geninfo_unexecuted_blocks=1 00:19:02.511 00:19:02.511 ' 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:02.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.511 --rc genhtml_branch_coverage=1 00:19:02.511 --rc genhtml_function_coverage=1 00:19:02.511 --rc genhtml_legend=1 00:19:02.511 --rc geninfo_all_blocks=1 00:19:02.511 --rc geninfo_unexecuted_blocks=1 00:19:02.511 00:19:02.511 ' 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:02.511 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:02.512 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.512 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.512 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.512 12:02:36 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:19:02.512 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.512 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:19:02.512 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:19:02.512 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:02.512 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:19:02.512 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@50 -- # : 0 00:19:02.512 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:19:02.512 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:19:02.512 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:19:02.512 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@27 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:02.512 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:02.512 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:19:02.512 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:19:02.512 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:19:02.512 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:19:02.512 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@54 -- # have_pci_nics=0 00:19:02.512 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:02.512 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:02.512 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:19:02.512 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:19:02.512 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:19:02.512 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=55759 00:19:02.512 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 55759' 00:19:02.512 Process pid: 55759 00:19:02.512 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:02.512 12:02:36 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 55759 00:19:02.512 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:19:02.512 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 55759 ']' 00:19:02.512 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:02.512 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:02.512 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:02.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:02.512 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:02.512 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:02.512 [2024-12-05 12:02:36.632926] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:19:02.512 [2024-12-05 12:02:36.632975] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:02.771 [2024-12-05 12:02:36.709354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:02.771 [2024-12-05 12:02:36.748470] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:02.771 [2024-12-05 12:02:36.748508] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:02.771 [2024-12-05 12:02:36.748517] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:02.771 [2024-12-05 12:02:36.748524] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:02.771 [2024-12-05 12:02:36.748529] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:02.771 [2024-12-05 12:02:36.749845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:02.771 [2024-12-05 12:02:36.749952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.771 [2024-12-05 12:02:36.749953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:02.771 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:02.771 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:19:02.771 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:19:03.708 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:03.708 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:19:03.708 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:03.708 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.708 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:03.708 12:02:37 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.708 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:19:03.708 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:03.708 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.708 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:03.967 malloc0 00:19:03.967 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.967 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:19:03.967 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.967 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:03.967 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.967 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:03.967 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.967 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:03.967 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.967 12:02:37 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:03.967 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.967 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:03.967 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.967 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:19:03.967 00:19:03.967 00:19:03.967 CUnit - A unit testing framework for C - Version 2.1-3 00:19:03.967 http://cunit.sourceforge.net/ 00:19:03.967 00:19:03.967 00:19:03.967 Suite: nvme_compliance 00:19:03.967 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-05 12:02:38.098838] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:03.967 [2024-12-05 12:02:38.100198] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:19:03.967 [2024-12-05 12:02:38.100214] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:19:03.967 [2024-12-05 12:02:38.100220] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:19:03.967 [2024-12-05 12:02:38.103873] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:03.967 passed 00:19:04.226 Test: admin_identify_ctrlr_verify_fused ...[2024-12-05 12:02:38.180429] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:04.226 [2024-12-05 12:02:38.183443] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:04.226 passed 00:19:04.226 Test: admin_identify_ns ...[2024-12-05 12:02:38.264630] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:04.226 [2024-12-05 12:02:38.324380] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:19:04.226 [2024-12-05 12:02:38.332377] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:19:04.226 [2024-12-05 12:02:38.353461] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:04.226 passed 00:19:04.484 Test: admin_get_features_mandatory_features ...[2024-12-05 12:02:38.430233] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:04.484 [2024-12-05 12:02:38.433254] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:04.484 passed 00:19:04.484 Test: admin_get_features_optional_features ...[2024-12-05 12:02:38.510782] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:04.484 [2024-12-05 12:02:38.513796] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:04.484 passed 00:19:04.484 Test: admin_set_features_number_of_queues ...[2024-12-05 12:02:38.588688] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:04.744 [2024-12-05 12:02:38.704469] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:04.744 passed 00:19:04.744 Test: admin_get_log_page_mandatory_logs ...[2024-12-05 12:02:38.777173] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:04.744 [2024-12-05 12:02:38.780197] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:04.744 passed 00:19:04.744 Test: admin_get_log_page_with_lpo ...[2024-12-05 12:02:38.856859] 
vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:04.744 [2024-12-05 12:02:38.924382] ctrlr.c:2699:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:19:04.744 [2024-12-05 12:02:38.937452] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:05.004 passed 00:19:05.004 Test: fabric_property_get ...[2024-12-05 12:02:39.013077] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:05.004 [2024-12-05 12:02:39.014324] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:19:05.004 [2024-12-05 12:02:39.016094] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:05.004 passed 00:19:05.004 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-05 12:02:39.092589] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:05.004 [2024-12-05 12:02:39.093830] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:19:05.004 [2024-12-05 12:02:39.095609] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:05.004 passed 00:19:05.004 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-05 12:02:39.171625] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:05.263 [2024-12-05 12:02:39.259377] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:05.263 [2024-12-05 12:02:39.275376] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:05.263 [2024-12-05 12:02:39.280456] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:05.263 passed 00:19:05.263 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-05 12:02:39.356308] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:05.263 [2024-12-05 
12:02:39.357545] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:19:05.263 [2024-12-05 12:02:39.359335] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:05.263 passed 00:19:05.263 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-05 12:02:39.434149] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:05.522 [2024-12-05 12:02:39.509383] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:05.522 [2024-12-05 12:02:39.533374] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:05.522 [2024-12-05 12:02:39.538466] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:05.522 passed 00:19:05.522 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-05 12:02:39.617041] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:05.522 [2024-12-05 12:02:39.618281] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:19:05.522 [2024-12-05 12:02:39.618307] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:19:05.522 [2024-12-05 12:02:39.620066] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:05.522 passed 00:19:05.522 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-05 12:02:39.695692] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:05.781 [2024-12-05 12:02:39.786373] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:19:05.781 [2024-12-05 12:02:39.794372] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:19:05.781 [2024-12-05 12:02:39.802374] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:19:05.781 [2024-12-05 
12:02:39.810373] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:19:05.782 [2024-12-05 12:02:39.839466] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:05.782 passed 00:19:05.782 Test: admin_create_io_sq_verify_pc ...[2024-12-05 12:02:39.914183] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:05.782 [2024-12-05 12:02:39.930380] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:19:05.782 [2024-12-05 12:02:39.948353] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:05.782 passed 00:19:06.041 Test: admin_create_io_qp_max_qps ...[2024-12-05 12:02:40.027902] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:06.976 [2024-12-05 12:02:41.133377] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:19:07.544 [2024-12-05 12:02:41.529631] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:07.544 passed 00:19:07.544 Test: admin_create_io_sq_shared_cq ...[2024-12-05 12:02:41.606566] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:07.544 [2024-12-05 12:02:41.739374] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:07.803 [2024-12-05 12:02:41.776439] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:07.803 passed 00:19:07.803 00:19:07.803 Run Summary: Type Total Ran Passed Failed Inactive 00:19:07.803 suites 1 1 n/a 0 0 00:19:07.803 tests 18 18 18 0 0 00:19:07.803 asserts 360 360 360 0 n/a 00:19:07.803 00:19:07.803 Elapsed time = 1.512 seconds 00:19:07.803 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 55759 00:19:07.803 12:02:41 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 55759 ']' 00:19:07.803 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 55759 00:19:07.803 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:19:07.803 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:07.803 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 55759 00:19:07.803 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:07.803 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:07.803 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 55759' 00:19:07.803 killing process with pid 55759 00:19:07.803 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 55759 00:19:07.803 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 55759 00:19:08.062 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:19:08.062 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:19:08.062 00:19:08.062 real 0m5.679s 00:19:08.062 user 0m15.874s 00:19:08.062 sys 0m0.519s 00:19:08.062 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:08.062 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@10 -- # set +x 00:19:08.062 ************************************ 00:19:08.062 END TEST nvmf_vfio_user_nvme_compliance 00:19:08.062 ************************************ 00:19:08.062 12:02:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:08.062 12:02:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:08.062 12:02:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:08.062 12:02:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:08.062 ************************************ 00:19:08.062 START TEST nvmf_vfio_user_fuzz 00:19:08.062 ************************************ 00:19:08.062 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:08.062 * Looking for test storage... 
00:19:08.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:08.062 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:08.062 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:19:08.062 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:08.322 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:19:08.323 12:02:42 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:08.323 12:02:42 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:08.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.323 --rc genhtml_branch_coverage=1 00:19:08.323 --rc genhtml_function_coverage=1 00:19:08.323 --rc genhtml_legend=1 00:19:08.323 --rc geninfo_all_blocks=1 00:19:08.323 --rc geninfo_unexecuted_blocks=1 00:19:08.323 00:19:08.323 ' 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:08.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.323 --rc genhtml_branch_coverage=1 00:19:08.323 --rc genhtml_function_coverage=1 00:19:08.323 --rc genhtml_legend=1 00:19:08.323 --rc geninfo_all_blocks=1 00:19:08.323 --rc geninfo_unexecuted_blocks=1 00:19:08.323 00:19:08.323 ' 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:08.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.323 --rc genhtml_branch_coverage=1 00:19:08.323 --rc genhtml_function_coverage=1 00:19:08.323 --rc genhtml_legend=1 00:19:08.323 --rc geninfo_all_blocks=1 00:19:08.323 --rc geninfo_unexecuted_blocks=1 00:19:08.323 00:19:08.323 ' 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:08.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.323 --rc genhtml_branch_coverage=1 00:19:08.323 --rc genhtml_function_coverage=1 00:19:08.323 --rc genhtml_legend=1 00:19:08.323 --rc geninfo_all_blocks=1 00:19:08.323 --rc geninfo_unexecuted_blocks=1 00:19:08.323 00:19:08.323 ' 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 
00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:08.323 12:02:42 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.323 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:19:08.324 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.324 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:19:08.324 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:19:08.324 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:08.324 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:19:08.324 12:02:42 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@50 -- # : 0 00:19:08.324 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:19:08.324 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:19:08.324 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:19:08.324 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:08.324 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:08.324 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:19:08.324 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:19:08.324 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:19:08.324 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:19:08.324 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@54 -- # have_pci_nics=0 00:19:08.324 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:08.324 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:08.324 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:08.324 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:19:08.324 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:08.324 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:08.324 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:19:08.324 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=56829 00:19:08.324 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 56829' 00:19:08.324 Process pid: 56829 00:19:08.324 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:08.324 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 56829 00:19:08.324 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:08.324 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 56829 ']' 00:19:08.324 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.324 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:08.324 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:08.324 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:08.324 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:08.583 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:08.583 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:19:08.583 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:19:09.521 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:09.521 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.521 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:09.521 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.521 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:19:09.521 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:09.521 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.521 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:09.521 malloc0 00:19:09.521 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.521 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:19:09.521 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.521 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:09.521 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.521 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:09.521 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.521 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:09.521 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.521 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:09.521 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.521 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:09.521 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.521 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:19:09.521 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:19:41.599 Fuzzing completed. 
Shutting down the fuzz application 00:19:41.599 00:19:41.599 Dumping successful admin opcodes: 00:19:41.599 9, 10, 00:19:41.599 Dumping successful io opcodes: 00:19:41.599 0, 00:19:41.599 NS: 0x20000081ef00 I/O qp, Total commands completed: 1100946, total successful commands: 4331, random_seed: 889139904 00:19:41.599 NS: 0x20000081ef00 admin qp, Total commands completed: 272320, total successful commands: 64, random_seed: 1554270528 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 56829 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 56829 ']' 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 56829 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56829 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:41.599 12:03:14 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56829' 00:19:41.599 killing process with pid 56829 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 56829 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 56829 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:19:41.599 00:19:41.599 real 0m32.229s 00:19:41.599 user 0m33.837s 00:19:41.599 sys 0m26.844s 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:41.599 ************************************ 00:19:41.599 END TEST nvmf_vfio_user_fuzz 00:19:41.599 ************************************ 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:41.599 ************************************ 00:19:41.599 START TEST nvmf_auth_target 00:19:41.599 ************************************ 00:19:41.599 12:03:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:41.599 * Looking for test storage... 00:19:41.599 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 
gt=0 eq=0 v 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:41.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.599 --rc genhtml_branch_coverage=1 00:19:41.599 --rc genhtml_function_coverage=1 00:19:41.599 --rc genhtml_legend=1 00:19:41.599 --rc geninfo_all_blocks=1 00:19:41.599 --rc geninfo_unexecuted_blocks=1 00:19:41.599 00:19:41.599 ' 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:41.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.599 --rc genhtml_branch_coverage=1 00:19:41.599 --rc genhtml_function_coverage=1 00:19:41.599 --rc genhtml_legend=1 00:19:41.599 --rc geninfo_all_blocks=1 00:19:41.599 --rc geninfo_unexecuted_blocks=1 00:19:41.599 00:19:41.599 ' 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:41.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.599 --rc genhtml_branch_coverage=1 00:19:41.599 --rc genhtml_function_coverage=1 00:19:41.599 --rc genhtml_legend=1 00:19:41.599 --rc geninfo_all_blocks=1 00:19:41.599 --rc geninfo_unexecuted_blocks=1 00:19:41.599 00:19:41.599 ' 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:41.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.599 --rc genhtml_branch_coverage=1 00:19:41.599 --rc genhtml_function_coverage=1 00:19:41.599 --rc genhtml_legend=1 00:19:41.599 --rc geninfo_all_blocks=1 00:19:41.599 --rc geninfo_unexecuted_blocks=1 00:19:41.599 00:19:41.599 ' 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:41.599 12:03:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:41.599 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:41.600 
12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
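The PATH echoed above contains the same /opt/golangci, /opt/protoc, and /opt/go entries many times because paths/export.sh prepends them on every source. This is harmless, just noisy; a hedged sketch of how such a PATH could be deduplicated (the helper name is hypothetical, not part of SPDK's scripts):

```shell
# Illustrative only: keep the first occurrence of each colon-separated
# entry, dropping the repeats that accumulate when paths/export.sh is
# sourced more than once.
dedup_path() {
    printf '%s' "$1" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
}
dedup_path "/opt/go/1.21.1/bin:/usr/bin:/opt/go/1.21.1/bin:/bin"
```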
nvmf/common.sh@50 -- # : 0 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:19:41.600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 
00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # remove_target_ns 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # xtrace_disable 00:19:41.600 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.877 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:46.877 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@131 -- # pci_devs=() 00:19:46.877 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@131 -- # local -a pci_devs 00:19:46.877 
12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@132 -- # pci_net_devs=() 00:19:46.877 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:19:46.877 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@133 -- # pci_drivers=() 00:19:46.877 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@133 -- # local -A pci_drivers 00:19:46.877 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@135 -- # net_devs=() 00:19:46.877 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@135 -- # local -ga net_devs 00:19:46.877 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@136 -- # e810=() 00:19:46.877 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@136 -- # local -ga e810 00:19:46.877 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@137 -- # x722=() 00:19:46.877 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@137 -- # local -ga x722 00:19:46.877 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@138 -- # mlx=() 00:19:46.877 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@138 -- # local -ga mlx 00:19:46.877 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:46.877 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:46.877 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:46.877 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:46.877 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:46.877 12:03:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:46.877 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:46.877 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:46.877 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:46.877 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:46.877 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:46.877 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:46.877 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:19:46.877 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:19:46.877 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:19:46.877 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:19:46.877 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:19:46.877 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:19:46.877 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:19:46.877 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:46.877 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:46.877 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:19:46.877 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:19:46.877 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:46.877 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:46.877 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:19:46.877 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:46.878 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:46.878 Found net devices under 0000:86:00.0: cvl_0_0 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:46.878 Found net devices under 0000:86:00.1: cvl_0_1 00:19:46.878 12:03:20 
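The discovery phase above matches each PCI device's vendor:device pair against the e810/x722/mlx ID lists built in nvmf/common.sh (0x8086:0x159b lands both ports in the e810 bucket here), then finds their net interfaces under /sys/bus/pci/devices/<addr>/net. A standalone sketch of that classification step, with the IDs copied from the trace; the function name is illustrative:

```shell
# Classify a "vendor:device" pair the way the e810/x722/mlx arrays in the
# trace above do. Device IDs are taken from nvmf/common.sh lines 141-160.
classify_nic() {
    case "$1" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;
        0x8086:0x37d2)               echo x722 ;;
        0x15b3:*)                    echo mlx  ;;   # any Mellanox device ID
        *)                           echo unknown ;;
    esac
}
classify_nic 0x8086:0x159b   # the two ports found in this run
```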
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # is_hw=yes 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@257 -- # create_target_ns 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 
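The `local -n ns=NVMF_TARGET_NS_CMD` lines in the trace are bash namerefs: set_up, set_ip, and ping_ip take an optional variable *name*, and when it is non-empty they prefix the command with `ip netns exec nvmf_ns_spdk` resolved through that array. A minimal reproduction of the pattern (the prefix array here is just `echo`, so nothing real is executed and no namespace is needed):

```shell
# Nameref dispatch as used by set_up/ping_ip above: run a command either
# directly or through a prefix stored in a named array. The demo prefix is
# `echo` so the sketch is side-effect free.
NVMF_TARGET_NS_CMD=(echo "[in-namespace]")
run_cmd() {
    local in_ns=$1; shift
    if [[ -n $in_ns ]]; then
        local -n ns=$in_ns      # nameref: resolves to the array named above
        "${ns[@]}" "$@"
    else
        "$@"
    fi
}
run_cmd NVMF_TARGET_NS_CMD ip link set lo up   # prints: [in-namespace] ip link set lo up
run_cmd "" echo direct                         # prints: direct
```

In the real helpers the resolved prefix is `ip netns exec nvmf_ns_spdk`, which is why the same functions configure both the host-side and namespace-side interfaces.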
00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@28 -- # local -g _dev 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@44 -- # ips=() 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@51 -- # [[ tcp == 
tcp ]] 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@11 -- # local val=167772161 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:19:46.878 12:03:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:19:46.878 10.0.0.1 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@11 -- # local val=167772162 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@208 -- # ip netns exec 
nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:19:46.878 10.0.0.2 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:46.878 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:19:46.879 12:03:20 
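The val_to_ip calls traced above turn the ip_pool counter (167772161 = 0x0A000001) into the dotted-quad addresses that `ip addr add` and the ifalias files receive. A self-contained version of that conversion, deriving the four octets by bit-shifting (the shift-based unpacking is my reconstruction of how setup.sh obtains the printf arguments):

```shell
# Convert a 32-bit integer to dotted-quad form, matching the val_to_ip
# output seen in the trace (167772161 -> 10.0.0.1, 167772162 -> 10.0.0.2).
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $((val >> 24 & 0xff)) $((val >> 16 & 0xff)) \
        $((val >> 8 & 0xff))  $((val & 0xff))
}
val_to_ip 167772161   # 10.0.0.1 (initiator side)
val_to_ip 167772162   # 10.0.0.2 (target side, inside nvmf_ns_spdk)
```

Incrementing the pool by 2 per interface pair, as `(( _dev++, ip_pool += 2 ))` does later in the trace, keeps each pair on consecutive addresses in the same /24.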
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@38 -- # ping_ips 1 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:19:46.879 12:03:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@107 -- # local dev=initiator0 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:19:46.879 12:03:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:19:46.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:46.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.506 ms 00:19:46.879 00:19:46.879 --- 10.0.0.1 ping statistics --- 00:19:46.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.879 rtt min/avg/max/mdev = 0.506/0.506/0.506/0.000 ms 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@107 -- # local dev=target0 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # eval 'ip netns exec 
nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:19:46.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:46.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:19:46.879 00:19:46.879 --- 10.0.0.2 ping statistics --- 00:19:46.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.879 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # (( pair++ )) 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # return 0 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@107 -- # local dev=initiator0 
00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@107 -- # local dev=initiator1 00:19:46.879 12:03:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:19:46.879 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # return 1 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # dev= 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@169 -- # return 0 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@107 -- # local dev=target0 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/setup.sh@168 -- # dev=cvl_0_1 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # get_net_dev target1 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@107 -- # local dev=target1 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n '' ]] 
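The `get_ip_address`/`get_net_dev` trace above resolves a logical device name (`initiator0`, `target0`, ...) to a kernel interface (`cvl_0_0`, `cvl_0_1`) and then reads that interface's IP out of `/sys/class/net/<dev>/ifalias`, running the `cat` inside the `nvmf_ns_spdk` namespace for target devices. A minimal sketch of that lookup; the device map and sysfs root are stand-ins for the test rig's state, and namespace handling is omitted:

```python
import os

def get_ip_address(dev, net_dev_map, sysfs_root="/sys/class/net"):
    """Resolve a logical device (e.g. 'initiator0') to its IP address.

    net_dev_map mirrors the shell state the trace shows
    (initiator0 -> cvl_0_0, target0 -> cvl_0_1); a missing entry
    reproduces the empty NVMF_SECOND_*_IP results for the *1 devices.
    """
    iface = net_dev_map.get(dev)
    if not iface:
        # Matches the 'dev=' / 'return 0' path taken for initiator1/target1.
        return ""
    path = os.path.join(sysfs_root, iface, "ifalias")
    try:
        with open(path) as f:
            return f.read().strip()
    except FileNotFoundError:
        return ""
```

With `ifalias` set to `10.0.0.1` on `cvl_0_0`, this yields the same `NVMF_FIRST_INITIATOR_IP=10.0.0.1` assignment the trace records, and an empty string for the unconfigured second pair.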
00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # return 1 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # dev= 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@169 -- # return 0 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # nvmfpid=65763 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # waitforlisten 65763 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -L nvmf_auth 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 65763 ']' 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:46.880 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.448 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:47.448 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:47.448 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:19:47.448 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:47.448 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.708 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:47.708 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=65883 00:19:47.708 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:47.708 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; 
cleanup' SIGINT SIGTERM EXIT 00:19:47.708 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:47.708 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:19:47.708 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:47.708 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:19:47.708 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=null 00:19:47.708 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=48 00:19:47.708 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:47.708 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=30a546d6a1b515c7ab9dbe779960de3c84c47212db62e272 00:19:47.708 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:19:47.708 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.Po1 00:19:47.708 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 30a546d6a1b515c7ab9dbe779960de3c84c47212db62e272 0 00:19:47.708 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 30a546d6a1b515c7ab9dbe779960de3c84c47212db62e272 0 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=30a546d6a1b515c7ab9dbe779960de3c84c47212db62e272 00:19:47.709 12:03:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=0 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.Po1 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.Po1 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Po1 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha512 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=64 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=a8f1b93d5506dce6ab2a8eac1d0894bf2753058b77b998d0f4cab85eaaebb1e9 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.sNs 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key a8f1b93d5506dce6ab2a8eac1d0894bf2753058b77b998d0f4cab85eaaebb1e9 3 00:19:47.709 12:03:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 a8f1b93d5506dce6ab2a8eac1d0894bf2753058b77b998d0f4cab85eaaebb1e9 3 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=a8f1b93d5506dce6ab2a8eac1d0894bf2753058b77b998d0f4cab85eaaebb1e9 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=3 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.sNs 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.sNs 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.sNs 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha256 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=32 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:47.709 12:03:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=95dedf5996bd61a1002d15ec575b265a 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.sGX 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 95dedf5996bd61a1002d15ec575b265a 1 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 95dedf5996bd61a1002d15ec575b265a 1 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=95dedf5996bd61a1002d15ec575b265a 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=1 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.sGX 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.sGX 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.sGX 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:47.709 12:03:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha384 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=48 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=52dce977b46e78d5d5e82e237e5f42b99064155f15e5f3a1 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.eQ7 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 52dce977b46e78d5d5e82e237e5f42b99064155f15e5f3a1 2 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 52dce977b46e78d5d5e82e237e5f42b99064155f15e5f3a1 2 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=52dce977b46e78d5d5e82e237e5f42b99064155f15e5f3a1 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=2 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.eQ7 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.eQ7 00:19:47.709 12:03:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.eQ7 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha384 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=48 00:19:47.709 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:47.969 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=66d585deedb3fc74767df0a519e48c77001d22ff5225741d 00:19:47.969 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:19:47.969 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.GuI 00:19:47.969 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 66d585deedb3fc74767df0a519e48c77001d22ff5225741d 2 00:19:47.969 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 66d585deedb3fc74767df0a519e48c77001d22ff5225741d 2 00:19:47.969 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:19:47.969 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:19:47.969 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # 
key=66d585deedb3fc74767df0a519e48c77001d22ff5225741d 00:19:47.969 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=2 00:19:47.969 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:19:47.969 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.GuI 00:19:47.969 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.GuI 00:19:47.969 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.GuI 00:19:47.969 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:19:47.969 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:19:47.969 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:47.969 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:19:47.969 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha256 00:19:47.969 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=32 00:19:47.969 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:47.969 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=8e9fc9cb9a10b4c1c0ae66199de4ad0d 00:19:47.969 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:19:47.969 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.EWe 00:19:47.969 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 8e9fc9cb9a10b4c1c0ae66199de4ad0d 1 
00:19:47.969 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 8e9fc9cb9a10b4c1c0ae66199de4ad0d 1 00:19:47.969 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:19:47.969 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:19:47.969 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=8e9fc9cb9a10b4c1c0ae66199de4ad0d 00:19:47.969 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=1 00:19:47.969 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:19:47.969 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.EWe 00:19:47.969 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.EWe 00:19:47.969 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.EWe 00:19:47.969 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:19:47.969 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:19:47.969 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:47.969 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:19:47.969 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha512 00:19:47.969 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=64 00:19:47.969 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:47.969 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@529 -- # key=17066ecd0db444c36f7fbda212d36a4112706baebe8ba91fd75f16b44f71a95d 00:19:47.969 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:19:47.969 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.xIs 00:19:47.969 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 17066ecd0db444c36f7fbda212d36a4112706baebe8ba91fd75f16b44f71a95d 3 00:19:47.969 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 17066ecd0db444c36f7fbda212d36a4112706baebe8ba91fd75f16b44f71a95d 3 00:19:47.969 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:19:47.969 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:19:47.969 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=17066ecd0db444c36f7fbda212d36a4112706baebe8ba91fd75f16b44f71a95d 00:19:47.969 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=3 00:19:47.969 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:19:47.969 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.xIs 00:19:47.969 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.xIs 00:19:47.969 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.xIs 00:19:47.969 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:19:47.969 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 65763 00:19:47.969 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 65763 ']' 
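The repeated `gen_dhchap_key` blocks above each draw random key material with `xxd -p -c0 -l <len/2> /dev/urandom`, then hand the hex string and a digest id to `format_dhchap_key`, whose embedded `python -` step produces the secret written to the `/tmp/spdk.key-*` files. A sketch of that formatting step, assuming the standard nvme-cli/SPDK DHHC-1 convention (base64 of the raw key bytes with their CRC-32 appended little-endian, tagged with the hash id: 00 null, 01 sha256, 02 sha384, 03 sha512):

```python
import base64
import zlib

# Digest ids as used in the trace's digests associative array.
DIGEST_IDS = {"null": 0, "sha256": 1, "sha384": 2, "sha512": 3}

def format_dhchap_key(hex_key, digest):
    """Wrap raw hex key material in a DHHC-1 secret representation.

    Assumes the nvme-cli/SPDK convention: DHHC-1:<hh>:<base64(key ||
    crc32(key) little-endian)>: where <hh> is the two-digit hash id.
    """
    raw = bytes.fromhex(hex_key)
    crc = zlib.crc32(raw).to_bytes(4, "little")
    b64 = base64.b64encode(raw + crc).decode()
    return f"DHHC-1:{DIGEST_IDS[digest]:02x}:{b64}:"
```

Applied to the 48-hex-digit null-digest key above, this yields a `DHHC-1:00:...:` string of the kind stored in `/tmp/spdk.key-null.Po1`; the sha512 keys get a `DHHC-1:03:` prefix.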
00:19:47.969 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:47.969 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:47.969 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:47.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:47.969 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:47.969 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.228 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:48.229 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:48.229 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 65883 /var/tmp/host.sock 00:19:48.229 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 65883 ']' 00:19:48.229 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:19:48.229 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:48.229 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:48.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:19:48.229 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:48.229 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.487 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:48.487 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:48.487 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:48.487 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.487 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.487 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.487 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:48.487 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Po1 00:19:48.487 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.487 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.487 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.487 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Po1 00:19:48.487 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Po1 00:19:48.746 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.sNs ]]
00:19:48.746 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sNs
00:19:48.746 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:48.746 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:48.746 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:48.746 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sNs
00:19:48.746 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sNs
00:19:48.746 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:19:48.746 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.sGX
00:19:48.746 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:48.746 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:48.746 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:48.746 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.sGX
00:19:48.746 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.sGX
00:19:49.005 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.eQ7 ]]
00:19:49.005 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.eQ7
00:19:49.005 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:49.005 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:49.005 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:49.005 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.eQ7
00:19:49.005 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.eQ7
00:19:49.263 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:19:49.263 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.GuI
00:19:49.263 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:49.263 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:49.263 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:49.263 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.GuI
00:19:49.263 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.GuI
00:19:49.522 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.EWe ]]
00:19:49.522 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.EWe
00:19:49.522 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:49.522 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:49.522 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:49.522 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.EWe
00:19:49.522 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.EWe
00:19:49.522 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:19:49.522 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.xIs
00:19:49.522 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:49.522 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:49.522 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:49.522 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.xIs
00:19:49.522 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.xIs
00:19:49.780 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]]
00:19:49.780 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:19:49.780 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:19:49.780 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:49.780 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:19:49.780 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:19:50.038 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0
00:19:50.038 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:50.038 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:19:50.038 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:19:50.038 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:19:50.038 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:50.038 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:50.038 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:50.038 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:50.038 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:50.038 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:50.038 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:50.038 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:50.295
00:19:50.295 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:50.295 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:50.295 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:50.554 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:50.554 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:50.554 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:50.554 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:50.554 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:50.554 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:50.554 {
00:19:50.554 "cntlid": 1,
00:19:50.554 "qid": 0,
00:19:50.554 "state": "enabled",
00:19:50.554 "thread": "nvmf_tgt_poll_group_000",
00:19:50.554 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:19:50.554 "listen_address": {
00:19:50.554 "trtype": "TCP",
00:19:50.554 "adrfam": "IPv4",
00:19:50.554 "traddr": "10.0.0.2",
00:19:50.554 "trsvcid": "4420"
00:19:50.554 },
00:19:50.554 "peer_address": {
00:19:50.554 "trtype": "TCP",
00:19:50.554 "adrfam": "IPv4",
00:19:50.554 "traddr": "10.0.0.1",
00:19:50.554 "trsvcid": "45768"
00:19:50.554 },
00:19:50.554 "auth": {
00:19:50.554 "state": "completed",
00:19:50.554 "digest": "sha256",
00:19:50.554 "dhgroup": "null"
00:19:50.554 }
00:19:50.554 }
00:19:50.554 ]'
00:19:50.554 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:50.554 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:50.554 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:50.554 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:19:50.554 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:50.554 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:50.554 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:50.554 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:50.811 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzBhNTQ2ZDZhMWI1MTVjN2FiOWRiZTc3OTk2MGRlM2M4NGM0NzIxMmRiNjJlMjcyIJ7+pQ==: --dhchap-ctrl-secret DHHC-1:03:YThmMWI5M2Q1NTA2ZGNlNmFiMmE4ZWFjMWQwODk0YmYyNzUzMDU4Yjc3Yjk5OGQwZjRjYWI4NWVhYWViYjFlOQ77tv4=:
00:19:50.811 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzBhNTQ2ZDZhMWI1MTVjN2FiOWRiZTc3OTk2MGRlM2M4NGM0NzIxMmRiNjJlMjcyIJ7+pQ==: --dhchap-ctrl-secret DHHC-1:03:YThmMWI5M2Q1NTA2ZGNlNmFiMmE4ZWFjMWQwODk0YmYyNzUzMDU4Yjc3Yjk5OGQwZjRjYWI4NWVhYWViYjFlOQ77tv4=:
00:19:51.380 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:51.381 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:51.381 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:19:51.381 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:51.381 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:51.381 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:51.381 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:51.381 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:19:51.381 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:19:51.637 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1
00:19:51.637 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:51.637 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:19:51.637 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:19:51.637 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:19:51.637 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:51.637 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:51.637 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:51.637 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:51.637 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:51.637 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:51.637 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:51.637 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:51.894
00:19:51.894 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:51.894 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:51.894 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:51.894 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:51.894 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:51.894 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:51.894 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:52.163 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:52.163 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:52.163 {
00:19:52.163 "cntlid": 3,
00:19:52.163 "qid": 0,
00:19:52.163 "state": "enabled",
00:19:52.163 "thread": "nvmf_tgt_poll_group_000",
00:19:52.163 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:19:52.163 "listen_address": {
00:19:52.163 "trtype": "TCP",
00:19:52.163 "adrfam": "IPv4",
00:19:52.163 "traddr": "10.0.0.2",
00:19:52.163 "trsvcid": "4420"
00:19:52.163 },
00:19:52.163 "peer_address": {
00:19:52.163 "trtype": "TCP",
00:19:52.163 "adrfam": "IPv4",
00:19:52.163 "traddr": "10.0.0.1",
00:19:52.163 "trsvcid": "45786"
00:19:52.163 },
00:19:52.163 "auth": {
00:19:52.163 "state": "completed",
00:19:52.163 "digest": "sha256",
00:19:52.163 "dhgroup": "null"
00:19:52.163 }
00:19:52.163 }
00:19:52.163 ]'
00:19:52.163 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:52.163 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:52.163 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:52.163 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:19:52.163 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:52.163 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:52.163 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:52.163 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:52.420 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTVkZWRmNTk5NmJkNjFhMTAwMmQxNWVjNTc1YjI2NWFEY+3Z: --dhchap-ctrl-secret DHHC-1:02:NTJkY2U5NzdiNDZlNzhkNWQ1ZTgyZTIzN2U1ZjQyYjk5MDY0MTU1ZjE1ZTVmM2Exmzo2Ow==:
00:19:52.420 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTVkZWRmNTk5NmJkNjFhMTAwMmQxNWVjNTc1YjI2NWFEY+3Z: --dhchap-ctrl-secret DHHC-1:02:NTJkY2U5NzdiNDZlNzhkNWQ1ZTgyZTIzN2U1ZjQyYjk5MDY0MTU1ZjE1ZTVmM2Exmzo2Ow==:
00:19:52.987 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:52.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:52.987 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:19:52.987 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:52.987 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:52.987 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:52.987 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:52.987 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:19:52.987 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:19:53.246 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2
00:19:53.246 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:53.246 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:19:53.246 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:19:53.246 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:19:53.246 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:53.246 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:53.246 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:53.246 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:53.246 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:53.246 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:53.246 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:53.246 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:53.505
00:19:53.505 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:53.505 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:53.505 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:53.505 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:53.505 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:53.505 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:53.505 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:53.764 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:53.764 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:53.764 {
00:19:53.764 "cntlid": 5,
00:19:53.764 "qid": 0,
00:19:53.764 "state": "enabled",
00:19:53.764 "thread": "nvmf_tgt_poll_group_000",
00:19:53.764 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:19:53.764 "listen_address": {
00:19:53.764 "trtype": "TCP",
00:19:53.764 "adrfam": "IPv4",
00:19:53.764 "traddr": "10.0.0.2",
00:19:53.764 "trsvcid": "4420"
00:19:53.764 },
00:19:53.764 "peer_address": {
00:19:53.764 "trtype": "TCP",
00:19:53.764 "adrfam": "IPv4",
00:19:53.764 "traddr": "10.0.0.1",
00:19:53.764 "trsvcid": "45814"
00:19:53.764 },
00:19:53.764 "auth": {
00:19:53.764 "state": "completed",
00:19:53.764 "digest": "sha256",
00:19:53.764 "dhgroup": "null"
00:19:53.764 }
00:19:53.764 }
00:19:53.764 ]'
00:19:53.764 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:53.764 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:53.764 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:53.764 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:19:53.764 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:53.764 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:53.764 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:53.764 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:54.023 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjZkNTg1ZGVlZGIzZmM3NDc2N2RmMGE1MTllNDhjNzcwMDFkMjJmZjUyMjU3NDFkNmmLaw==: --dhchap-ctrl-secret DHHC-1:01:OGU5ZmM5Y2I5YTEwYjRjMWMwYWU2NjE5OWRlNGFkMGT1IRUP:
00:19:54.023 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjZkNTg1ZGVlZGIzZmM3NDc2N2RmMGE1MTllNDhjNzcwMDFkMjJmZjUyMjU3NDFkNmmLaw==: --dhchap-ctrl-secret DHHC-1:01:OGU5ZmM5Y2I5YTEwYjRjMWMwYWU2NjE5OWRlNGFkMGT1IRUP:
00:19:54.591 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:54.591 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:54.591 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:19:54.591 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:54.591 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:54.591 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:54.591 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:54.591 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:19:54.591 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:19:54.851 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3
00:19:54.851 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:54.851 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:19:54.851 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:19:54.851 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:19:54.851 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:54.851 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3
00:19:54.851 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:54.851 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:54.851 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:54.851 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:19:54.851 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:54.851 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:54.851
00:19:55.109 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:55.109 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:55.109 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:55.109 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:55.109 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:55.109 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:55.109 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:55.109 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:55.109 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:55.109 {
00:19:55.109 "cntlid": 7,
00:19:55.109 "qid": 0,
00:19:55.109 "state": "enabled",
00:19:55.109 "thread": "nvmf_tgt_poll_group_000",
00:19:55.109 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:19:55.109 "listen_address": {
00:19:55.109 "trtype": "TCP",
00:19:55.109 "adrfam": "IPv4",
00:19:55.109 "traddr": "10.0.0.2",
00:19:55.109 "trsvcid": "4420"
00:19:55.109 },
00:19:55.109 "peer_address": {
00:19:55.109 "trtype": "TCP",
00:19:55.109 "adrfam": "IPv4",
00:19:55.109 "traddr": "10.0.0.1",
00:19:55.109 "trsvcid": "45842"
00:19:55.109 },
00:19:55.109 "auth": {
00:19:55.109 "state": "completed",
00:19:55.109 "digest": "sha256",
00:19:55.109 "dhgroup": "null"
00:19:55.109 }
00:19:55.109 }
00:19:55.109 ]'
00:19:55.109 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:55.402 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:55.402 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:55.402 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:19:55.402 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:55.402 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:55.402 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:55.402 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:55.792 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTcwNjZlY2QwZGI0NDRjMzZmN2ZiZGEyMTJkMzZhNDExMjcwNmJhZWJlOGJhOTFmZDc1ZjE2YjQ0ZjcxYTk1ZKgNbOo=:
00:19:55.792 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MTcwNjZlY2QwZGI0NDRjMzZmN2ZiZGEyMTJkMzZhNDExMjcwNmJhZWJlOGJhOTFmZDc1ZjE2YjQ0ZjcxYTk1ZKgNbOo=:
00:19:56.051 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:56.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:56.051 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:19:56.051 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:56.051 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:56.051 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:56.051 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:19:56.051 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:56.051 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:19:56.051 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:19:56.310 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0
00:19:56.311 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:56.311 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:19:56.311 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:19:56.311 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:19:56.311 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:56.311 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:56.311 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:56.311 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:56.311 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:56.311 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:56.311 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:56.311 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:56.570
00:19:56.570 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:56.570 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:56.570 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:56.829 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:56.829 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:56.829 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:56.829 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:56.829 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:56.829 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:56.829 {
00:19:56.829 "cntlid": 9,
00:19:56.829 "qid": 0,
00:19:56.829 "state": "enabled",
00:19:56.829 "thread": "nvmf_tgt_poll_group_000",
00:19:56.829 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:19:56.829 "listen_address": {
00:19:56.829 "trtype": "TCP",
00:19:56.829 "adrfam": "IPv4",
00:19:56.829 "traddr": "10.0.0.2",
00:19:56.829 "trsvcid": "4420"
00:19:56.829 },
00:19:56.829 "peer_address": {
00:19:56.829 "trtype": "TCP",
00:19:56.829 "adrfam": "IPv4",
00:19:56.829 "traddr": "10.0.0.1",
00:19:56.829 "trsvcid": "60136"
}, 00:19:56.829 "auth": { 00:19:56.829 "state": "completed", 00:19:56.829 "digest": "sha256", 00:19:56.829 "dhgroup": "ffdhe2048" 00:19:56.829 } 00:19:56.829 } 00:19:56.829 ]' 00:19:56.829 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:56.829 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:56.829 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.829 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:56.829 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.829 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.829 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.829 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.088 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzBhNTQ2ZDZhMWI1MTVjN2FiOWRiZTc3OTk2MGRlM2M4NGM0NzIxMmRiNjJlMjcyIJ7+pQ==: --dhchap-ctrl-secret DHHC-1:03:YThmMWI5M2Q1NTA2ZGNlNmFiMmE4ZWFjMWQwODk0YmYyNzUzMDU4Yjc3Yjk5OGQwZjRjYWI4NWVhYWViYjFlOQ77tv4=: 00:19:57.089 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzBhNTQ2ZDZhMWI1MTVjN2FiOWRiZTc3OTk2MGRlM2M4NGM0NzIxMmRiNjJlMjcyIJ7+pQ==: --dhchap-ctrl-secret 
DHHC-1:03:YThmMWI5M2Q1NTA2ZGNlNmFiMmE4ZWFjMWQwODk0YmYyNzUzMDU4Yjc3Yjk5OGQwZjRjYWI4NWVhYWViYjFlOQ77tv4=: 00:19:57.680 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.680 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.680 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:57.680 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.680 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.680 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.680 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:57.680 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:57.680 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:57.939 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:57.939 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:57.939 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:57.939 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:57.939 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:19:57.939 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.939 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.939 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.939 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.939 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.939 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.939 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.939 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.198 00:19:58.198 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.198 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.198 12:03:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.457 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.457 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.457 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.457 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.457 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.457 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:58.457 { 00:19:58.457 "cntlid": 11, 00:19:58.457 "qid": 0, 00:19:58.457 "state": "enabled", 00:19:58.457 "thread": "nvmf_tgt_poll_group_000", 00:19:58.457 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:58.457 "listen_address": { 00:19:58.457 "trtype": "TCP", 00:19:58.457 "adrfam": "IPv4", 00:19:58.457 "traddr": "10.0.0.2", 00:19:58.457 "trsvcid": "4420" 00:19:58.457 }, 00:19:58.457 "peer_address": { 00:19:58.457 "trtype": "TCP", 00:19:58.457 "adrfam": "IPv4", 00:19:58.457 "traddr": "10.0.0.1", 00:19:58.457 "trsvcid": "60150" 00:19:58.457 }, 00:19:58.457 "auth": { 00:19:58.457 "state": "completed", 00:19:58.457 "digest": "sha256", 00:19:58.457 "dhgroup": "ffdhe2048" 00:19:58.457 } 00:19:58.457 } 00:19:58.457 ]' 00:19:58.457 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.457 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:58.457 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:58.457 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:58.457 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:58.457 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.457 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.457 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.716 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTVkZWRmNTk5NmJkNjFhMTAwMmQxNWVjNTc1YjI2NWFEY+3Z: --dhchap-ctrl-secret DHHC-1:02:NTJkY2U5NzdiNDZlNzhkNWQ1ZTgyZTIzN2U1ZjQyYjk5MDY0MTU1ZjE1ZTVmM2Exmzo2Ow==: 00:19:58.716 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTVkZWRmNTk5NmJkNjFhMTAwMmQxNWVjNTc1YjI2NWFEY+3Z: --dhchap-ctrl-secret DHHC-1:02:NTJkY2U5NzdiNDZlNzhkNWQ1ZTgyZTIzN2U1ZjQyYjk5MDY0MTU1ZjE1ZTVmM2Exmzo2Ow==: 00:19:59.284 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.284 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.284 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:59.284 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.284 12:03:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.284 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.284 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:59.284 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:59.284 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:59.543 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:19:59.544 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:59.544 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:59.544 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:59.544 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:59.544 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.544 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.544 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.544 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.544 12:03:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.544 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.544 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.544 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.803 00:19:59.803 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:59.803 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:59.803 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.803 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.803 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.803 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.803 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.062 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.062 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.062 { 00:20:00.062 "cntlid": 13, 00:20:00.062 "qid": 0, 00:20:00.062 "state": "enabled", 00:20:00.062 "thread": "nvmf_tgt_poll_group_000", 00:20:00.062 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:00.062 "listen_address": { 00:20:00.063 "trtype": "TCP", 00:20:00.063 "adrfam": "IPv4", 00:20:00.063 "traddr": "10.0.0.2", 00:20:00.063 "trsvcid": "4420" 00:20:00.063 }, 00:20:00.063 "peer_address": { 00:20:00.063 "trtype": "TCP", 00:20:00.063 "adrfam": "IPv4", 00:20:00.063 "traddr": "10.0.0.1", 00:20:00.063 "trsvcid": "60172" 00:20:00.063 }, 00:20:00.063 "auth": { 00:20:00.063 "state": "completed", 00:20:00.063 "digest": "sha256", 00:20:00.063 "dhgroup": "ffdhe2048" 00:20:00.063 } 00:20:00.063 } 00:20:00.063 ]' 00:20:00.063 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.063 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:00.063 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.063 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:00.063 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.063 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.063 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.063 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
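The verification step that `target/auth.sh` repeats after each attach (the `jq -r '.[0].auth.digest'` / `.dhgroup` / `.state` checks at `auth.sh@75`-`@77`) can be reproduced offline against the qpairs JSON captured in the log. A minimal sketch, needing only `jq` and not a live SPDK target; the JSON below is trimmed from the `nvmf_subsystem_get_qpairs` output above (cntlid 13):

```shell
# Trimmed copy of the qpairs array returned by nvmf_subsystem_get_qpairs
# in the log above (cntlid 13, sha256 / ffdhe2048 iteration).
qpairs='[{"cntlid": 13, "qid": 0, "state": "enabled",
  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
  "auth": {"state": "completed", "digest": "sha256", "dhgroup": "ffdhe2048"}}]'

# Same extraction the test script performs with jq:
digest=$(echo "$qpairs" | jq -r '.[0].auth.digest')    # auth.sh@75
dhgroup=$(echo "$qpairs" | jq -r '.[0].auth.dhgroup')  # auth.sh@76
state=$(echo "$qpairs" | jq -r '.[0].auth.state')      # auth.sh@77

# The test passes only when all three match the configured parameters
# and the DH-HMAC-CHAP exchange reached the "completed" state.
[ "$digest" = "sha256" ] && [ "$dhgroup" = "ffdhe2048" ] && [ "$state" = "completed" ] && echo OK
```

A qpair whose `auth.state` is anything other than `completed` (for example `failed`) would make the `[[ completed == completed ]]` comparison in the log fail and abort the run.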
00:20:00.321 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjZkNTg1ZGVlZGIzZmM3NDc2N2RmMGE1MTllNDhjNzcwMDFkMjJmZjUyMjU3NDFkNmmLaw==: --dhchap-ctrl-secret DHHC-1:01:OGU5ZmM5Y2I5YTEwYjRjMWMwYWU2NjE5OWRlNGFkMGT1IRUP: 00:20:00.321 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjZkNTg1ZGVlZGIzZmM3NDc2N2RmMGE1MTllNDhjNzcwMDFkMjJmZjUyMjU3NDFkNmmLaw==: --dhchap-ctrl-secret DHHC-1:01:OGU5ZmM5Y2I5YTEwYjRjMWMwYWU2NjE5OWRlNGFkMGT1IRUP: 00:20:00.889 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.889 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.889 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:00.889 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.889 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.889 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.889 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:00.889 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:00.889 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:01.148 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:20:01.148 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:01.148 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:01.148 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:01.148 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:01.148 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.148 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:01.148 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.148 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.148 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.148 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:01.148 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:01.148 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:01.148 00:20:01.406 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:01.406 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:01.406 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.406 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.406 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.406 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.406 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.406 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.406 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:01.406 { 00:20:01.406 "cntlid": 15, 00:20:01.406 "qid": 0, 00:20:01.406 "state": "enabled", 00:20:01.406 "thread": "nvmf_tgt_poll_group_000", 00:20:01.406 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:01.406 "listen_address": { 00:20:01.406 "trtype": "TCP", 00:20:01.406 "adrfam": "IPv4", 00:20:01.406 "traddr": "10.0.0.2", 00:20:01.406 "trsvcid": "4420" 00:20:01.406 }, 00:20:01.406 "peer_address": { 00:20:01.406 "trtype": "TCP", 00:20:01.406 "adrfam": "IPv4", 00:20:01.406 "traddr": "10.0.0.1", 00:20:01.406 "trsvcid": "60200" 00:20:01.406 }, 00:20:01.406 "auth": { 00:20:01.406 
"state": "completed", 00:20:01.406 "digest": "sha256", 00:20:01.406 "dhgroup": "ffdhe2048" 00:20:01.406 } 00:20:01.406 } 00:20:01.406 ]' 00:20:01.406 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:01.664 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:01.664 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:01.664 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:01.664 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:01.664 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.664 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.664 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.922 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTcwNjZlY2QwZGI0NDRjMzZmN2ZiZGEyMTJkMzZhNDExMjcwNmJhZWJlOGJhOTFmZDc1ZjE2YjQ0ZjcxYTk1ZKgNbOo=: 00:20:01.922 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MTcwNjZlY2QwZGI0NDRjMzZmN2ZiZGEyMTJkMzZhNDExMjcwNmJhZWJlOGJhOTFmZDc1ZjE2YjQ0ZjcxYTk1ZKgNbOo=: 00:20:02.489 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.489 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.489 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:02.489 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.489 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.489 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.489 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:02.489 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:02.489 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:02.489 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:02.489 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:20:02.489 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:02.489 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:02.489 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:02.489 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:02.489 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.489 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.489 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.489 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.489 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.489 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.489 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.489 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.748 00:20:02.748 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.748 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.748 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.006 
12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.006 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.006 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.006 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.007 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.007 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:03.007 { 00:20:03.007 "cntlid": 17, 00:20:03.007 "qid": 0, 00:20:03.007 "state": "enabled", 00:20:03.007 "thread": "nvmf_tgt_poll_group_000", 00:20:03.007 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:03.007 "listen_address": { 00:20:03.007 "trtype": "TCP", 00:20:03.007 "adrfam": "IPv4", 00:20:03.007 "traddr": "10.0.0.2", 00:20:03.007 "trsvcid": "4420" 00:20:03.007 }, 00:20:03.007 "peer_address": { 00:20:03.007 "trtype": "TCP", 00:20:03.007 "adrfam": "IPv4", 00:20:03.007 "traddr": "10.0.0.1", 00:20:03.007 "trsvcid": "60238" 00:20:03.007 }, 00:20:03.007 "auth": { 00:20:03.007 "state": "completed", 00:20:03.007 "digest": "sha256", 00:20:03.007 "dhgroup": "ffdhe3072" 00:20:03.007 } 00:20:03.007 } 00:20:03.007 ]' 00:20:03.007 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:03.007 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:03.007 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:03.266 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:03.266 12:03:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:03.266 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.266 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.266 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.525 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzBhNTQ2ZDZhMWI1MTVjN2FiOWRiZTc3OTk2MGRlM2M4NGM0NzIxMmRiNjJlMjcyIJ7+pQ==: --dhchap-ctrl-secret DHHC-1:03:YThmMWI5M2Q1NTA2ZGNlNmFiMmE4ZWFjMWQwODk0YmYyNzUzMDU4Yjc3Yjk5OGQwZjRjYWI4NWVhYWViYjFlOQ77tv4=: 00:20:03.525 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzBhNTQ2ZDZhMWI1MTVjN2FiOWRiZTc3OTk2MGRlM2M4NGM0NzIxMmRiNjJlMjcyIJ7+pQ==: --dhchap-ctrl-secret DHHC-1:03:YThmMWI5M2Q1NTA2ZGNlNmFiMmE4ZWFjMWQwODk0YmYyNzUzMDU4Yjc3Yjk5OGQwZjRjYWI4NWVhYWViYjFlOQ77tv4=: 00:20:04.092 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.092 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.092 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:04.092 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.092 12:03:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.092 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.092 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:04.092 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:04.092 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:04.092 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:20:04.092 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:04.092 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:04.092 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:04.092 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:04.092 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.093 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.093 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.093 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.093 12:03:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.093 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.093 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.093 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.352 00:20:04.352 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:04.352 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:04.352 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.611 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.611 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.611 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.611 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.611 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.611 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:04.611 { 00:20:04.611 "cntlid": 19, 00:20:04.611 "qid": 0, 00:20:04.611 "state": "enabled", 00:20:04.611 "thread": "nvmf_tgt_poll_group_000", 00:20:04.611 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:04.611 "listen_address": { 00:20:04.611 "trtype": "TCP", 00:20:04.611 "adrfam": "IPv4", 00:20:04.611 "traddr": "10.0.0.2", 00:20:04.611 "trsvcid": "4420" 00:20:04.611 }, 00:20:04.611 "peer_address": { 00:20:04.611 "trtype": "TCP", 00:20:04.611 "adrfam": "IPv4", 00:20:04.611 "traddr": "10.0.0.1", 00:20:04.611 "trsvcid": "60284" 00:20:04.611 }, 00:20:04.611 "auth": { 00:20:04.611 "state": "completed", 00:20:04.611 "digest": "sha256", 00:20:04.611 "dhgroup": "ffdhe3072" 00:20:04.611 } 00:20:04.611 } 00:20:04.611 ]' 00:20:04.611 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:04.611 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:04.611 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:04.611 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:04.611 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:04.870 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.870 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.870 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:20:04.870 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTVkZWRmNTk5NmJkNjFhMTAwMmQxNWVjNTc1YjI2NWFEY+3Z: --dhchap-ctrl-secret DHHC-1:02:NTJkY2U5NzdiNDZlNzhkNWQ1ZTgyZTIzN2U1ZjQyYjk5MDY0MTU1ZjE1ZTVmM2Exmzo2Ow==: 00:20:04.870 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTVkZWRmNTk5NmJkNjFhMTAwMmQxNWVjNTc1YjI2NWFEY+3Z: --dhchap-ctrl-secret DHHC-1:02:NTJkY2U5NzdiNDZlNzhkNWQ1ZTgyZTIzN2U1ZjQyYjk5MDY0MTU1ZjE1ZTVmM2Exmzo2Ow==: 00:20:05.438 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.438 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:05.438 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.438 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.438 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.438 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:05.438 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:05.438 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:05.696 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:20:05.696 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:05.696 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:05.696 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:05.696 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:05.696 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.696 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.697 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.697 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.697 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.697 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.697 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.697 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.955 00:20:05.955 12:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:05.955 12:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:05.955 12:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.215 12:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.215 12:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.215 12:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.215 12:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.215 12:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.215 12:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:06.215 { 00:20:06.215 "cntlid": 21, 00:20:06.215 "qid": 0, 00:20:06.215 "state": "enabled", 00:20:06.215 "thread": "nvmf_tgt_poll_group_000", 00:20:06.215 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:06.215 "listen_address": { 00:20:06.215 "trtype": "TCP", 00:20:06.215 "adrfam": "IPv4", 00:20:06.215 "traddr": "10.0.0.2", 00:20:06.215 "trsvcid": "4420" 00:20:06.215 }, 00:20:06.215 "peer_address": { 00:20:06.215 "trtype": "TCP", 00:20:06.215 "adrfam": "IPv4", 
00:20:06.215 "traddr": "10.0.0.1", 00:20:06.215 "trsvcid": "60306" 00:20:06.215 }, 00:20:06.215 "auth": { 00:20:06.215 "state": "completed", 00:20:06.215 "digest": "sha256", 00:20:06.215 "dhgroup": "ffdhe3072" 00:20:06.215 } 00:20:06.215 } 00:20:06.215 ]' 00:20:06.215 12:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:06.215 12:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:06.215 12:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:06.215 12:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:06.215 12:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:06.215 12:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.215 12:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.215 12:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.474 12:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjZkNTg1ZGVlZGIzZmM3NDc2N2RmMGE1MTllNDhjNzcwMDFkMjJmZjUyMjU3NDFkNmmLaw==: --dhchap-ctrl-secret DHHC-1:01:OGU5ZmM5Y2I5YTEwYjRjMWMwYWU2NjE5OWRlNGFkMGT1IRUP: 00:20:06.474 12:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:02:NjZkNTg1ZGVlZGIzZmM3NDc2N2RmMGE1MTllNDhjNzcwMDFkMjJmZjUyMjU3NDFkNmmLaw==: --dhchap-ctrl-secret DHHC-1:01:OGU5ZmM5Y2I5YTEwYjRjMWMwYWU2NjE5OWRlNGFkMGT1IRUP: 00:20:07.042 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.042 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:07.042 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.042 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.042 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.042 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:07.042 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:07.042 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:07.301 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:20:07.301 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.301 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:07.301 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:07.301 12:03:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:07.301 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.301 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:07.301 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.301 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.301 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.301 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:07.302 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:07.302 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:07.560 00:20:07.560 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:07.560 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:07.560 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.818 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.818 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.818 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.818 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.818 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.818 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:07.818 { 00:20:07.818 "cntlid": 23, 00:20:07.818 "qid": 0, 00:20:07.818 "state": "enabled", 00:20:07.818 "thread": "nvmf_tgt_poll_group_000", 00:20:07.818 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:07.818 "listen_address": { 00:20:07.818 "trtype": "TCP", 00:20:07.818 "adrfam": "IPv4", 00:20:07.818 "traddr": "10.0.0.2", 00:20:07.818 "trsvcid": "4420" 00:20:07.818 }, 00:20:07.818 "peer_address": { 00:20:07.818 "trtype": "TCP", 00:20:07.818 "adrfam": "IPv4", 00:20:07.818 "traddr": "10.0.0.1", 00:20:07.818 "trsvcid": "43882" 00:20:07.818 }, 00:20:07.818 "auth": { 00:20:07.818 "state": "completed", 00:20:07.818 "digest": "sha256", 00:20:07.818 "dhgroup": "ffdhe3072" 00:20:07.818 } 00:20:07.818 } 00:20:07.818 ]' 00:20:07.818 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:07.818 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:07.818 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:07.818 12:03:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:07.818 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:07.818 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.818 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.818 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.076 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTcwNjZlY2QwZGI0NDRjMzZmN2ZiZGEyMTJkMzZhNDExMjcwNmJhZWJlOGJhOTFmZDc1ZjE2YjQ0ZjcxYTk1ZKgNbOo=: 00:20:08.076 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MTcwNjZlY2QwZGI0NDRjMzZmN2ZiZGEyMTJkMzZhNDExMjcwNmJhZWJlOGJhOTFmZDc1ZjE2YjQ0ZjcxYTk1ZKgNbOo=: 00:20:08.643 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.643 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:08.643 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.643 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:20:08.643 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.643 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:08.643 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:08.643 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:08.643 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:08.901 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:20:08.901 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:08.901 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:08.901 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:08.901 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:08.901 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.901 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.902 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.902 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:20:08.902 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.902 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.902 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.902 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.160 00:20:09.160 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.160 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.160 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.419 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.419 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.419 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.419 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.419 12:03:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.419 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:09.419 { 00:20:09.419 "cntlid": 25, 00:20:09.419 "qid": 0, 00:20:09.419 "state": "enabled", 00:20:09.419 "thread": "nvmf_tgt_poll_group_000", 00:20:09.419 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:09.419 "listen_address": { 00:20:09.419 "trtype": "TCP", 00:20:09.419 "adrfam": "IPv4", 00:20:09.419 "traddr": "10.0.0.2", 00:20:09.419 "trsvcid": "4420" 00:20:09.419 }, 00:20:09.419 "peer_address": { 00:20:09.419 "trtype": "TCP", 00:20:09.419 "adrfam": "IPv4", 00:20:09.419 "traddr": "10.0.0.1", 00:20:09.419 "trsvcid": "43898" 00:20:09.419 }, 00:20:09.419 "auth": { 00:20:09.419 "state": "completed", 00:20:09.419 "digest": "sha256", 00:20:09.419 "dhgroup": "ffdhe4096" 00:20:09.419 } 00:20:09.419 } 00:20:09.419 ]' 00:20:09.419 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:09.419 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:09.419 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:09.419 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:09.419 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:09.419 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.419 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.419 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.677 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzBhNTQ2ZDZhMWI1MTVjN2FiOWRiZTc3OTk2MGRlM2M4NGM0NzIxMmRiNjJlMjcyIJ7+pQ==: --dhchap-ctrl-secret DHHC-1:03:YThmMWI5M2Q1NTA2ZGNlNmFiMmE4ZWFjMWQwODk0YmYyNzUzMDU4Yjc3Yjk5OGQwZjRjYWI4NWVhYWViYjFlOQ77tv4=: 00:20:09.677 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzBhNTQ2ZDZhMWI1MTVjN2FiOWRiZTc3OTk2MGRlM2M4NGM0NzIxMmRiNjJlMjcyIJ7+pQ==: --dhchap-ctrl-secret DHHC-1:03:YThmMWI5M2Q1NTA2ZGNlNmFiMmE4ZWFjMWQwODk0YmYyNzUzMDU4Yjc3Yjk5OGQwZjRjYWI4NWVhYWViYjFlOQ77tv4=: 00:20:10.243 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.243 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:10.243 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.243 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.243 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.243 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:10.243 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:10.243 12:03:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:10.500 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:20:10.500 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:10.500 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:10.500 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:10.500 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:10.500 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.500 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.500 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.500 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.500 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.500 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.500 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.500 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.758 00:20:10.758 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:10.758 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:10.758 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.758 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.017 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.017 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.017 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.017 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.017 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:11.017 { 00:20:11.017 "cntlid": 27, 00:20:11.017 "qid": 0, 00:20:11.017 "state": "enabled", 00:20:11.017 "thread": "nvmf_tgt_poll_group_000", 00:20:11.017 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:11.017 "listen_address": { 00:20:11.017 "trtype": "TCP", 00:20:11.017 "adrfam": "IPv4", 00:20:11.017 "traddr": "10.0.0.2", 00:20:11.017 
"trsvcid": "4420" 00:20:11.017 }, 00:20:11.017 "peer_address": { 00:20:11.017 "trtype": "TCP", 00:20:11.017 "adrfam": "IPv4", 00:20:11.017 "traddr": "10.0.0.1", 00:20:11.017 "trsvcid": "43914" 00:20:11.017 }, 00:20:11.017 "auth": { 00:20:11.017 "state": "completed", 00:20:11.017 "digest": "sha256", 00:20:11.017 "dhgroup": "ffdhe4096" 00:20:11.017 } 00:20:11.017 } 00:20:11.017 ]' 00:20:11.017 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:11.017 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:11.017 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:11.017 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:11.017 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.017 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.017 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.017 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.276 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTVkZWRmNTk5NmJkNjFhMTAwMmQxNWVjNTc1YjI2NWFEY+3Z: --dhchap-ctrl-secret DHHC-1:02:NTJkY2U5NzdiNDZlNzhkNWQ1ZTgyZTIzN2U1ZjQyYjk5MDY0MTU1ZjE1ZTVmM2Exmzo2Ow==: 00:20:11.276 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTVkZWRmNTk5NmJkNjFhMTAwMmQxNWVjNTc1YjI2NWFEY+3Z: --dhchap-ctrl-secret DHHC-1:02:NTJkY2U5NzdiNDZlNzhkNWQ1ZTgyZTIzN2U1ZjQyYjk5MDY0MTU1ZjE1ZTVmM2Exmzo2Ow==: 00:20:11.843 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.843 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:11.843 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.843 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.843 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.843 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:11.843 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:11.843 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:12.102 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:20:12.102 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:12.102 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:12.102 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:12.102 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:12.102 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.102 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.102 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.102 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.102 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.102 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.102 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.102 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.360 00:20:12.360 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.360 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:20:12.360 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.619 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.619 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.619 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.619 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.619 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.619 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:12.619 { 00:20:12.619 "cntlid": 29, 00:20:12.619 "qid": 0, 00:20:12.619 "state": "enabled", 00:20:12.619 "thread": "nvmf_tgt_poll_group_000", 00:20:12.619 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:12.619 "listen_address": { 00:20:12.619 "trtype": "TCP", 00:20:12.619 "adrfam": "IPv4", 00:20:12.619 "traddr": "10.0.0.2", 00:20:12.619 "trsvcid": "4420" 00:20:12.619 }, 00:20:12.619 "peer_address": { 00:20:12.619 "trtype": "TCP", 00:20:12.619 "adrfam": "IPv4", 00:20:12.619 "traddr": "10.0.0.1", 00:20:12.619 "trsvcid": "43954" 00:20:12.619 }, 00:20:12.619 "auth": { 00:20:12.619 "state": "completed", 00:20:12.619 "digest": "sha256", 00:20:12.619 "dhgroup": "ffdhe4096" 00:20:12.619 } 00:20:12.619 } 00:20:12.619 ]' 00:20:12.619 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:12.619 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:12.619 12:03:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:12.619 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:12.619 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:12.619 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.619 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.619 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.878 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjZkNTg1ZGVlZGIzZmM3NDc2N2RmMGE1MTllNDhjNzcwMDFkMjJmZjUyMjU3NDFkNmmLaw==: --dhchap-ctrl-secret DHHC-1:01:OGU5ZmM5Y2I5YTEwYjRjMWMwYWU2NjE5OWRlNGFkMGT1IRUP: 00:20:12.878 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjZkNTg1ZGVlZGIzZmM3NDc2N2RmMGE1MTllNDhjNzcwMDFkMjJmZjUyMjU3NDFkNmmLaw==: --dhchap-ctrl-secret DHHC-1:01:OGU5ZmM5Y2I5YTEwYjRjMWMwYWU2NjE5OWRlNGFkMGT1IRUP: 00:20:13.444 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.444 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:13.444 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.444 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.444 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.444 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:13.444 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:13.445 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:13.703 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:20:13.703 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:13.703 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:13.703 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:13.703 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:13.703 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.703 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:13.703 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.703 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.703 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.703 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:13.703 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:13.704 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:13.962 00:20:13.962 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:13.962 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.962 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:14.221 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.221 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.221 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.221 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:14.221 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.221 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:14.221 { 00:20:14.221 "cntlid": 31, 00:20:14.221 "qid": 0, 00:20:14.221 "state": "enabled", 00:20:14.221 "thread": "nvmf_tgt_poll_group_000", 00:20:14.221 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:14.221 "listen_address": { 00:20:14.221 "trtype": "TCP", 00:20:14.221 "adrfam": "IPv4", 00:20:14.221 "traddr": "10.0.0.2", 00:20:14.221 "trsvcid": "4420" 00:20:14.221 }, 00:20:14.221 "peer_address": { 00:20:14.221 "trtype": "TCP", 00:20:14.221 "adrfam": "IPv4", 00:20:14.221 "traddr": "10.0.0.1", 00:20:14.221 "trsvcid": "43986" 00:20:14.221 }, 00:20:14.221 "auth": { 00:20:14.221 "state": "completed", 00:20:14.221 "digest": "sha256", 00:20:14.221 "dhgroup": "ffdhe4096" 00:20:14.221 } 00:20:14.221 } 00:20:14.221 ]' 00:20:14.221 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:14.221 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:14.221 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:14.221 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:14.221 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:14.221 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.221 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.221 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.480 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTcwNjZlY2QwZGI0NDRjMzZmN2ZiZGEyMTJkMzZhNDExMjcwNmJhZWJlOGJhOTFmZDc1ZjE2YjQ0ZjcxYTk1ZKgNbOo=: 00:20:14.480 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MTcwNjZlY2QwZGI0NDRjMzZmN2ZiZGEyMTJkMzZhNDExMjcwNmJhZWJlOGJhOTFmZDc1ZjE2YjQ0ZjcxYTk1ZKgNbOo=: 00:20:15.048 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.048 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:15.048 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.048 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.048 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.048 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:15.048 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:15.048 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:15.048 12:03:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:15.307 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:20:15.307 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:15.307 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:15.307 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:15.307 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:15.307 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.307 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.307 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.307 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.307 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.307 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.307 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.307 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.565 00:20:15.566 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:15.566 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:15.566 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.824 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.824 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.824 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.825 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.825 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.825 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:15.825 { 00:20:15.825 "cntlid": 33, 00:20:15.825 "qid": 0, 00:20:15.825 "state": "enabled", 00:20:15.825 "thread": "nvmf_tgt_poll_group_000", 00:20:15.825 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:15.825 "listen_address": { 00:20:15.825 "trtype": "TCP", 00:20:15.825 "adrfam": "IPv4", 00:20:15.825 "traddr": "10.0.0.2", 00:20:15.825 
"trsvcid": "4420" 00:20:15.825 }, 00:20:15.825 "peer_address": { 00:20:15.825 "trtype": "TCP", 00:20:15.825 "adrfam": "IPv4", 00:20:15.825 "traddr": "10.0.0.1", 00:20:15.825 "trsvcid": "44004" 00:20:15.825 }, 00:20:15.825 "auth": { 00:20:15.825 "state": "completed", 00:20:15.825 "digest": "sha256", 00:20:15.825 "dhgroup": "ffdhe6144" 00:20:15.825 } 00:20:15.825 } 00:20:15.825 ]' 00:20:15.825 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:15.825 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:15.825 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:15.825 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:15.825 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:15.825 12:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.825 12:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.825 12:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.083 12:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzBhNTQ2ZDZhMWI1MTVjN2FiOWRiZTc3OTk2MGRlM2M4NGM0NzIxMmRiNjJlMjcyIJ7+pQ==: --dhchap-ctrl-secret DHHC-1:03:YThmMWI5M2Q1NTA2ZGNlNmFiMmE4ZWFjMWQwODk0YmYyNzUzMDU4Yjc3Yjk5OGQwZjRjYWI4NWVhYWViYjFlOQ77tv4=: 00:20:16.083 12:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzBhNTQ2ZDZhMWI1MTVjN2FiOWRiZTc3OTk2MGRlM2M4NGM0NzIxMmRiNjJlMjcyIJ7+pQ==: --dhchap-ctrl-secret DHHC-1:03:YThmMWI5M2Q1NTA2ZGNlNmFiMmE4ZWFjMWQwODk0YmYyNzUzMDU4Yjc3Yjk5OGQwZjRjYWI4NWVhYWViYjFlOQ77tv4=: 00:20:16.651 12:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.651 12:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:16.651 12:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.651 12:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.651 12:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.651 12:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:16.651 12:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:16.651 12:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:16.910 12:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:20:16.910 12:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.910 12:03:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:16.910 12:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:16.910 12:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:16.910 12:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.910 12:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.910 12:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.910 12:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.910 12:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.910 12:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.910 12:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.911 12:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.169 00:20:17.170 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:17.170 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:17.170 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.428 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.428 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.428 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.428 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.428 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.428 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:17.428 { 00:20:17.428 "cntlid": 35, 00:20:17.428 "qid": 0, 00:20:17.429 "state": "enabled", 00:20:17.429 "thread": "nvmf_tgt_poll_group_000", 00:20:17.429 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:17.429 "listen_address": { 00:20:17.429 "trtype": "TCP", 00:20:17.429 "adrfam": "IPv4", 00:20:17.429 "traddr": "10.0.0.2", 00:20:17.429 "trsvcid": "4420" 00:20:17.429 }, 00:20:17.429 "peer_address": { 00:20:17.429 "trtype": "TCP", 00:20:17.429 "adrfam": "IPv4", 00:20:17.429 "traddr": "10.0.0.1", 00:20:17.429 "trsvcid": "39578" 00:20:17.429 }, 00:20:17.429 "auth": { 00:20:17.429 "state": "completed", 00:20:17.429 "digest": "sha256", 00:20:17.429 "dhgroup": "ffdhe6144" 00:20:17.429 } 00:20:17.429 } 00:20:17.429 ]' 00:20:17.429 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:17.429 12:03:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:17.429 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:17.688 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:17.688 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:17.688 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.688 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.688 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.688 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTVkZWRmNTk5NmJkNjFhMTAwMmQxNWVjNTc1YjI2NWFEY+3Z: --dhchap-ctrl-secret DHHC-1:02:NTJkY2U5NzdiNDZlNzhkNWQ1ZTgyZTIzN2U1ZjQyYjk5MDY0MTU1ZjE1ZTVmM2Exmzo2Ow==: 00:20:17.688 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTVkZWRmNTk5NmJkNjFhMTAwMmQxNWVjNTc1YjI2NWFEY+3Z: --dhchap-ctrl-secret DHHC-1:02:NTJkY2U5NzdiNDZlNzhkNWQ1ZTgyZTIzN2U1ZjQyYjk5MDY0MTU1ZjE1ZTVmM2Exmzo2Ow==: 00:20:18.255 12:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.255 12:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:18.255 12:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.255 12:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.514 12:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.514 12:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:18.514 12:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:18.514 12:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:18.514 12:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:20:18.514 12:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:18.514 12:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:18.514 12:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:18.514 12:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:18.514 12:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.514 12:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:18.514 12:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.514 12:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.514 12:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.514 12:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.514 12:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.514 12:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.082 00:20:19.082 12:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:19.082 12:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:19.082 12:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.082 12:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.082 12:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.082 12:03:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.082 12:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.082 12:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.082 12:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.082 { 00:20:19.082 "cntlid": 37, 00:20:19.082 "qid": 0, 00:20:19.082 "state": "enabled", 00:20:19.082 "thread": "nvmf_tgt_poll_group_000", 00:20:19.082 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:19.082 "listen_address": { 00:20:19.082 "trtype": "TCP", 00:20:19.082 "adrfam": "IPv4", 00:20:19.082 "traddr": "10.0.0.2", 00:20:19.082 "trsvcid": "4420" 00:20:19.082 }, 00:20:19.082 "peer_address": { 00:20:19.082 "trtype": "TCP", 00:20:19.082 "adrfam": "IPv4", 00:20:19.082 "traddr": "10.0.0.1", 00:20:19.082 "trsvcid": "39608" 00:20:19.082 }, 00:20:19.082 "auth": { 00:20:19.082 "state": "completed", 00:20:19.082 "digest": "sha256", 00:20:19.082 "dhgroup": "ffdhe6144" 00:20:19.082 } 00:20:19.082 } 00:20:19.082 ]' 00:20:19.082 12:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.082 12:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:19.082 12:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.341 12:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:19.341 12:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.341 12:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.341 12:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.341 12:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.341 12:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjZkNTg1ZGVlZGIzZmM3NDc2N2RmMGE1MTllNDhjNzcwMDFkMjJmZjUyMjU3NDFkNmmLaw==: --dhchap-ctrl-secret DHHC-1:01:OGU5ZmM5Y2I5YTEwYjRjMWMwYWU2NjE5OWRlNGFkMGT1IRUP: 00:20:19.341 12:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjZkNTg1ZGVlZGIzZmM3NDc2N2RmMGE1MTllNDhjNzcwMDFkMjJmZjUyMjU3NDFkNmmLaw==: --dhchap-ctrl-secret DHHC-1:01:OGU5ZmM5Y2I5YTEwYjRjMWMwYWU2NjE5OWRlNGFkMGT1IRUP: 00:20:19.909 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.909 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:19.909 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.909 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.168 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.168 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.168 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:20.168 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:20.168 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:20:20.168 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.168 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:20.168 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:20.168 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:20.168 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.168 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:20.168 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.168 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.168 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.168 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:20.168 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:20.168 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:20.736 00:20:20.736 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:20.736 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.736 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:20.736 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.736 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.736 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.736 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.736 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.736 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:20.736 { 00:20:20.736 "cntlid": 39, 00:20:20.736 "qid": 0, 00:20:20.736 "state": "enabled", 00:20:20.736 "thread": "nvmf_tgt_poll_group_000", 00:20:20.736 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:20.736 "listen_address": { 00:20:20.736 "trtype": "TCP", 00:20:20.736 "adrfam": 
"IPv4", 00:20:20.736 "traddr": "10.0.0.2", 00:20:20.736 "trsvcid": "4420" 00:20:20.736 }, 00:20:20.736 "peer_address": { 00:20:20.736 "trtype": "TCP", 00:20:20.736 "adrfam": "IPv4", 00:20:20.736 "traddr": "10.0.0.1", 00:20:20.736 "trsvcid": "39638" 00:20:20.736 }, 00:20:20.736 "auth": { 00:20:20.736 "state": "completed", 00:20:20.736 "digest": "sha256", 00:20:20.736 "dhgroup": "ffdhe6144" 00:20:20.736 } 00:20:20.736 } 00:20:20.736 ]' 00:20:20.736 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:20.736 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:20.736 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:20.995 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:20.995 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:20.995 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.995 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.995 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.995 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTcwNjZlY2QwZGI0NDRjMzZmN2ZiZGEyMTJkMzZhNDExMjcwNmJhZWJlOGJhOTFmZDc1ZjE2YjQ0ZjcxYTk1ZKgNbOo=: 00:20:20.995 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MTcwNjZlY2QwZGI0NDRjMzZmN2ZiZGEyMTJkMzZhNDExMjcwNmJhZWJlOGJhOTFmZDc1ZjE2YjQ0ZjcxYTk1ZKgNbOo=: 00:20:21.561 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.561 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.561 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:21.561 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.561 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.561 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.561 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:21.561 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:21.561 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:21.561 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:21.820 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:20:21.820 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:21.820 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:21.820 
12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:21.820 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:21.820 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.820 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.820 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.820 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.820 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.820 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.820 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.820 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.387 00:20:22.387 12:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:22.387 12:03:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:22.387 12:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.646 12:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.646 12:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.646 12:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.646 12:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.646 12:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.646 12:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:22.646 { 00:20:22.646 "cntlid": 41, 00:20:22.646 "qid": 0, 00:20:22.646 "state": "enabled", 00:20:22.646 "thread": "nvmf_tgt_poll_group_000", 00:20:22.646 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:22.646 "listen_address": { 00:20:22.646 "trtype": "TCP", 00:20:22.646 "adrfam": "IPv4", 00:20:22.646 "traddr": "10.0.0.2", 00:20:22.646 "trsvcid": "4420" 00:20:22.646 }, 00:20:22.646 "peer_address": { 00:20:22.646 "trtype": "TCP", 00:20:22.646 "adrfam": "IPv4", 00:20:22.646 "traddr": "10.0.0.1", 00:20:22.646 "trsvcid": "39668" 00:20:22.646 }, 00:20:22.646 "auth": { 00:20:22.646 "state": "completed", 00:20:22.646 "digest": "sha256", 00:20:22.646 "dhgroup": "ffdhe8192" 00:20:22.646 } 00:20:22.646 } 00:20:22.646 ]' 00:20:22.646 12:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:22.646 12:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:20:22.646 12:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.646 12:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:22.646 12:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:22.646 12:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.646 12:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.646 12:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.905 12:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzBhNTQ2ZDZhMWI1MTVjN2FiOWRiZTc3OTk2MGRlM2M4NGM0NzIxMmRiNjJlMjcyIJ7+pQ==: --dhchap-ctrl-secret DHHC-1:03:YThmMWI5M2Q1NTA2ZGNlNmFiMmE4ZWFjMWQwODk0YmYyNzUzMDU4Yjc3Yjk5OGQwZjRjYWI4NWVhYWViYjFlOQ77tv4=: 00:20:22.905 12:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzBhNTQ2ZDZhMWI1MTVjN2FiOWRiZTc3OTk2MGRlM2M4NGM0NzIxMmRiNjJlMjcyIJ7+pQ==: --dhchap-ctrl-secret DHHC-1:03:YThmMWI5M2Q1NTA2ZGNlNmFiMmE4ZWFjMWQwODk0YmYyNzUzMDU4Yjc3Yjk5OGQwZjRjYWI4NWVhYWViYjFlOQ77tv4=: 00:20:23.473 12:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.473 12:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:23.473 12:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.473 12:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.473 12:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.473 12:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.473 12:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:23.473 12:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:23.732 12:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:23.732 12:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:23.732 12:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:23.732 12:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:23.732 12:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:23.732 12:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.732 12:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:20:23.732 12:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.732 12:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.732 12:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.732 12:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.732 12:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.732 12:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.300 00:20:24.300 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.300 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.300 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.300 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.300 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.300 12:03:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.300 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.300 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.300 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.300 { 00:20:24.300 "cntlid": 43, 00:20:24.300 "qid": 0, 00:20:24.300 "state": "enabled", 00:20:24.300 "thread": "nvmf_tgt_poll_group_000", 00:20:24.300 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:24.300 "listen_address": { 00:20:24.300 "trtype": "TCP", 00:20:24.300 "adrfam": "IPv4", 00:20:24.300 "traddr": "10.0.0.2", 00:20:24.300 "trsvcid": "4420" 00:20:24.300 }, 00:20:24.300 "peer_address": { 00:20:24.300 "trtype": "TCP", 00:20:24.300 "adrfam": "IPv4", 00:20:24.300 "traddr": "10.0.0.1", 00:20:24.300 "trsvcid": "39708" 00:20:24.300 }, 00:20:24.300 "auth": { 00:20:24.300 "state": "completed", 00:20:24.300 "digest": "sha256", 00:20:24.300 "dhgroup": "ffdhe8192" 00:20:24.300 } 00:20:24.300 } 00:20:24.300 ]' 00:20:24.300 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.300 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:24.300 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.559 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:24.559 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.559 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.560 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.560 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.818 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTVkZWRmNTk5NmJkNjFhMTAwMmQxNWVjNTc1YjI2NWFEY+3Z: --dhchap-ctrl-secret DHHC-1:02:NTJkY2U5NzdiNDZlNzhkNWQ1ZTgyZTIzN2U1ZjQyYjk5MDY0MTU1ZjE1ZTVmM2Exmzo2Ow==: 00:20:24.818 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTVkZWRmNTk5NmJkNjFhMTAwMmQxNWVjNTc1YjI2NWFEY+3Z: --dhchap-ctrl-secret DHHC-1:02:NTJkY2U5NzdiNDZlNzhkNWQ1ZTgyZTIzN2U1ZjQyYjk5MDY0MTU1ZjE1ZTVmM2Exmzo2Ow==: 00:20:25.386 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.386 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:25.386 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.386 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.386 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.386 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.386 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:25.386 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:25.386 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:25.386 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.386 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:25.386 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:25.386 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:25.386 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.386 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.386 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.386 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.386 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.386 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.386 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.386 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.954 00:20:25.954 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:25.954 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:25.954 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.213 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.213 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.213 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.213 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.213 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.213 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.213 { 00:20:26.213 "cntlid": 45, 00:20:26.213 "qid": 0, 00:20:26.213 "state": "enabled", 00:20:26.213 "thread": "nvmf_tgt_poll_group_000", 00:20:26.213 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:26.213 
"listen_address": { 00:20:26.213 "trtype": "TCP", 00:20:26.213 "adrfam": "IPv4", 00:20:26.213 "traddr": "10.0.0.2", 00:20:26.213 "trsvcid": "4420" 00:20:26.213 }, 00:20:26.213 "peer_address": { 00:20:26.213 "trtype": "TCP", 00:20:26.213 "adrfam": "IPv4", 00:20:26.213 "traddr": "10.0.0.1", 00:20:26.213 "trsvcid": "39742" 00:20:26.213 }, 00:20:26.213 "auth": { 00:20:26.213 "state": "completed", 00:20:26.213 "digest": "sha256", 00:20:26.213 "dhgroup": "ffdhe8192" 00:20:26.213 } 00:20:26.213 } 00:20:26.213 ]' 00:20:26.213 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.213 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:26.213 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.213 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:26.213 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.213 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.213 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.213 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.472 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjZkNTg1ZGVlZGIzZmM3NDc2N2RmMGE1MTllNDhjNzcwMDFkMjJmZjUyMjU3NDFkNmmLaw==: --dhchap-ctrl-secret DHHC-1:01:OGU5ZmM5Y2I5YTEwYjRjMWMwYWU2NjE5OWRlNGFkMGT1IRUP: 00:20:26.472 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjZkNTg1ZGVlZGIzZmM3NDc2N2RmMGE1MTllNDhjNzcwMDFkMjJmZjUyMjU3NDFkNmmLaw==: --dhchap-ctrl-secret DHHC-1:01:OGU5ZmM5Y2I5YTEwYjRjMWMwYWU2NjE5OWRlNGFkMGT1IRUP: 00:20:27.041 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.041 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.041 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:27.041 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.041 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.041 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.041 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.041 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:27.041 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:27.302 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:27.302 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.302 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:20:27.302 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:27.302 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:27.302 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.302 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:27.302 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.302 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.302 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.302 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:27.302 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:27.302 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:27.871 00:20:27.871 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:27.871 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:20:27.871 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.129 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.129 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.129 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.129 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.129 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.129 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:28.129 { 00:20:28.129 "cntlid": 47, 00:20:28.129 "qid": 0, 00:20:28.129 "state": "enabled", 00:20:28.129 "thread": "nvmf_tgt_poll_group_000", 00:20:28.129 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:28.129 "listen_address": { 00:20:28.129 "trtype": "TCP", 00:20:28.129 "adrfam": "IPv4", 00:20:28.129 "traddr": "10.0.0.2", 00:20:28.129 "trsvcid": "4420" 00:20:28.129 }, 00:20:28.129 "peer_address": { 00:20:28.129 "trtype": "TCP", 00:20:28.129 "adrfam": "IPv4", 00:20:28.129 "traddr": "10.0.0.1", 00:20:28.129 "trsvcid": "44048" 00:20:28.129 }, 00:20:28.129 "auth": { 00:20:28.129 "state": "completed", 00:20:28.129 "digest": "sha256", 00:20:28.129 "dhgroup": "ffdhe8192" 00:20:28.129 } 00:20:28.129 } 00:20:28.129 ]' 00:20:28.129 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.129 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:28.129 12:04:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.129 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:28.129 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.129 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.129 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.129 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.387 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTcwNjZlY2QwZGI0NDRjMzZmN2ZiZGEyMTJkMzZhNDExMjcwNmJhZWJlOGJhOTFmZDc1ZjE2YjQ0ZjcxYTk1ZKgNbOo=: 00:20:28.387 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MTcwNjZlY2QwZGI0NDRjMzZmN2ZiZGEyMTJkMzZhNDExMjcwNmJhZWJlOGJhOTFmZDc1ZjE2YjQ0ZjcxYTk1ZKgNbOo=: 00:20:28.953 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.953 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:28.953 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:28.953 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.953 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.953 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:28.953 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:28.953 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.953 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:28.953 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:29.212 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:29.212 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.212 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:29.212 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:29.212 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:29.212 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.212 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.212 
12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.212 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.212 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.212 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.212 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.212 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.212 00:20:29.470 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:29.470 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:29.470 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.470 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.470 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.470 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.470 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.470 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.470 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.470 { 00:20:29.470 "cntlid": 49, 00:20:29.470 "qid": 0, 00:20:29.470 "state": "enabled", 00:20:29.470 "thread": "nvmf_tgt_poll_group_000", 00:20:29.470 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:29.470 "listen_address": { 00:20:29.470 "trtype": "TCP", 00:20:29.470 "adrfam": "IPv4", 00:20:29.470 "traddr": "10.0.0.2", 00:20:29.470 "trsvcid": "4420" 00:20:29.470 }, 00:20:29.470 "peer_address": { 00:20:29.470 "trtype": "TCP", 00:20:29.470 "adrfam": "IPv4", 00:20:29.470 "traddr": "10.0.0.1", 00:20:29.470 "trsvcid": "44072" 00:20:29.470 }, 00:20:29.470 "auth": { 00:20:29.470 "state": "completed", 00:20:29.470 "digest": "sha384", 00:20:29.470 "dhgroup": "null" 00:20:29.470 } 00:20:29.470 } 00:20:29.470 ]' 00:20:29.470 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.728 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:29.728 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:29.728 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:29.728 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.728 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.728 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:20:29.728 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.987 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzBhNTQ2ZDZhMWI1MTVjN2FiOWRiZTc3OTk2MGRlM2M4NGM0NzIxMmRiNjJlMjcyIJ7+pQ==: --dhchap-ctrl-secret DHHC-1:03:YThmMWI5M2Q1NTA2ZGNlNmFiMmE4ZWFjMWQwODk0YmYyNzUzMDU4Yjc3Yjk5OGQwZjRjYWI4NWVhYWViYjFlOQ77tv4=: 00:20:29.987 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzBhNTQ2ZDZhMWI1MTVjN2FiOWRiZTc3OTk2MGRlM2M4NGM0NzIxMmRiNjJlMjcyIJ7+pQ==: --dhchap-ctrl-secret DHHC-1:03:YThmMWI5M2Q1NTA2ZGNlNmFiMmE4ZWFjMWQwODk0YmYyNzUzMDU4Yjc3Yjk5OGQwZjRjYWI4NWVhYWViYjFlOQ77tv4=: 00:20:30.554 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.554 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:30.554 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.554 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.554 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.554 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:30.554 12:04:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:30.554 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:30.554 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:30.554 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:30.554 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:30.554 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:30.554 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:30.554 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.554 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.554 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.554 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.811 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.812 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.812 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.812 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.812 00:20:30.812 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.812 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.812 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.069 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.069 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.069 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.069 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.069 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.069 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:31.069 { 00:20:31.069 "cntlid": 51, 00:20:31.069 "qid": 0, 00:20:31.069 "state": "enabled", 00:20:31.069 "thread": "nvmf_tgt_poll_group_000", 00:20:31.069 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:31.069 "listen_address": { 00:20:31.069 "trtype": "TCP", 00:20:31.069 "adrfam": "IPv4", 00:20:31.069 "traddr": "10.0.0.2", 00:20:31.069 "trsvcid": "4420" 00:20:31.069 }, 00:20:31.069 "peer_address": { 00:20:31.069 "trtype": "TCP", 00:20:31.069 "adrfam": "IPv4", 00:20:31.069 "traddr": "10.0.0.1", 00:20:31.069 "trsvcid": "44104" 00:20:31.069 }, 00:20:31.069 "auth": { 00:20:31.069 "state": "completed", 00:20:31.069 "digest": "sha384", 00:20:31.069 "dhgroup": "null" 00:20:31.069 } 00:20:31.069 } 00:20:31.069 ]' 00:20:31.069 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:31.069 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:31.069 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:31.327 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:31.327 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:31.327 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.327 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.327 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.585 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTVkZWRmNTk5NmJkNjFhMTAwMmQxNWVjNTc1YjI2NWFEY+3Z: --dhchap-ctrl-secret DHHC-1:02:NTJkY2U5NzdiNDZlNzhkNWQ1ZTgyZTIzN2U1ZjQyYjk5MDY0MTU1ZjE1ZTVmM2Exmzo2Ow==: 00:20:31.585 12:04:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTVkZWRmNTk5NmJkNjFhMTAwMmQxNWVjNTc1YjI2NWFEY+3Z: --dhchap-ctrl-secret DHHC-1:02:NTJkY2U5NzdiNDZlNzhkNWQ1ZTgyZTIzN2U1ZjQyYjk5MDY0MTU1ZjE1ZTVmM2Exmzo2Ow==: 00:20:32.152 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.152 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:32.152 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.152 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.152 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.152 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:32.152 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:32.152 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:32.152 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:32.152 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:20:32.152 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:32.152 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:32.152 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:32.152 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.152 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.152 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.152 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.152 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.152 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.152 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.152 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.410 00:20:32.410 12:04:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.410 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.410 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.743 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.743 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.743 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.743 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.743 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.743 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.743 { 00:20:32.743 "cntlid": 53, 00:20:32.743 "qid": 0, 00:20:32.743 "state": "enabled", 00:20:32.743 "thread": "nvmf_tgt_poll_group_000", 00:20:32.743 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:32.743 "listen_address": { 00:20:32.743 "trtype": "TCP", 00:20:32.743 "adrfam": "IPv4", 00:20:32.743 "traddr": "10.0.0.2", 00:20:32.743 "trsvcid": "4420" 00:20:32.743 }, 00:20:32.743 "peer_address": { 00:20:32.743 "trtype": "TCP", 00:20:32.743 "adrfam": "IPv4", 00:20:32.743 "traddr": "10.0.0.1", 00:20:32.743 "trsvcid": "44118" 00:20:32.743 }, 00:20:32.743 "auth": { 00:20:32.743 "state": "completed", 00:20:32.743 "digest": "sha384", 00:20:32.743 "dhgroup": "null" 00:20:32.743 } 00:20:32.743 } 00:20:32.743 ]' 00:20:32.743 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:20:32.743 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:32.743 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.743 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:32.743 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.743 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.743 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.743 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.083 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjZkNTg1ZGVlZGIzZmM3NDc2N2RmMGE1MTllNDhjNzcwMDFkMjJmZjUyMjU3NDFkNmmLaw==: --dhchap-ctrl-secret DHHC-1:01:OGU5ZmM5Y2I5YTEwYjRjMWMwYWU2NjE5OWRlNGFkMGT1IRUP: 00:20:33.083 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjZkNTg1ZGVlZGIzZmM3NDc2N2RmMGE1MTllNDhjNzcwMDFkMjJmZjUyMjU3NDFkNmmLaw==: --dhchap-ctrl-secret DHHC-1:01:OGU5ZmM5Y2I5YTEwYjRjMWMwYWU2NjE5OWRlNGFkMGT1IRUP: 00:20:33.655 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.655 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:33.655 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.655 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.655 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.655 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.655 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:33.655 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:33.913 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:33.913 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.913 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:33.913 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:33.913 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:33.913 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.913 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:33.913 
12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.913 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.913 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.913 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:33.914 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:33.914 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:34.172 00:20:34.172 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.172 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.172 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.172 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.172 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.172 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.172 12:04:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.172 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.172 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.173 { 00:20:34.173 "cntlid": 55, 00:20:34.173 "qid": 0, 00:20:34.173 "state": "enabled", 00:20:34.173 "thread": "nvmf_tgt_poll_group_000", 00:20:34.173 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:34.173 "listen_address": { 00:20:34.173 "trtype": "TCP", 00:20:34.173 "adrfam": "IPv4", 00:20:34.173 "traddr": "10.0.0.2", 00:20:34.173 "trsvcid": "4420" 00:20:34.173 }, 00:20:34.173 "peer_address": { 00:20:34.173 "trtype": "TCP", 00:20:34.173 "adrfam": "IPv4", 00:20:34.173 "traddr": "10.0.0.1", 00:20:34.173 "trsvcid": "44134" 00:20:34.173 }, 00:20:34.173 "auth": { 00:20:34.173 "state": "completed", 00:20:34.173 "digest": "sha384", 00:20:34.173 "dhgroup": "null" 00:20:34.173 } 00:20:34.173 } 00:20:34.173 ]' 00:20:34.173 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.432 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:34.432 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.432 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:34.432 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.432 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.432 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.432 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.690 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTcwNjZlY2QwZGI0NDRjMzZmN2ZiZGEyMTJkMzZhNDExMjcwNmJhZWJlOGJhOTFmZDc1ZjE2YjQ0ZjcxYTk1ZKgNbOo=: 00:20:34.690 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MTcwNjZlY2QwZGI0NDRjMzZmN2ZiZGEyMTJkMzZhNDExMjcwNmJhZWJlOGJhOTFmZDc1ZjE2YjQ0ZjcxYTk1ZKgNbOo=: 00:20:35.257 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.257 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:35.257 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.257 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.257 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.257 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:35.257 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:35.257 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:35.257 12:04:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:35.257 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:35.257 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.257 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:35.257 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:35.257 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:35.257 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.258 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.258 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.258 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.258 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.258 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.258 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.258 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.517 00:20:35.517 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:35.517 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.517 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.776 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.776 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.776 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.776 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.776 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.776 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:35.776 { 00:20:35.776 "cntlid": 57, 00:20:35.776 "qid": 0, 00:20:35.776 "state": "enabled", 00:20:35.776 "thread": "nvmf_tgt_poll_group_000", 00:20:35.776 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:35.776 "listen_address": { 00:20:35.776 "trtype": "TCP", 00:20:35.776 "adrfam": "IPv4", 00:20:35.776 "traddr": "10.0.0.2", 00:20:35.776 
"trsvcid": "4420" 00:20:35.776 }, 00:20:35.776 "peer_address": { 00:20:35.776 "trtype": "TCP", 00:20:35.776 "adrfam": "IPv4", 00:20:35.776 "traddr": "10.0.0.1", 00:20:35.776 "trsvcid": "44158" 00:20:35.776 }, 00:20:35.776 "auth": { 00:20:35.776 "state": "completed", 00:20:35.776 "digest": "sha384", 00:20:35.776 "dhgroup": "ffdhe2048" 00:20:35.776 } 00:20:35.776 } 00:20:35.776 ]' 00:20:35.776 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:35.776 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:35.776 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:35.776 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:35.776 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:36.036 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.036 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.036 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.036 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzBhNTQ2ZDZhMWI1MTVjN2FiOWRiZTc3OTk2MGRlM2M4NGM0NzIxMmRiNjJlMjcyIJ7+pQ==: --dhchap-ctrl-secret DHHC-1:03:YThmMWI5M2Q1NTA2ZGNlNmFiMmE4ZWFjMWQwODk0YmYyNzUzMDU4Yjc3Yjk5OGQwZjRjYWI4NWVhYWViYjFlOQ77tv4=: 00:20:36.036 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzBhNTQ2ZDZhMWI1MTVjN2FiOWRiZTc3OTk2MGRlM2M4NGM0NzIxMmRiNjJlMjcyIJ7+pQ==: --dhchap-ctrl-secret DHHC-1:03:YThmMWI5M2Q1NTA2ZGNlNmFiMmE4ZWFjMWQwODk0YmYyNzUzMDU4Yjc3Yjk5OGQwZjRjYWI4NWVhYWViYjFlOQ77tv4=: 00:20:36.603 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.603 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:36.603 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.604 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.604 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.604 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.604 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:36.604 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:36.863 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:36.863 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:36.863 12:04:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:36.863 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:36.863 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:36.863 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.863 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.863 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.863 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.863 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.863 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.863 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.863 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.121 00:20:37.121 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:37.121 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.121 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.380 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.380 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.380 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.380 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.380 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.380 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.380 { 00:20:37.380 "cntlid": 59, 00:20:37.380 "qid": 0, 00:20:37.380 "state": "enabled", 00:20:37.380 "thread": "nvmf_tgt_poll_group_000", 00:20:37.380 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:37.380 "listen_address": { 00:20:37.380 "trtype": "TCP", 00:20:37.380 "adrfam": "IPv4", 00:20:37.380 "traddr": "10.0.0.2", 00:20:37.380 "trsvcid": "4420" 00:20:37.380 }, 00:20:37.380 "peer_address": { 00:20:37.380 "trtype": "TCP", 00:20:37.380 "adrfam": "IPv4", 00:20:37.380 "traddr": "10.0.0.1", 00:20:37.380 "trsvcid": "48190" 00:20:37.380 }, 00:20:37.380 "auth": { 00:20:37.380 "state": "completed", 00:20:37.380 "digest": "sha384", 00:20:37.380 "dhgroup": "ffdhe2048" 00:20:37.380 } 00:20:37.380 } 00:20:37.380 ]' 00:20:37.380 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.380 12:04:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.380 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.380 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:37.380 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.380 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.380 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.380 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.639 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTVkZWRmNTk5NmJkNjFhMTAwMmQxNWVjNTc1YjI2NWFEY+3Z: --dhchap-ctrl-secret DHHC-1:02:NTJkY2U5NzdiNDZlNzhkNWQ1ZTgyZTIzN2U1ZjQyYjk5MDY0MTU1ZjE1ZTVmM2Exmzo2Ow==: 00:20:37.639 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTVkZWRmNTk5NmJkNjFhMTAwMmQxNWVjNTc1YjI2NWFEY+3Z: --dhchap-ctrl-secret DHHC-1:02:NTJkY2U5NzdiNDZlNzhkNWQ1ZTgyZTIzN2U1ZjQyYjk5MDY0MTU1ZjE1ZTVmM2Exmzo2Ow==: 00:20:38.206 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.206 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:38.206 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.206 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.206 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.206 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.207 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:38.207 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:38.465 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:38.465 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.465 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:38.465 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:38.465 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:38.465 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.465 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:38.465 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.465 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.465 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.465 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.465 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.465 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.724 00:20:38.724 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:38.724 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:38.724 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.982 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.982 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.982 12:04:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.982 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.982 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.982 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:38.982 { 00:20:38.982 "cntlid": 61, 00:20:38.982 "qid": 0, 00:20:38.982 "state": "enabled", 00:20:38.982 "thread": "nvmf_tgt_poll_group_000", 00:20:38.982 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:38.982 "listen_address": { 00:20:38.982 "trtype": "TCP", 00:20:38.982 "adrfam": "IPv4", 00:20:38.982 "traddr": "10.0.0.2", 00:20:38.982 "trsvcid": "4420" 00:20:38.982 }, 00:20:38.982 "peer_address": { 00:20:38.982 "trtype": "TCP", 00:20:38.982 "adrfam": "IPv4", 00:20:38.982 "traddr": "10.0.0.1", 00:20:38.982 "trsvcid": "48198" 00:20:38.982 }, 00:20:38.982 "auth": { 00:20:38.982 "state": "completed", 00:20:38.982 "digest": "sha384", 00:20:38.982 "dhgroup": "ffdhe2048" 00:20:38.982 } 00:20:38.982 } 00:20:38.982 ]' 00:20:38.982 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:38.982 12:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:38.982 12:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:38.982 12:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:38.982 12:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:38.982 12:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.982 12:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.982 12:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.241 12:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjZkNTg1ZGVlZGIzZmM3NDc2N2RmMGE1MTllNDhjNzcwMDFkMjJmZjUyMjU3NDFkNmmLaw==: --dhchap-ctrl-secret DHHC-1:01:OGU5ZmM5Y2I5YTEwYjRjMWMwYWU2NjE5OWRlNGFkMGT1IRUP: 00:20:39.241 12:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjZkNTg1ZGVlZGIzZmM3NDc2N2RmMGE1MTllNDhjNzcwMDFkMjJmZjUyMjU3NDFkNmmLaw==: --dhchap-ctrl-secret DHHC-1:01:OGU5ZmM5Y2I5YTEwYjRjMWMwYWU2NjE5OWRlNGFkMGT1IRUP: 00:20:39.808 12:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.808 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.808 12:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:39.808 12:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.808 12:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.808 12:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.808 12:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:39.808 12:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:39.808 12:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:40.067 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:40.067 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:40.067 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:40.067 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:40.067 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:40.067 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.067 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:40.067 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.067 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.067 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.067 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:40.067 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:40.067 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:40.326 00:20:40.326 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.326 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.326 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.326 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.326 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.326 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.326 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.585 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.585 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:40.585 { 00:20:40.585 "cntlid": 63, 00:20:40.585 "qid": 0, 00:20:40.585 "state": "enabled", 00:20:40.585 "thread": "nvmf_tgt_poll_group_000", 00:20:40.585 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:40.585 "listen_address": { 00:20:40.585 "trtype": "TCP", 00:20:40.585 "adrfam": 
"IPv4", 00:20:40.585 "traddr": "10.0.0.2", 00:20:40.585 "trsvcid": "4420" 00:20:40.585 }, 00:20:40.585 "peer_address": { 00:20:40.585 "trtype": "TCP", 00:20:40.585 "adrfam": "IPv4", 00:20:40.585 "traddr": "10.0.0.1", 00:20:40.585 "trsvcid": "48230" 00:20:40.585 }, 00:20:40.585 "auth": { 00:20:40.585 "state": "completed", 00:20:40.585 "digest": "sha384", 00:20:40.585 "dhgroup": "ffdhe2048" 00:20:40.585 } 00:20:40.585 } 00:20:40.585 ]' 00:20:40.585 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:40.585 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:40.585 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:40.585 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:40.585 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:40.585 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.585 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.585 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.844 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTcwNjZlY2QwZGI0NDRjMzZmN2ZiZGEyMTJkMzZhNDExMjcwNmJhZWJlOGJhOTFmZDc1ZjE2YjQ0ZjcxYTk1ZKgNbOo=: 00:20:40.844 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MTcwNjZlY2QwZGI0NDRjMzZmN2ZiZGEyMTJkMzZhNDExMjcwNmJhZWJlOGJhOTFmZDc1ZjE2YjQ0ZjcxYTk1ZKgNbOo=: 00:20:41.410 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.411 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:41.411 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.411 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.411 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.411 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:41.411 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.411 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:41.411 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:41.670 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:41.670 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:41.670 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:41.670 
12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:41.670 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:41.670 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.670 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.670 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.670 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.670 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.670 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.670 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.670 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.929 00:20:41.929 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:41.929 12:04:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:41.929 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.929 12:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.929 12:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.929 12:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.929 12:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.929 12:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.188 12:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.188 { 00:20:42.188 "cntlid": 65, 00:20:42.188 "qid": 0, 00:20:42.188 "state": "enabled", 00:20:42.188 "thread": "nvmf_tgt_poll_group_000", 00:20:42.188 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:42.188 "listen_address": { 00:20:42.188 "trtype": "TCP", 00:20:42.188 "adrfam": "IPv4", 00:20:42.188 "traddr": "10.0.0.2", 00:20:42.188 "trsvcid": "4420" 00:20:42.188 }, 00:20:42.188 "peer_address": { 00:20:42.188 "trtype": "TCP", 00:20:42.188 "adrfam": "IPv4", 00:20:42.188 "traddr": "10.0.0.1", 00:20:42.188 "trsvcid": "48258" 00:20:42.188 }, 00:20:42.188 "auth": { 00:20:42.188 "state": "completed", 00:20:42.188 "digest": "sha384", 00:20:42.188 "dhgroup": "ffdhe3072" 00:20:42.188 } 00:20:42.188 } 00:20:42.188 ]' 00:20:42.188 12:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.188 12:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:20:42.188 12:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.188 12:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:42.188 12:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.188 12:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.188 12:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.188 12:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.447 12:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzBhNTQ2ZDZhMWI1MTVjN2FiOWRiZTc3OTk2MGRlM2M4NGM0NzIxMmRiNjJlMjcyIJ7+pQ==: --dhchap-ctrl-secret DHHC-1:03:YThmMWI5M2Q1NTA2ZGNlNmFiMmE4ZWFjMWQwODk0YmYyNzUzMDU4Yjc3Yjk5OGQwZjRjYWI4NWVhYWViYjFlOQ77tv4=: 00:20:42.448 12:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzBhNTQ2ZDZhMWI1MTVjN2FiOWRiZTc3OTk2MGRlM2M4NGM0NzIxMmRiNjJlMjcyIJ7+pQ==: --dhchap-ctrl-secret DHHC-1:03:YThmMWI5M2Q1NTA2ZGNlNmFiMmE4ZWFjMWQwODk0YmYyNzUzMDU4Yjc3Yjk5OGQwZjRjYWI4NWVhYWViYjFlOQ77tv4=: 00:20:43.014 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.014 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:43.014 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.014 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.014 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.014 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.014 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:43.014 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:43.273 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:43.273 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:43.273 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:43.273 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:43.273 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:43.273 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.273 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:20:43.273 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.273 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.273 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.273 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.273 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.273 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.532 00:20:43.532 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:43.532 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:43.532 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.532 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.532 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.532 12:04:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.532 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.532 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.532 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:43.532 { 00:20:43.532 "cntlid": 67, 00:20:43.532 "qid": 0, 00:20:43.532 "state": "enabled", 00:20:43.532 "thread": "nvmf_tgt_poll_group_000", 00:20:43.532 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:43.532 "listen_address": { 00:20:43.532 "trtype": "TCP", 00:20:43.532 "adrfam": "IPv4", 00:20:43.532 "traddr": "10.0.0.2", 00:20:43.532 "trsvcid": "4420" 00:20:43.532 }, 00:20:43.532 "peer_address": { 00:20:43.532 "trtype": "TCP", 00:20:43.532 "adrfam": "IPv4", 00:20:43.532 "traddr": "10.0.0.1", 00:20:43.532 "trsvcid": "48288" 00:20:43.532 }, 00:20:43.532 "auth": { 00:20:43.532 "state": "completed", 00:20:43.532 "digest": "sha384", 00:20:43.532 "dhgroup": "ffdhe3072" 00:20:43.532 } 00:20:43.532 } 00:20:43.532 ]' 00:20:43.532 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:43.791 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:43.791 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:43.791 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:43.791 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:43.791 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.791 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.791 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.050 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTVkZWRmNTk5NmJkNjFhMTAwMmQxNWVjNTc1YjI2NWFEY+3Z: --dhchap-ctrl-secret DHHC-1:02:NTJkY2U5NzdiNDZlNzhkNWQ1ZTgyZTIzN2U1ZjQyYjk5MDY0MTU1ZjE1ZTVmM2Exmzo2Ow==: 00:20:44.050 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTVkZWRmNTk5NmJkNjFhMTAwMmQxNWVjNTc1YjI2NWFEY+3Z: --dhchap-ctrl-secret DHHC-1:02:NTJkY2U5NzdiNDZlNzhkNWQ1ZTgyZTIzN2U1ZjQyYjk5MDY0MTU1ZjE1ZTVmM2Exmzo2Ow==: 00:20:44.618 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.618 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:44.618 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.618 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.618 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.618 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:44.618 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:44.618 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:44.876 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:44.876 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:44.876 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:44.876 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:44.876 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:44.876 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.876 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.876 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.876 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.876 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.876 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.876 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.876 12:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.134 00:20:45.134 12:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:45.134 12:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.134 12:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:45.134 12:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.134 12:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.134 12:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.134 12:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.134 12:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.134 12:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.134 { 00:20:45.134 "cntlid": 69, 00:20:45.134 "qid": 0, 00:20:45.134 "state": "enabled", 00:20:45.134 "thread": "nvmf_tgt_poll_group_000", 00:20:45.134 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:45.134 
"listen_address": { 00:20:45.134 "trtype": "TCP", 00:20:45.134 "adrfam": "IPv4", 00:20:45.134 "traddr": "10.0.0.2", 00:20:45.134 "trsvcid": "4420" 00:20:45.134 }, 00:20:45.134 "peer_address": { 00:20:45.134 "trtype": "TCP", 00:20:45.134 "adrfam": "IPv4", 00:20:45.134 "traddr": "10.0.0.1", 00:20:45.134 "trsvcid": "48308" 00:20:45.134 }, 00:20:45.134 "auth": { 00:20:45.134 "state": "completed", 00:20:45.134 "digest": "sha384", 00:20:45.134 "dhgroup": "ffdhe3072" 00:20:45.134 } 00:20:45.134 } 00:20:45.134 ]' 00:20:45.134 12:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.405 12:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:45.405 12:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.405 12:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:45.405 12:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:45.405 12:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.405 12:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.405 12:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.662 12:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjZkNTg1ZGVlZGIzZmM3NDc2N2RmMGE1MTllNDhjNzcwMDFkMjJmZjUyMjU3NDFkNmmLaw==: --dhchap-ctrl-secret DHHC-1:01:OGU5ZmM5Y2I5YTEwYjRjMWMwYWU2NjE5OWRlNGFkMGT1IRUP: 00:20:45.662 12:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjZkNTg1ZGVlZGIzZmM3NDc2N2RmMGE1MTllNDhjNzcwMDFkMjJmZjUyMjU3NDFkNmmLaw==: --dhchap-ctrl-secret DHHC-1:01:OGU5ZmM5Y2I5YTEwYjRjMWMwYWU2NjE5OWRlNGFkMGT1IRUP: 00:20:46.228 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.228 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:46.228 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.228 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.228 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.228 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:46.228 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:46.228 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:46.228 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:46.228 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:46.228 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:20:46.228 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:46.228 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:46.228 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.228 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:46.228 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.228 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.228 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.228 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:46.228 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:46.229 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:46.486 00:20:46.486 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:46.486 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:20:46.486 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.744 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.744 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.744 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.744 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.744 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.744 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:46.744 { 00:20:46.744 "cntlid": 71, 00:20:46.744 "qid": 0, 00:20:46.744 "state": "enabled", 00:20:46.744 "thread": "nvmf_tgt_poll_group_000", 00:20:46.744 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:46.744 "listen_address": { 00:20:46.744 "trtype": "TCP", 00:20:46.744 "adrfam": "IPv4", 00:20:46.744 "traddr": "10.0.0.2", 00:20:46.744 "trsvcid": "4420" 00:20:46.744 }, 00:20:46.744 "peer_address": { 00:20:46.744 "trtype": "TCP", 00:20:46.744 "adrfam": "IPv4", 00:20:46.744 "traddr": "10.0.0.1", 00:20:46.744 "trsvcid": "33410" 00:20:46.744 }, 00:20:46.744 "auth": { 00:20:46.744 "state": "completed", 00:20:46.744 "digest": "sha384", 00:20:46.744 "dhgroup": "ffdhe3072" 00:20:46.744 } 00:20:46.744 } 00:20:46.744 ]' 00:20:46.744 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:46.744 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:46.744 12:04:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:47.001 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:47.001 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:47.001 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.001 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.001 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.257 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTcwNjZlY2QwZGI0NDRjMzZmN2ZiZGEyMTJkMzZhNDExMjcwNmJhZWJlOGJhOTFmZDc1ZjE2YjQ0ZjcxYTk1ZKgNbOo=: 00:20:47.257 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MTcwNjZlY2QwZGI0NDRjMzZmN2ZiZGEyMTJkMzZhNDExMjcwNmJhZWJlOGJhOTFmZDc1ZjE2YjQ0ZjcxYTk1ZKgNbOo=: 00:20:47.828 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.828 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:47.828 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:47.828 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.828 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.828 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:47.828 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:47.828 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:47.828 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:47.828 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:47.828 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:47.828 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:47.828 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:47.828 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:47.828 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.828 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.828 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:47.828 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.828 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.828 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.828 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.828 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.086 00:20:48.344 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:48.344 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:48.344 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.344 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.344 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.344 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.344 12:04:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.344 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.344 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:48.344 { 00:20:48.344 "cntlid": 73, 00:20:48.344 "qid": 0, 00:20:48.344 "state": "enabled", 00:20:48.344 "thread": "nvmf_tgt_poll_group_000", 00:20:48.344 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:48.344 "listen_address": { 00:20:48.344 "trtype": "TCP", 00:20:48.344 "adrfam": "IPv4", 00:20:48.344 "traddr": "10.0.0.2", 00:20:48.344 "trsvcid": "4420" 00:20:48.344 }, 00:20:48.344 "peer_address": { 00:20:48.344 "trtype": "TCP", 00:20:48.344 "adrfam": "IPv4", 00:20:48.344 "traddr": "10.0.0.1", 00:20:48.344 "trsvcid": "33442" 00:20:48.344 }, 00:20:48.344 "auth": { 00:20:48.344 "state": "completed", 00:20:48.344 "digest": "sha384", 00:20:48.344 "dhgroup": "ffdhe4096" 00:20:48.344 } 00:20:48.344 } 00:20:48.344 ]' 00:20:48.344 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:48.602 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:48.602 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:48.602 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:48.602 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:48.602 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.603 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.603 12:04:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.861 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzBhNTQ2ZDZhMWI1MTVjN2FiOWRiZTc3OTk2MGRlM2M4NGM0NzIxMmRiNjJlMjcyIJ7+pQ==: --dhchap-ctrl-secret DHHC-1:03:YThmMWI5M2Q1NTA2ZGNlNmFiMmE4ZWFjMWQwODk0YmYyNzUzMDU4Yjc3Yjk5OGQwZjRjYWI4NWVhYWViYjFlOQ77tv4=: 00:20:48.861 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzBhNTQ2ZDZhMWI1MTVjN2FiOWRiZTc3OTk2MGRlM2M4NGM0NzIxMmRiNjJlMjcyIJ7+pQ==: --dhchap-ctrl-secret DHHC-1:03:YThmMWI5M2Q1NTA2ZGNlNmFiMmE4ZWFjMWQwODk0YmYyNzUzMDU4Yjc3Yjk5OGQwZjRjYWI4NWVhYWViYjFlOQ77tv4=: 00:20:49.427 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.427 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.427 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:49.427 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.427 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.427 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.427 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:49.427 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:49.427 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:49.427 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:49.427 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:49.427 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:49.427 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:49.427 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:49.427 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.427 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.427 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.427 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.427 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.427 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.427 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.427 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.686 00:20:49.944 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:49.944 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:49.944 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.945 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.945 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.945 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.945 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.945 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.945 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:49.945 { 00:20:49.945 "cntlid": 75, 00:20:49.945 "qid": 0, 00:20:49.945 "state": "enabled", 00:20:49.945 "thread": "nvmf_tgt_poll_group_000", 00:20:49.945 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:49.945 
"listen_address": { 00:20:49.945 "trtype": "TCP", 00:20:49.945 "adrfam": "IPv4", 00:20:49.945 "traddr": "10.0.0.2", 00:20:49.945 "trsvcid": "4420" 00:20:49.945 }, 00:20:49.945 "peer_address": { 00:20:49.945 "trtype": "TCP", 00:20:49.945 "adrfam": "IPv4", 00:20:49.945 "traddr": "10.0.0.1", 00:20:49.945 "trsvcid": "33482" 00:20:49.945 }, 00:20:49.945 "auth": { 00:20:49.945 "state": "completed", 00:20:49.945 "digest": "sha384", 00:20:49.945 "dhgroup": "ffdhe4096" 00:20:49.945 } 00:20:49.945 } 00:20:49.945 ]' 00:20:49.945 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:49.945 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:49.945 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:50.204 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:50.204 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.204 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.204 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.204 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.462 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTVkZWRmNTk5NmJkNjFhMTAwMmQxNWVjNTc1YjI2NWFEY+3Z: --dhchap-ctrl-secret DHHC-1:02:NTJkY2U5NzdiNDZlNzhkNWQ1ZTgyZTIzN2U1ZjQyYjk5MDY0MTU1ZjE1ZTVmM2Exmzo2Ow==: 00:20:50.462 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTVkZWRmNTk5NmJkNjFhMTAwMmQxNWVjNTc1YjI2NWFEY+3Z: --dhchap-ctrl-secret DHHC-1:02:NTJkY2U5NzdiNDZlNzhkNWQ1ZTgyZTIzN2U1ZjQyYjk5MDY0MTU1ZjE1ZTVmM2Exmzo2Ow==: 00:20:51.037 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.037 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:51.037 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.037 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.037 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.037 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:51.037 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:51.038 12:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:51.038 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:20:51.038 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:51.038 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:20:51.038 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:51.038 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:51.038 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.038 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.038 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.038 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.038 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.038 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.038 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.038 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.296 00:20:51.296 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:20:51.296 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:51.296 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.554 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.554 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.554 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.554 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.554 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.554 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:51.554 { 00:20:51.554 "cntlid": 77, 00:20:51.554 "qid": 0, 00:20:51.554 "state": "enabled", 00:20:51.554 "thread": "nvmf_tgt_poll_group_000", 00:20:51.554 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:51.554 "listen_address": { 00:20:51.554 "trtype": "TCP", 00:20:51.554 "adrfam": "IPv4", 00:20:51.554 "traddr": "10.0.0.2", 00:20:51.554 "trsvcid": "4420" 00:20:51.554 }, 00:20:51.554 "peer_address": { 00:20:51.554 "trtype": "TCP", 00:20:51.554 "adrfam": "IPv4", 00:20:51.554 "traddr": "10.0.0.1", 00:20:51.554 "trsvcid": "33504" 00:20:51.554 }, 00:20:51.554 "auth": { 00:20:51.554 "state": "completed", 00:20:51.554 "digest": "sha384", 00:20:51.554 "dhgroup": "ffdhe4096" 00:20:51.554 } 00:20:51.554 } 00:20:51.554 ]' 00:20:51.554 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:51.554 12:04:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:51.554 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:51.554 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:51.554 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:51.812 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.812 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.812 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.812 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjZkNTg1ZGVlZGIzZmM3NDc2N2RmMGE1MTllNDhjNzcwMDFkMjJmZjUyMjU3NDFkNmmLaw==: --dhchap-ctrl-secret DHHC-1:01:OGU5ZmM5Y2I5YTEwYjRjMWMwYWU2NjE5OWRlNGFkMGT1IRUP: 00:20:51.812 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjZkNTg1ZGVlZGIzZmM3NDc2N2RmMGE1MTllNDhjNzcwMDFkMjJmZjUyMjU3NDFkNmmLaw==: --dhchap-ctrl-secret DHHC-1:01:OGU5ZmM5Y2I5YTEwYjRjMWMwYWU2NjE5OWRlNGFkMGT1IRUP: 00:20:52.378 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.378 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:52.378 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.378 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.378 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.378 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:52.378 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:52.378 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:52.637 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:52.637 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.637 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:52.637 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:52.637 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:52.637 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.637 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:52.637 12:04:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.637 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.637 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.637 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:52.637 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:52.637 12:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:52.896 00:20:52.896 12:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.896 12:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.896 12:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.158 12:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.158 12:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.159 12:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.159 12:04:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.159 12:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.159 12:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:53.159 { 00:20:53.159 "cntlid": 79, 00:20:53.159 "qid": 0, 00:20:53.159 "state": "enabled", 00:20:53.159 "thread": "nvmf_tgt_poll_group_000", 00:20:53.159 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:53.159 "listen_address": { 00:20:53.159 "trtype": "TCP", 00:20:53.159 "adrfam": "IPv4", 00:20:53.159 "traddr": "10.0.0.2", 00:20:53.159 "trsvcid": "4420" 00:20:53.159 }, 00:20:53.159 "peer_address": { 00:20:53.159 "trtype": "TCP", 00:20:53.159 "adrfam": "IPv4", 00:20:53.159 "traddr": "10.0.0.1", 00:20:53.159 "trsvcid": "33526" 00:20:53.159 }, 00:20:53.159 "auth": { 00:20:53.159 "state": "completed", 00:20:53.159 "digest": "sha384", 00:20:53.159 "dhgroup": "ffdhe4096" 00:20:53.159 } 00:20:53.159 } 00:20:53.159 ]' 00:20:53.159 12:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:53.159 12:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:53.159 12:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:53.159 12:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:53.159 12:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:53.418 12:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.418 12:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.418 12:04:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.418 12:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTcwNjZlY2QwZGI0NDRjMzZmN2ZiZGEyMTJkMzZhNDExMjcwNmJhZWJlOGJhOTFmZDc1ZjE2YjQ0ZjcxYTk1ZKgNbOo=: 00:20:53.418 12:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MTcwNjZlY2QwZGI0NDRjMzZmN2ZiZGEyMTJkMzZhNDExMjcwNmJhZWJlOGJhOTFmZDc1ZjE2YjQ0ZjcxYTk1ZKgNbOo=: 00:20:53.985 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.985 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:53.985 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.985 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.985 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.985 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:53.985 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:53.985 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:20:53.985 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:54.244 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:54.244 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:54.244 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:54.244 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:54.244 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:54.244 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.244 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.244 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.244 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.244 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.244 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.244 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.244 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.503 00:20:54.503 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:54.503 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.503 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.762 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.763 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.763 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.763 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.763 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.763 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:54.763 { 00:20:54.763 "cntlid": 81, 00:20:54.763 "qid": 0, 00:20:54.763 "state": "enabled", 00:20:54.763 "thread": "nvmf_tgt_poll_group_000", 00:20:54.763 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:54.763 "listen_address": { 
00:20:54.763 "trtype": "TCP", 00:20:54.763 "adrfam": "IPv4", 00:20:54.763 "traddr": "10.0.0.2", 00:20:54.763 "trsvcid": "4420" 00:20:54.763 }, 00:20:54.763 "peer_address": { 00:20:54.763 "trtype": "TCP", 00:20:54.763 "adrfam": "IPv4", 00:20:54.763 "traddr": "10.0.0.1", 00:20:54.763 "trsvcid": "33544" 00:20:54.763 }, 00:20:54.763 "auth": { 00:20:54.763 "state": "completed", 00:20:54.763 "digest": "sha384", 00:20:54.763 "dhgroup": "ffdhe6144" 00:20:54.763 } 00:20:54.763 } 00:20:54.763 ]' 00:20:54.763 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:54.763 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:54.763 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:55.022 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:55.022 12:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:55.022 12:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.022 12:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.022 12:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.022 12:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzBhNTQ2ZDZhMWI1MTVjN2FiOWRiZTc3OTk2MGRlM2M4NGM0NzIxMmRiNjJlMjcyIJ7+pQ==: --dhchap-ctrl-secret DHHC-1:03:YThmMWI5M2Q1NTA2ZGNlNmFiMmE4ZWFjMWQwODk0YmYyNzUzMDU4Yjc3Yjk5OGQwZjRjYWI4NWVhYWViYjFlOQ77tv4=: 00:20:55.022 12:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzBhNTQ2ZDZhMWI1MTVjN2FiOWRiZTc3OTk2MGRlM2M4NGM0NzIxMmRiNjJlMjcyIJ7+pQ==: --dhchap-ctrl-secret DHHC-1:03:YThmMWI5M2Q1NTA2ZGNlNmFiMmE4ZWFjMWQwODk0YmYyNzUzMDU4Yjc3Yjk5OGQwZjRjYWI4NWVhYWViYjFlOQ77tv4=:
00:20:55.590 12:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:55.590 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:55.590 12:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:20:55.590 12:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:55.590 12:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:55.849 12:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:55.849 12:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:55.849 12:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:20:55.849 12:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:20:55.849 12:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1
00:20:55.849 12:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:55.849 12:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:55.849 12:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:20:55.849 12:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:55.849 12:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:55.849 12:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:55.849 12:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:55.849 12:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:55.849 12:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:55.849 12:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:55.849 12:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:55.849 12:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:56.417
00:20:56.417 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:56.417 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:56.417 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:56.417 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:56.417 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:56.417 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:56.417 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:56.417 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:56.417 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:56.417 {
00:20:56.417 "cntlid": 83,
00:20:56.417 "qid": 0,
00:20:56.417 "state": "enabled",
00:20:56.417 "thread": "nvmf_tgt_poll_group_000",
00:20:56.417 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:20:56.417 "listen_address": {
00:20:56.417 "trtype": "TCP",
00:20:56.417 "adrfam": "IPv4",
00:20:56.417 "traddr": "10.0.0.2",
00:20:56.417 "trsvcid": "4420"
00:20:56.417 },
00:20:56.417 "peer_address": {
00:20:56.417 "trtype": "TCP",
00:20:56.417 "adrfam": "IPv4",
00:20:56.417 "traddr": "10.0.0.1",
00:20:56.417 "trsvcid": "45858"
00:20:56.417 },
00:20:56.417 "auth": {
00:20:56.417 "state": "completed",
00:20:56.417 "digest": "sha384",
00:20:56.417 "dhgroup": "ffdhe6144"
00:20:56.417 }
00:20:56.417 }
00:20:56.417 ]'
00:20:56.417 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:56.675 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:56.675 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:56.675 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:20:56.675 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:56.675 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:56.675 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:56.675 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:56.934 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTVkZWRmNTk5NmJkNjFhMTAwMmQxNWVjNTc1YjI2NWFEY+3Z: --dhchap-ctrl-secret DHHC-1:02:NTJkY2U5NzdiNDZlNzhkNWQ1ZTgyZTIzN2U1ZjQyYjk5MDY0MTU1ZjE1ZTVmM2Exmzo2Ow==:
00:20:56.934 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTVkZWRmNTk5NmJkNjFhMTAwMmQxNWVjNTc1YjI2NWFEY+3Z: --dhchap-ctrl-secret DHHC-1:02:NTJkY2U5NzdiNDZlNzhkNWQ1ZTgyZTIzN2U1ZjQyYjk5MDY0MTU1ZjE1ZTVmM2Exmzo2Ow==:
00:20:57.502 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:57.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:57.502 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:20:57.502 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:57.502 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:57.502 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:57.502 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:57.502 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:20:57.502 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:20:57.502 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2
00:20:57.502 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:57.502 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:57.502 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:20:57.502 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:20:57.502 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:57.502 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:57.502 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:57.502 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:57.502 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:57.502 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:57.502 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:57.502 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:58.070
00:20:58.070 12:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:58.070 12:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:58.070 12:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:58.070 12:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:58.070 12:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:58.070 12:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:58.070 12:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:58.070 12:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:58.070 12:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:58.070 {
00:20:58.070 "cntlid": 85,
00:20:58.070 "qid": 0,
00:20:58.070 "state": "enabled",
00:20:58.070 "thread": "nvmf_tgt_poll_group_000",
00:20:58.070 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:20:58.070 "listen_address": {
00:20:58.070 "trtype": "TCP",
00:20:58.070 "adrfam": "IPv4",
00:20:58.070 "traddr": "10.0.0.2",
00:20:58.070 "trsvcid": "4420"
00:20:58.070 },
00:20:58.070 "peer_address": {
00:20:58.070 "trtype": "TCP",
00:20:58.070 "adrfam": "IPv4",
00:20:58.070 "traddr": "10.0.0.1",
00:20:58.070 "trsvcid": "45884"
00:20:58.070 },
00:20:58.070 "auth": {
00:20:58.070 "state": "completed",
00:20:58.070 "digest": "sha384",
00:20:58.070 "dhgroup": "ffdhe6144"
00:20:58.070 }
00:20:58.070 }
00:20:58.070 ]'
00:20:58.070 12:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:58.070 12:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:58.070 12:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:58.329 12:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:20:58.329 12:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:58.329 12:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:58.329 12:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:58.329 12:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:58.588 12:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjZkNTg1ZGVlZGIzZmM3NDc2N2RmMGE1MTllNDhjNzcwMDFkMjJmZjUyMjU3NDFkNmmLaw==: --dhchap-ctrl-secret DHHC-1:01:OGU5ZmM5Y2I5YTEwYjRjMWMwYWU2NjE5OWRlNGFkMGT1IRUP:
00:20:58.588 12:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjZkNTg1ZGVlZGIzZmM3NDc2N2RmMGE1MTllNDhjNzcwMDFkMjJmZjUyMjU3NDFkNmmLaw==: --dhchap-ctrl-secret DHHC-1:01:OGU5ZmM5Y2I5YTEwYjRjMWMwYWU2NjE5OWRlNGFkMGT1IRUP:
00:20:59.156 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:59.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:59.156 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:20:59.156 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:59.156 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:59.156 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:59.156 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:59.156 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:20:59.156 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:20:59.156 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3
00:20:59.156 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:59.156 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:59.156 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:20:59.156 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:20:59.156 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:59.156 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3
00:20:59.156 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:59.156 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:59.156 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:59.156 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:59.156 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:59.156 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:59.723
00:20:59.723 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:59.723 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:59.723 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:59.723 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:59.723 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:59.723 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:59.723 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:59.724 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:59.724 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:59.724 {
00:20:59.724 "cntlid": 87,
00:20:59.724 "qid": 0,
00:20:59.724 "state": "enabled",
00:20:59.724 "thread": "nvmf_tgt_poll_group_000",
00:20:59.724 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:20:59.724 "listen_address": {
00:20:59.724 "trtype": "TCP",
00:20:59.724 "adrfam": "IPv4",
00:20:59.724 "traddr": "10.0.0.2",
00:20:59.724 "trsvcid": "4420"
00:20:59.724 },
00:20:59.724 "peer_address": {
00:20:59.724 "trtype": "TCP",
00:20:59.724 "adrfam": "IPv4",
00:20:59.724 "traddr": "10.0.0.1",
00:20:59.724 "trsvcid": "45914"
00:20:59.724 },
00:20:59.724 "auth": {
00:20:59.724 "state": "completed",
00:20:59.724 "digest": "sha384",
00:20:59.724 "dhgroup": "ffdhe6144"
00:20:59.724 }
00:20:59.724 }
00:20:59.724 ]'
00:20:59.724 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:59.724 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:59.724 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:59.982 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:20:59.982 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:59.982 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:59.982 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:59.982 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:00.241 12:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTcwNjZlY2QwZGI0NDRjMzZmN2ZiZGEyMTJkMzZhNDExMjcwNmJhZWJlOGJhOTFmZDc1ZjE2YjQ0ZjcxYTk1ZKgNbOo=:
00:21:00.241 12:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MTcwNjZlY2QwZGI0NDRjMzZmN2ZiZGEyMTJkMzZhNDExMjcwNmJhZWJlOGJhOTFmZDc1ZjE2YjQ0ZjcxYTk1ZKgNbOo=:
00:21:00.808 12:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:00.808 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:00.808 12:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:21:00.808 12:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:00.808 12:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:00.808 12:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:00.808 12:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:21:00.808 12:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:00.808 12:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:21:00.808 12:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:21:00.808 12:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0
00:21:00.808 12:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:00.808 12:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:00.808 12:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:21:00.808 12:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:21:00.809 12:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:00.809 12:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:00.809 12:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:00.809 12:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:00.809 12:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:00.809 12:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:00.809 12:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:00.809 12:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:01.388
00:21:01.388 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:01.388 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:01.388 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:01.646 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:01.647 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:01.647 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:01.647 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:01.647 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:01.647 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:01.647 {
00:21:01.647 "cntlid": 89,
00:21:01.647 "qid": 0,
00:21:01.647 "state": "enabled",
00:21:01.647 "thread": "nvmf_tgt_poll_group_000",
00:21:01.647 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:21:01.647 "listen_address": {
00:21:01.647 "trtype": "TCP",
00:21:01.647 "adrfam": "IPv4",
00:21:01.647 "traddr": "10.0.0.2",
00:21:01.647 "trsvcid": "4420"
00:21:01.647 },
00:21:01.647 "peer_address": {
00:21:01.647 "trtype": "TCP",
00:21:01.647 "adrfam": "IPv4",
00:21:01.647 "traddr": "10.0.0.1",
00:21:01.647 "trsvcid": "45934"
00:21:01.647 },
00:21:01.647 "auth": {
00:21:01.647 "state": "completed",
00:21:01.647 "digest": "sha384",
00:21:01.647 "dhgroup": "ffdhe8192"
00:21:01.647 }
00:21:01.647 }
00:21:01.647 ]'
00:21:01.647 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:01.647 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:01.647 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:01.647 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:21:01.647 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:01.647 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:01.647 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:01.647 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:01.905 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzBhNTQ2ZDZhMWI1MTVjN2FiOWRiZTc3OTk2MGRlM2M4NGM0NzIxMmRiNjJlMjcyIJ7+pQ==: --dhchap-ctrl-secret DHHC-1:03:YThmMWI5M2Q1NTA2ZGNlNmFiMmE4ZWFjMWQwODk0YmYyNzUzMDU4Yjc3Yjk5OGQwZjRjYWI4NWVhYWViYjFlOQ77tv4=:
00:21:01.905 12:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzBhNTQ2ZDZhMWI1MTVjN2FiOWRiZTc3OTk2MGRlM2M4NGM0NzIxMmRiNjJlMjcyIJ7+pQ==: --dhchap-ctrl-secret DHHC-1:03:YThmMWI5M2Q1NTA2ZGNlNmFiMmE4ZWFjMWQwODk0YmYyNzUzMDU4Yjc3Yjk5OGQwZjRjYWI4NWVhYWViYjFlOQ77tv4=:
00:21:02.474 12:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:02.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:02.474 12:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:21:02.474 12:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:02.474 12:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:02.474 12:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:02.474 12:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:02.474 12:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:21:02.474 12:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:21:02.733 12:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1
00:21:02.733 12:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:02.733 12:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:02.733 12:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:21:02.733 12:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:21:02.733 12:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:02.733 12:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:02.733 12:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:02.733 12:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:02.733 12:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:02.733 12:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:02.733 12:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:02.733 12:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:03.302
00:21:03.302 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:03.302 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:03.302 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:03.302 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:03.302 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:03.302 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:03.302 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:03.302 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:03.302 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:03.302 {
00:21:03.302 "cntlid": 91,
00:21:03.302 "qid": 0,
00:21:03.302 "state": "enabled",
00:21:03.302 "thread": "nvmf_tgt_poll_group_000",
00:21:03.302 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:21:03.302 "listen_address": {
00:21:03.302 "trtype": "TCP",
00:21:03.302 "adrfam": "IPv4",
00:21:03.302 "traddr": "10.0.0.2",
00:21:03.302 "trsvcid": "4420"
00:21:03.302 },
00:21:03.302 "peer_address": {
00:21:03.302 "trtype": "TCP",
00:21:03.302 "adrfam": "IPv4",
00:21:03.302 "traddr": "10.0.0.1",
00:21:03.302 "trsvcid": "45958"
00:21:03.302 },
00:21:03.302 "auth": {
00:21:03.302 "state": "completed",
00:21:03.302 "digest": "sha384",
00:21:03.302 "dhgroup": "ffdhe8192"
00:21:03.302 }
00:21:03.302 }
00:21:03.302 ]'
00:21:03.302 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:03.302 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:03.302 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:03.560 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:21:03.560 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:03.560 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:03.560 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:03.560 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:03.818 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTVkZWRmNTk5NmJkNjFhMTAwMmQxNWVjNTc1YjI2NWFEY+3Z: --dhchap-ctrl-secret DHHC-1:02:NTJkY2U5NzdiNDZlNzhkNWQ1ZTgyZTIzN2U1ZjQyYjk5MDY0MTU1ZjE1ZTVmM2Exmzo2Ow==:
00:21:03.818 12:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTVkZWRmNTk5NmJkNjFhMTAwMmQxNWVjNTc1YjI2NWFEY+3Z: --dhchap-ctrl-secret DHHC-1:02:NTJkY2U5NzdiNDZlNzhkNWQ1ZTgyZTIzN2U1ZjQyYjk5MDY0MTU1ZjE1ZTVmM2Exmzo2Ow==:
00:21:04.385 12:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:04.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:04.385 12:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:21:04.385 12:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:04.385 12:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:04.385 12:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:04.385 12:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:04.385 12:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:04.385 12:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:04.385 12:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:21:04.385 12:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:04.385 12:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:04.385 12:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:04.385 12:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:04.385 12:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.385 12:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.385 12:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.385 12:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.385 12:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.385 12:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.385 12:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.385 12:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.953 00:21:04.953 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:04.953 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:04.953 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.211 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.211 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.211 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.211 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.211 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.211 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:05.211 { 00:21:05.211 "cntlid": 93, 00:21:05.211 "qid": 0, 00:21:05.211 "state": "enabled", 00:21:05.211 "thread": "nvmf_tgt_poll_group_000", 00:21:05.211 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:05.211 "listen_address": { 00:21:05.211 "trtype": "TCP", 00:21:05.211 "adrfam": "IPv4", 00:21:05.211 "traddr": "10.0.0.2", 00:21:05.211 "trsvcid": "4420" 00:21:05.211 }, 00:21:05.211 "peer_address": { 00:21:05.211 "trtype": "TCP", 00:21:05.211 "adrfam": "IPv4", 00:21:05.211 "traddr": "10.0.0.1", 00:21:05.211 "trsvcid": "45972" 00:21:05.211 }, 00:21:05.211 "auth": { 00:21:05.211 "state": "completed", 00:21:05.211 "digest": "sha384", 00:21:05.211 "dhgroup": "ffdhe8192" 00:21:05.211 } 00:21:05.211 } 00:21:05.211 ]' 00:21:05.211 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:05.211 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:05.211 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:05.211 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:05.211 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:05.211 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.211 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.211 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.470 12:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjZkNTg1ZGVlZGIzZmM3NDc2N2RmMGE1MTllNDhjNzcwMDFkMjJmZjUyMjU3NDFkNmmLaw==: --dhchap-ctrl-secret DHHC-1:01:OGU5ZmM5Y2I5YTEwYjRjMWMwYWU2NjE5OWRlNGFkMGT1IRUP: 00:21:05.470 12:04:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjZkNTg1ZGVlZGIzZmM3NDc2N2RmMGE1MTllNDhjNzcwMDFkMjJmZjUyMjU3NDFkNmmLaw==: --dhchap-ctrl-secret DHHC-1:01:OGU5ZmM5Y2I5YTEwYjRjMWMwYWU2NjE5OWRlNGFkMGT1IRUP: 00:21:06.039 12:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.039 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.039 12:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:06.039 12:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.039 12:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.039 12:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.039 12:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:06.039 12:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:06.039 12:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:06.297 12:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:21:06.297 12:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:21:06.297 12:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:06.297 12:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:06.297 12:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:06.297 12:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.297 12:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:21:06.297 12:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.297 12:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.297 12:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.297 12:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:06.297 12:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:06.297 12:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:06.872 00:21:06.872 12:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:21:06.872 12:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.872 12:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.872 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.872 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.872 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.872 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.872 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.872 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.872 { 00:21:06.872 "cntlid": 95, 00:21:06.872 "qid": 0, 00:21:06.872 "state": "enabled", 00:21:06.872 "thread": "nvmf_tgt_poll_group_000", 00:21:06.872 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:06.872 "listen_address": { 00:21:06.872 "trtype": "TCP", 00:21:06.872 "adrfam": "IPv4", 00:21:06.872 "traddr": "10.0.0.2", 00:21:06.872 "trsvcid": "4420" 00:21:06.872 }, 00:21:06.872 "peer_address": { 00:21:06.872 "trtype": "TCP", 00:21:06.872 "adrfam": "IPv4", 00:21:06.872 "traddr": "10.0.0.1", 00:21:06.872 "trsvcid": "51506" 00:21:06.872 }, 00:21:06.872 "auth": { 00:21:06.872 "state": "completed", 00:21:06.872 "digest": "sha384", 00:21:06.872 "dhgroup": "ffdhe8192" 00:21:06.872 } 00:21:06.872 } 00:21:06.872 ]' 00:21:06.872 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:06.872 12:04:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:06.872 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:07.131 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:07.131 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:07.131 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.131 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.131 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.390 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTcwNjZlY2QwZGI0NDRjMzZmN2ZiZGEyMTJkMzZhNDExMjcwNmJhZWJlOGJhOTFmZDc1ZjE2YjQ0ZjcxYTk1ZKgNbOo=: 00:21:07.390 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MTcwNjZlY2QwZGI0NDRjMzZmN2ZiZGEyMTJkMzZhNDExMjcwNmJhZWJlOGJhOTFmZDc1ZjE2YjQ0ZjcxYTk1ZKgNbOo=: 00:21:07.958 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.958 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:07.958 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.958 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.958 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.958 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:07.958 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:07.958 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:07.958 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:07.958 12:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:07.958 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:21:07.958 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:07.958 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:07.958 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:07.958 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:07.959 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.959 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.959 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.959 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.959 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.959 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.959 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.959 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.218 00:21:08.218 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:08.218 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:08.218 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.477 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.477 12:04:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.477 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.477 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.477 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.477 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.477 { 00:21:08.477 "cntlid": 97, 00:21:08.477 "qid": 0, 00:21:08.477 "state": "enabled", 00:21:08.477 "thread": "nvmf_tgt_poll_group_000", 00:21:08.477 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:08.477 "listen_address": { 00:21:08.477 "trtype": "TCP", 00:21:08.477 "adrfam": "IPv4", 00:21:08.477 "traddr": "10.0.0.2", 00:21:08.477 "trsvcid": "4420" 00:21:08.477 }, 00:21:08.477 "peer_address": { 00:21:08.477 "trtype": "TCP", 00:21:08.477 "adrfam": "IPv4", 00:21:08.477 "traddr": "10.0.0.1", 00:21:08.477 "trsvcid": "51514" 00:21:08.477 }, 00:21:08.477 "auth": { 00:21:08.477 "state": "completed", 00:21:08.477 "digest": "sha512", 00:21:08.477 "dhgroup": "null" 00:21:08.477 } 00:21:08.477 } 00:21:08.477 ]' 00:21:08.477 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.477 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:08.477 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.477 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:08.477 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.736 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.736 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.736 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.736 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzBhNTQ2ZDZhMWI1MTVjN2FiOWRiZTc3OTk2MGRlM2M4NGM0NzIxMmRiNjJlMjcyIJ7+pQ==: --dhchap-ctrl-secret DHHC-1:03:YThmMWI5M2Q1NTA2ZGNlNmFiMmE4ZWFjMWQwODk0YmYyNzUzMDU4Yjc3Yjk5OGQwZjRjYWI4NWVhYWViYjFlOQ77tv4=: 00:21:08.736 12:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzBhNTQ2ZDZhMWI1MTVjN2FiOWRiZTc3OTk2MGRlM2M4NGM0NzIxMmRiNjJlMjcyIJ7+pQ==: --dhchap-ctrl-secret DHHC-1:03:YThmMWI5M2Q1NTA2ZGNlNmFiMmE4ZWFjMWQwODk0YmYyNzUzMDU4Yjc3Yjk5OGQwZjRjYWI4NWVhYWViYjFlOQ77tv4=: 00:21:09.304 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.305 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.305 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:09.305 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.305 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.305 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.305 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.305 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:09.305 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:09.564 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:09.564 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.564 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:09.564 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:09.564 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:09.564 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.564 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.564 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.564 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.564 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.564 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.564 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.564 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.823 00:21:09.823 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.823 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.823 12:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.082 12:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.082 12:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.082 12:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.082 12:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.082 12:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.082 12:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:10.082 { 00:21:10.082 "cntlid": 99, 
00:21:10.082 "qid": 0, 00:21:10.082 "state": "enabled", 00:21:10.082 "thread": "nvmf_tgt_poll_group_000", 00:21:10.082 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:10.082 "listen_address": { 00:21:10.082 "trtype": "TCP", 00:21:10.082 "adrfam": "IPv4", 00:21:10.082 "traddr": "10.0.0.2", 00:21:10.082 "trsvcid": "4420" 00:21:10.082 }, 00:21:10.082 "peer_address": { 00:21:10.082 "trtype": "TCP", 00:21:10.082 "adrfam": "IPv4", 00:21:10.082 "traddr": "10.0.0.1", 00:21:10.082 "trsvcid": "51546" 00:21:10.082 }, 00:21:10.082 "auth": { 00:21:10.082 "state": "completed", 00:21:10.082 "digest": "sha512", 00:21:10.082 "dhgroup": "null" 00:21:10.082 } 00:21:10.082 } 00:21:10.082 ]' 00:21:10.082 12:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:10.082 12:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:10.082 12:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:10.082 12:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:10.082 12:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.082 12:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.082 12:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.082 12:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.361 12:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTVkZWRmNTk5NmJkNjFhMTAwMmQxNWVjNTc1YjI2NWFEY+3Z: --dhchap-ctrl-secret 
DHHC-1:02:NTJkY2U5NzdiNDZlNzhkNWQ1ZTgyZTIzN2U1ZjQyYjk5MDY0MTU1ZjE1ZTVmM2Exmzo2Ow==: 00:21:10.361 12:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTVkZWRmNTk5NmJkNjFhMTAwMmQxNWVjNTc1YjI2NWFEY+3Z: --dhchap-ctrl-secret DHHC-1:02:NTJkY2U5NzdiNDZlNzhkNWQ1ZTgyZTIzN2U1ZjQyYjk5MDY0MTU1ZjE1ZTVmM2Exmzo2Ow==: 00:21:11.053 12:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.053 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.053 12:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:11.053 12:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.053 12:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.053 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.053 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.053 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:11.053 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:11.053 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:21:11.053 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.053 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:11.053 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:11.053 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:11.053 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.053 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.053 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.053 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.053 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.053 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.053 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.053 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.310 00:21:11.310 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.310 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.311 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.568 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.568 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.568 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.568 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.568 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.568 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.568 { 00:21:11.568 "cntlid": 101, 00:21:11.568 "qid": 0, 00:21:11.568 "state": "enabled", 00:21:11.568 "thread": "nvmf_tgt_poll_group_000", 00:21:11.568 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:11.568 "listen_address": { 00:21:11.568 "trtype": "TCP", 00:21:11.568 "adrfam": "IPv4", 00:21:11.568 "traddr": "10.0.0.2", 00:21:11.568 "trsvcid": "4420" 00:21:11.568 }, 00:21:11.568 "peer_address": { 00:21:11.568 "trtype": "TCP", 00:21:11.568 "adrfam": "IPv4", 00:21:11.568 "traddr": "10.0.0.1", 00:21:11.568 "trsvcid": "51572" 00:21:11.568 }, 00:21:11.568 "auth": { 00:21:11.568 "state": "completed", 00:21:11.568 "digest": "sha512", 00:21:11.568 "dhgroup": "null" 00:21:11.568 } 00:21:11.568 } 
00:21:11.568 ]' 00:21:11.568 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.568 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:11.568 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.568 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:11.568 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.827 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.827 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.827 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.827 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjZkNTg1ZGVlZGIzZmM3NDc2N2RmMGE1MTllNDhjNzcwMDFkMjJmZjUyMjU3NDFkNmmLaw==: --dhchap-ctrl-secret DHHC-1:01:OGU5ZmM5Y2I5YTEwYjRjMWMwYWU2NjE5OWRlNGFkMGT1IRUP: 00:21:11.827 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjZkNTg1ZGVlZGIzZmM3NDc2N2RmMGE1MTllNDhjNzcwMDFkMjJmZjUyMjU3NDFkNmmLaw==: --dhchap-ctrl-secret DHHC-1:01:OGU5ZmM5Y2I5YTEwYjRjMWMwYWU2NjE5OWRlNGFkMGT1IRUP: 00:21:12.394 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.394 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.394 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:12.394 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.394 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.394 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.394 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.394 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:12.394 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:12.653 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:12.653 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:12.653 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:12.653 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:12.653 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:12.653 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.653 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:21:12.653 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.653 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.653 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.653 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:12.653 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:12.653 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:12.911 00:21:12.911 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:12.911 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:12.911 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.169 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.169 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:13.169 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.169 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.169 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.169 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.169 { 00:21:13.169 "cntlid": 103, 00:21:13.169 "qid": 0, 00:21:13.169 "state": "enabled", 00:21:13.169 "thread": "nvmf_tgt_poll_group_000", 00:21:13.169 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:13.169 "listen_address": { 00:21:13.169 "trtype": "TCP", 00:21:13.169 "adrfam": "IPv4", 00:21:13.169 "traddr": "10.0.0.2", 00:21:13.169 "trsvcid": "4420" 00:21:13.169 }, 00:21:13.169 "peer_address": { 00:21:13.169 "trtype": "TCP", 00:21:13.169 "adrfam": "IPv4", 00:21:13.169 "traddr": "10.0.0.1", 00:21:13.169 "trsvcid": "51588" 00:21:13.169 }, 00:21:13.169 "auth": { 00:21:13.169 "state": "completed", 00:21:13.169 "digest": "sha512", 00:21:13.169 "dhgroup": "null" 00:21:13.169 } 00:21:13.169 } 00:21:13.169 ]' 00:21:13.169 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.169 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:13.169 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.169 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:13.169 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.169 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.169 12:04:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.169 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.427 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTcwNjZlY2QwZGI0NDRjMzZmN2ZiZGEyMTJkMzZhNDExMjcwNmJhZWJlOGJhOTFmZDc1ZjE2YjQ0ZjcxYTk1ZKgNbOo=: 00:21:13.427 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MTcwNjZlY2QwZGI0NDRjMzZmN2ZiZGEyMTJkMzZhNDExMjcwNmJhZWJlOGJhOTFmZDc1ZjE2YjQ0ZjcxYTk1ZKgNbOo=: 00:21:13.991 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.991 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:13.991 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.991 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.991 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.991 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:13.991 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:13.991 12:04:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:13.991 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:14.250 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:14.250 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.250 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:14.250 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:14.250 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:14.250 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.250 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.250 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.250 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.250 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.250 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.250 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.250 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.508 00:21:14.508 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.508 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.508 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.767 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.767 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.767 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.767 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.767 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.767 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:14.767 { 00:21:14.767 "cntlid": 105, 00:21:14.767 "qid": 0, 00:21:14.767 "state": "enabled", 00:21:14.767 "thread": "nvmf_tgt_poll_group_000", 00:21:14.767 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:14.767 "listen_address": { 00:21:14.767 "trtype": "TCP", 00:21:14.767 "adrfam": "IPv4", 00:21:14.767 "traddr": "10.0.0.2", 00:21:14.767 "trsvcid": "4420" 00:21:14.767 }, 00:21:14.767 "peer_address": { 00:21:14.767 "trtype": "TCP", 00:21:14.767 "adrfam": "IPv4", 00:21:14.767 "traddr": "10.0.0.1", 00:21:14.767 "trsvcid": "51610" 00:21:14.767 }, 00:21:14.767 "auth": { 00:21:14.767 "state": "completed", 00:21:14.767 "digest": "sha512", 00:21:14.767 "dhgroup": "ffdhe2048" 00:21:14.767 } 00:21:14.767 } 00:21:14.767 ]' 00:21:14.767 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:14.767 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:14.767 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:14.767 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:14.767 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:14.767 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.767 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.767 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.025 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzBhNTQ2ZDZhMWI1MTVjN2FiOWRiZTc3OTk2MGRlM2M4NGM0NzIxMmRiNjJlMjcyIJ7+pQ==: --dhchap-ctrl-secret 
DHHC-1:03:YThmMWI5M2Q1NTA2ZGNlNmFiMmE4ZWFjMWQwODk0YmYyNzUzMDU4Yjc3Yjk5OGQwZjRjYWI4NWVhYWViYjFlOQ77tv4=: 00:21:15.025 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzBhNTQ2ZDZhMWI1MTVjN2FiOWRiZTc3OTk2MGRlM2M4NGM0NzIxMmRiNjJlMjcyIJ7+pQ==: --dhchap-ctrl-secret DHHC-1:03:YThmMWI5M2Q1NTA2ZGNlNmFiMmE4ZWFjMWQwODk0YmYyNzUzMDU4Yjc3Yjk5OGQwZjRjYWI4NWVhYWViYjFlOQ77tv4=: 00:21:15.593 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.593 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:15.593 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.593 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.593 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.593 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:15.593 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:15.593 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:15.853 12:04:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:15.853 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:15.853 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:15.853 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:15.853 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:15.853 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.853 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.853 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.853 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.853 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.853 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.853 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.853 12:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.112 00:21:16.112 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.112 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.112 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.371 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.371 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.371 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.371 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.371 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.371 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.371 { 00:21:16.371 "cntlid": 107, 00:21:16.371 "qid": 0, 00:21:16.371 "state": "enabled", 00:21:16.371 "thread": "nvmf_tgt_poll_group_000", 00:21:16.371 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:16.371 "listen_address": { 00:21:16.371 "trtype": "TCP", 00:21:16.371 "adrfam": "IPv4", 00:21:16.371 "traddr": "10.0.0.2", 00:21:16.371 "trsvcid": "4420" 00:21:16.371 }, 00:21:16.371 "peer_address": { 00:21:16.371 "trtype": "TCP", 00:21:16.371 "adrfam": "IPv4", 00:21:16.371 "traddr": "10.0.0.1", 00:21:16.371 "trsvcid": "51646" 00:21:16.371 }, 00:21:16.371 "auth": { 00:21:16.371 "state": 
"completed", 00:21:16.371 "digest": "sha512", 00:21:16.371 "dhgroup": "ffdhe2048" 00:21:16.371 } 00:21:16.371 } 00:21:16.371 ]' 00:21:16.371 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:16.371 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:16.371 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:16.371 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:16.371 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:16.371 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.371 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.371 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.630 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTVkZWRmNTk5NmJkNjFhMTAwMmQxNWVjNTc1YjI2NWFEY+3Z: --dhchap-ctrl-secret DHHC-1:02:NTJkY2U5NzdiNDZlNzhkNWQ1ZTgyZTIzN2U1ZjQyYjk5MDY0MTU1ZjE1ZTVmM2Exmzo2Ow==: 00:21:16.630 12:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTVkZWRmNTk5NmJkNjFhMTAwMmQxNWVjNTc1YjI2NWFEY+3Z: --dhchap-ctrl-secret DHHC-1:02:NTJkY2U5NzdiNDZlNzhkNWQ1ZTgyZTIzN2U1ZjQyYjk5MDY0MTU1ZjE1ZTVmM2Exmzo2Ow==: 00:21:17.198 12:04:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.198 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.198 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:17.198 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.198 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.198 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.198 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.198 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:17.198 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:17.457 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:17.457 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.457 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:17.457 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:17.457 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:17.457 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.457 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.457 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.457 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.457 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.457 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.457 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.457 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.716 00:21:17.716 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:17.716 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:17.716 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.716 
12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.716 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.716 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.716 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.975 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.975 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:17.975 { 00:21:17.975 "cntlid": 109, 00:21:17.975 "qid": 0, 00:21:17.975 "state": "enabled", 00:21:17.975 "thread": "nvmf_tgt_poll_group_000", 00:21:17.975 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:17.975 "listen_address": { 00:21:17.975 "trtype": "TCP", 00:21:17.975 "adrfam": "IPv4", 00:21:17.975 "traddr": "10.0.0.2", 00:21:17.975 "trsvcid": "4420" 00:21:17.975 }, 00:21:17.975 "peer_address": { 00:21:17.975 "trtype": "TCP", 00:21:17.975 "adrfam": "IPv4", 00:21:17.975 "traddr": "10.0.0.1", 00:21:17.975 "trsvcid": "45368" 00:21:17.975 }, 00:21:17.975 "auth": { 00:21:17.975 "state": "completed", 00:21:17.975 "digest": "sha512", 00:21:17.975 "dhgroup": "ffdhe2048" 00:21:17.975 } 00:21:17.975 } 00:21:17.975 ]' 00:21:17.975 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:17.975 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:17.975 12:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:17.975 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:17.975 12:04:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:17.975 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.975 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.975 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.233 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjZkNTg1ZGVlZGIzZmM3NDc2N2RmMGE1MTllNDhjNzcwMDFkMjJmZjUyMjU3NDFkNmmLaw==: --dhchap-ctrl-secret DHHC-1:01:OGU5ZmM5Y2I5YTEwYjRjMWMwYWU2NjE5OWRlNGFkMGT1IRUP: 00:21:18.233 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjZkNTg1ZGVlZGIzZmM3NDc2N2RmMGE1MTllNDhjNzcwMDFkMjJmZjUyMjU3NDFkNmmLaw==: --dhchap-ctrl-secret DHHC-1:01:OGU5ZmM5Y2I5YTEwYjRjMWMwYWU2NjE5OWRlNGFkMGT1IRUP: 00:21:18.801 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.801 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:18.801 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.801 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.801 
12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.801 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:18.801 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:18.801 12:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:19.060 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:19.060 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:19.060 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:19.060 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:19.060 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:19.060 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.060 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:21:19.060 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.060 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.060 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.060 12:04:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:19.060 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:19.060 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:19.060 00:21:19.319 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.319 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.319 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.319 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.319 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.319 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.319 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.319 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.319 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.319 { 00:21:19.319 "cntlid": 111, 
00:21:19.319 "qid": 0, 00:21:19.319 "state": "enabled", 00:21:19.319 "thread": "nvmf_tgt_poll_group_000", 00:21:19.319 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:19.319 "listen_address": { 00:21:19.319 "trtype": "TCP", 00:21:19.319 "adrfam": "IPv4", 00:21:19.319 "traddr": "10.0.0.2", 00:21:19.319 "trsvcid": "4420" 00:21:19.319 }, 00:21:19.319 "peer_address": { 00:21:19.319 "trtype": "TCP", 00:21:19.319 "adrfam": "IPv4", 00:21:19.319 "traddr": "10.0.0.1", 00:21:19.319 "trsvcid": "45390" 00:21:19.319 }, 00:21:19.319 "auth": { 00:21:19.319 "state": "completed", 00:21:19.319 "digest": "sha512", 00:21:19.319 "dhgroup": "ffdhe2048" 00:21:19.319 } 00:21:19.319 } 00:21:19.319 ]' 00:21:19.319 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.578 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:19.578 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:19.578 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:19.578 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.578 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.578 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.578 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.837 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MTcwNjZlY2QwZGI0NDRjMzZmN2ZiZGEyMTJkMzZhNDExMjcwNmJhZWJlOGJhOTFmZDc1ZjE2YjQ0ZjcxYTk1ZKgNbOo=: 00:21:19.837 12:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MTcwNjZlY2QwZGI0NDRjMzZmN2ZiZGEyMTJkMzZhNDExMjcwNmJhZWJlOGJhOTFmZDc1ZjE2YjQ0ZjcxYTk1ZKgNbOo=: 00:21:20.405 12:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.405 12:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:20.405 12:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.405 12:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.405 12:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.405 12:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:20.405 12:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:20.405 12:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:20.405 12:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:20.405 12:04:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:20.405 12:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:20.405 12:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:20.405 12:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:20.405 12:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:20.405 12:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.405 12:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.405 12:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.405 12:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.405 12:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.405 12:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.405 12:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.405 12:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.665 00:21:20.665 12:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:20.665 12:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:20.665 12:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.924 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.924 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.924 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.924 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.924 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.924 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:20.924 { 00:21:20.924 "cntlid": 113, 00:21:20.924 "qid": 0, 00:21:20.924 "state": "enabled", 00:21:20.924 "thread": "nvmf_tgt_poll_group_000", 00:21:20.924 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:20.924 "listen_address": { 00:21:20.924 "trtype": "TCP", 00:21:20.924 "adrfam": "IPv4", 00:21:20.924 "traddr": "10.0.0.2", 00:21:20.924 "trsvcid": "4420" 00:21:20.924 }, 00:21:20.924 "peer_address": { 00:21:20.924 "trtype": "TCP", 00:21:20.924 "adrfam": "IPv4", 00:21:20.924 "traddr": "10.0.0.1", 00:21:20.924 "trsvcid": "45420" 00:21:20.924 }, 00:21:20.924 "auth": { 00:21:20.924 "state": 
"completed", 00:21:20.924 "digest": "sha512", 00:21:20.924 "dhgroup": "ffdhe3072" 00:21:20.924 } 00:21:20.924 } 00:21:20.924 ]' 00:21:20.924 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:20.924 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:20.924 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.182 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:21.182 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.182 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.182 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.182 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.440 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzBhNTQ2ZDZhMWI1MTVjN2FiOWRiZTc3OTk2MGRlM2M4NGM0NzIxMmRiNjJlMjcyIJ7+pQ==: --dhchap-ctrl-secret DHHC-1:03:YThmMWI5M2Q1NTA2ZGNlNmFiMmE4ZWFjMWQwODk0YmYyNzUzMDU4Yjc3Yjk5OGQwZjRjYWI4NWVhYWViYjFlOQ77tv4=: 00:21:21.440 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzBhNTQ2ZDZhMWI1MTVjN2FiOWRiZTc3OTk2MGRlM2M4NGM0NzIxMmRiNjJlMjcyIJ7+pQ==: --dhchap-ctrl-secret 
DHHC-1:03:YThmMWI5M2Q1NTA2ZGNlNmFiMmE4ZWFjMWQwODk0YmYyNzUzMDU4Yjc3Yjk5OGQwZjRjYWI4NWVhYWViYjFlOQ77tv4=: 00:21:22.008 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.008 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:22.008 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.008 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.008 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.008 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:22.008 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:22.008 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:22.008 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:22.008 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:22.008 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:22.008 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:22.008 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:21:22.008 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.008 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.008 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.008 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.008 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.008 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.008 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.008 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.266 00:21:22.266 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:22.266 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:22.266 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.525 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.525 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.525 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.525 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.525 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.525 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:22.525 { 00:21:22.525 "cntlid": 115, 00:21:22.525 "qid": 0, 00:21:22.525 "state": "enabled", 00:21:22.525 "thread": "nvmf_tgt_poll_group_000", 00:21:22.525 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:22.525 "listen_address": { 00:21:22.525 "trtype": "TCP", 00:21:22.525 "adrfam": "IPv4", 00:21:22.525 "traddr": "10.0.0.2", 00:21:22.525 "trsvcid": "4420" 00:21:22.525 }, 00:21:22.525 "peer_address": { 00:21:22.525 "trtype": "TCP", 00:21:22.525 "adrfam": "IPv4", 00:21:22.525 "traddr": "10.0.0.1", 00:21:22.525 "trsvcid": "45454" 00:21:22.525 }, 00:21:22.525 "auth": { 00:21:22.525 "state": "completed", 00:21:22.525 "digest": "sha512", 00:21:22.525 "dhgroup": "ffdhe3072" 00:21:22.525 } 00:21:22.525 } 00:21:22.525 ]' 00:21:22.525 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:22.525 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:22.525 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:22.526 12:04:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:22.526 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:22.784 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.784 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.784 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.785 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTVkZWRmNTk5NmJkNjFhMTAwMmQxNWVjNTc1YjI2NWFEY+3Z: --dhchap-ctrl-secret DHHC-1:02:NTJkY2U5NzdiNDZlNzhkNWQ1ZTgyZTIzN2U1ZjQyYjk5MDY0MTU1ZjE1ZTVmM2Exmzo2Ow==: 00:21:22.785 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTVkZWRmNTk5NmJkNjFhMTAwMmQxNWVjNTc1YjI2NWFEY+3Z: --dhchap-ctrl-secret DHHC-1:02:NTJkY2U5NzdiNDZlNzhkNWQ1ZTgyZTIzN2U1ZjQyYjk5MDY0MTU1ZjE1ZTVmM2Exmzo2Ow==: 00:21:23.352 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.353 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:23.353 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:23.353 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.353 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.353 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.353 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:23.353 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:23.611 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:21:23.611 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:23.611 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:23.611 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:23.611 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:23.611 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.611 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.611 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.611 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:23.611 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.611 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.611 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.611 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.868 00:21:23.868 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:23.868 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:23.868 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.126 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.126 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.126 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.126 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.126 12:04:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.126 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.126 { 00:21:24.126 "cntlid": 117, 00:21:24.126 "qid": 0, 00:21:24.126 "state": "enabled", 00:21:24.126 "thread": "nvmf_tgt_poll_group_000", 00:21:24.126 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:24.126 "listen_address": { 00:21:24.126 "trtype": "TCP", 00:21:24.126 "adrfam": "IPv4", 00:21:24.126 "traddr": "10.0.0.2", 00:21:24.126 "trsvcid": "4420" 00:21:24.126 }, 00:21:24.126 "peer_address": { 00:21:24.126 "trtype": "TCP", 00:21:24.126 "adrfam": "IPv4", 00:21:24.126 "traddr": "10.0.0.1", 00:21:24.126 "trsvcid": "45480" 00:21:24.126 }, 00:21:24.126 "auth": { 00:21:24.126 "state": "completed", 00:21:24.126 "digest": "sha512", 00:21:24.126 "dhgroup": "ffdhe3072" 00:21:24.126 } 00:21:24.126 } 00:21:24.126 ]' 00:21:24.126 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.126 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:24.126 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.126 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:24.126 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.126 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.126 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.126 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.384 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjZkNTg1ZGVlZGIzZmM3NDc2N2RmMGE1MTllNDhjNzcwMDFkMjJmZjUyMjU3NDFkNmmLaw==: --dhchap-ctrl-secret DHHC-1:01:OGU5ZmM5Y2I5YTEwYjRjMWMwYWU2NjE5OWRlNGFkMGT1IRUP: 00:21:24.384 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjZkNTg1ZGVlZGIzZmM3NDc2N2RmMGE1MTllNDhjNzcwMDFkMjJmZjUyMjU3NDFkNmmLaw==: --dhchap-ctrl-secret DHHC-1:01:OGU5ZmM5Y2I5YTEwYjRjMWMwYWU2NjE5OWRlNGFkMGT1IRUP: 00:21:24.951 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.951 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:24.951 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.951 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.951 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.951 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:24.951 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:24.951 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:25.210 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:25.210 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:25.210 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:25.210 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:25.210 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:25.210 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.210 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:21:25.210 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.210 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.210 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.210 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:25.210 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:25.210 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:25.468 00:21:25.468 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:25.468 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:25.468 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.725 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.725 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.725 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.725 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.726 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.726 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:25.726 { 00:21:25.726 "cntlid": 119, 00:21:25.726 "qid": 0, 00:21:25.726 "state": "enabled", 00:21:25.726 "thread": "nvmf_tgt_poll_group_000", 00:21:25.726 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:25.726 "listen_address": { 00:21:25.726 "trtype": "TCP", 00:21:25.726 "adrfam": "IPv4", 00:21:25.726 "traddr": "10.0.0.2", 00:21:25.726 "trsvcid": "4420" 00:21:25.726 }, 00:21:25.726 "peer_address": { 00:21:25.726 "trtype": "TCP", 00:21:25.726 "adrfam": "IPv4", 00:21:25.726 "traddr": "10.0.0.1", 
00:21:25.726 "trsvcid": "45502" 00:21:25.726 }, 00:21:25.726 "auth": { 00:21:25.726 "state": "completed", 00:21:25.726 "digest": "sha512", 00:21:25.726 "dhgroup": "ffdhe3072" 00:21:25.726 } 00:21:25.726 } 00:21:25.726 ]' 00:21:25.726 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:25.726 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:25.726 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:25.726 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:25.726 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:25.726 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.726 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.726 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.984 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTcwNjZlY2QwZGI0NDRjMzZmN2ZiZGEyMTJkMzZhNDExMjcwNmJhZWJlOGJhOTFmZDc1ZjE2YjQ0ZjcxYTk1ZKgNbOo=: 00:21:25.984 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MTcwNjZlY2QwZGI0NDRjMzZmN2ZiZGEyMTJkMzZhNDExMjcwNmJhZWJlOGJhOTFmZDc1ZjE2YjQ0ZjcxYTk1ZKgNbOo=: 00:21:26.550 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.550 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:26.550 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.550 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.550 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.550 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:26.550 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:26.550 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:26.550 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:26.809 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:26.809 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.809 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:26.809 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:26.809 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:26.809 12:05:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.809 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.809 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.809 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.809 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.809 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.809 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.809 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.067 00:21:27.067 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:27.067 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:27.067 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.324 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.324 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.324 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.324 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.324 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.324 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.324 { 00:21:27.324 "cntlid": 121, 00:21:27.324 "qid": 0, 00:21:27.324 "state": "enabled", 00:21:27.324 "thread": "nvmf_tgt_poll_group_000", 00:21:27.324 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:27.324 "listen_address": { 00:21:27.324 "trtype": "TCP", 00:21:27.324 "adrfam": "IPv4", 00:21:27.324 "traddr": "10.0.0.2", 00:21:27.324 "trsvcid": "4420" 00:21:27.324 }, 00:21:27.324 "peer_address": { 00:21:27.324 "trtype": "TCP", 00:21:27.324 "adrfam": "IPv4", 00:21:27.324 "traddr": "10.0.0.1", 00:21:27.324 "trsvcid": "37464" 00:21:27.324 }, 00:21:27.324 "auth": { 00:21:27.324 "state": "completed", 00:21:27.324 "digest": "sha512", 00:21:27.324 "dhgroup": "ffdhe4096" 00:21:27.324 } 00:21:27.324 } 00:21:27.324 ]' 00:21:27.324 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.324 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:27.324 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.324 12:05:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:27.324 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.324 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.324 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.324 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.583 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzBhNTQ2ZDZhMWI1MTVjN2FiOWRiZTc3OTk2MGRlM2M4NGM0NzIxMmRiNjJlMjcyIJ7+pQ==: --dhchap-ctrl-secret DHHC-1:03:YThmMWI5M2Q1NTA2ZGNlNmFiMmE4ZWFjMWQwODk0YmYyNzUzMDU4Yjc3Yjk5OGQwZjRjYWI4NWVhYWViYjFlOQ77tv4=: 00:21:27.583 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzBhNTQ2ZDZhMWI1MTVjN2FiOWRiZTc3OTk2MGRlM2M4NGM0NzIxMmRiNjJlMjcyIJ7+pQ==: --dhchap-ctrl-secret DHHC-1:03:YThmMWI5M2Q1NTA2ZGNlNmFiMmE4ZWFjMWQwODk0YmYyNzUzMDU4Yjc3Yjk5OGQwZjRjYWI4NWVhYWViYjFlOQ77tv4=: 00:21:28.149 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.149 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:28.149 12:05:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.149 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.149 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.149 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:28.149 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:28.149 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:28.407 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:28.407 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:28.407 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:28.407 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:28.407 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:28.407 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.407 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.407 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.407 12:05:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.407 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.407 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.407 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.407 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.665 00:21:28.665 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:28.665 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:28.665 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.924 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.924 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.924 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.924 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:28.924 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.924 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:28.924 { 00:21:28.925 "cntlid": 123, 00:21:28.925 "qid": 0, 00:21:28.925 "state": "enabled", 00:21:28.925 "thread": "nvmf_tgt_poll_group_000", 00:21:28.925 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:28.925 "listen_address": { 00:21:28.925 "trtype": "TCP", 00:21:28.925 "adrfam": "IPv4", 00:21:28.925 "traddr": "10.0.0.2", 00:21:28.925 "trsvcid": "4420" 00:21:28.925 }, 00:21:28.925 "peer_address": { 00:21:28.925 "trtype": "TCP", 00:21:28.925 "adrfam": "IPv4", 00:21:28.925 "traddr": "10.0.0.1", 00:21:28.925 "trsvcid": "37494" 00:21:28.925 }, 00:21:28.925 "auth": { 00:21:28.925 "state": "completed", 00:21:28.925 "digest": "sha512", 00:21:28.925 "dhgroup": "ffdhe4096" 00:21:28.925 } 00:21:28.925 } 00:21:28.925 ]' 00:21:28.925 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:28.925 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:28.925 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:28.925 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:28.925 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:28.925 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.925 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.925 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.184 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTVkZWRmNTk5NmJkNjFhMTAwMmQxNWVjNTc1YjI2NWFEY+3Z: --dhchap-ctrl-secret DHHC-1:02:NTJkY2U5NzdiNDZlNzhkNWQ1ZTgyZTIzN2U1ZjQyYjk5MDY0MTU1ZjE1ZTVmM2Exmzo2Ow==: 00:21:29.184 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTVkZWRmNTk5NmJkNjFhMTAwMmQxNWVjNTc1YjI2NWFEY+3Z: --dhchap-ctrl-secret DHHC-1:02:NTJkY2U5NzdiNDZlNzhkNWQ1ZTgyZTIzN2U1ZjQyYjk5MDY0MTU1ZjE1ZTVmM2Exmzo2Ow==: 00:21:29.751 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.751 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:29.751 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.751 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.751 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.751 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:29.751 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:29.751 12:05:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:30.010 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:30.010 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:30.010 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:30.010 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:30.010 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:30.010 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.010 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:30.010 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.010 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.010 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.010 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:30.010 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:30.010 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:30.270 00:21:30.270 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:30.270 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:30.270 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.529 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.529 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.529 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.529 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.529 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.529 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:30.529 { 00:21:30.529 "cntlid": 125, 00:21:30.529 "qid": 0, 00:21:30.529 "state": "enabled", 00:21:30.529 "thread": "nvmf_tgt_poll_group_000", 00:21:30.529 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:30.529 "listen_address": { 00:21:30.529 "trtype": "TCP", 00:21:30.529 "adrfam": "IPv4", 00:21:30.529 "traddr": "10.0.0.2", 00:21:30.529 
"trsvcid": "4420" 00:21:30.529 }, 00:21:30.529 "peer_address": { 00:21:30.529 "trtype": "TCP", 00:21:30.529 "adrfam": "IPv4", 00:21:30.529 "traddr": "10.0.0.1", 00:21:30.529 "trsvcid": "37516" 00:21:30.529 }, 00:21:30.529 "auth": { 00:21:30.529 "state": "completed", 00:21:30.529 "digest": "sha512", 00:21:30.529 "dhgroup": "ffdhe4096" 00:21:30.529 } 00:21:30.529 } 00:21:30.529 ]' 00:21:30.529 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:30.529 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:30.529 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:30.529 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:30.529 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:30.529 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.529 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.530 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.788 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjZkNTg1ZGVlZGIzZmM3NDc2N2RmMGE1MTllNDhjNzcwMDFkMjJmZjUyMjU3NDFkNmmLaw==: --dhchap-ctrl-secret DHHC-1:01:OGU5ZmM5Y2I5YTEwYjRjMWMwYWU2NjE5OWRlNGFkMGT1IRUP: 00:21:30.788 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjZkNTg1ZGVlZGIzZmM3NDc2N2RmMGE1MTllNDhjNzcwMDFkMjJmZjUyMjU3NDFkNmmLaw==: --dhchap-ctrl-secret DHHC-1:01:OGU5ZmM5Y2I5YTEwYjRjMWMwYWU2NjE5OWRlNGFkMGT1IRUP: 00:21:31.358 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.358 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:31.358 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.358 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.358 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.358 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:31.358 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:31.358 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:31.617 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:31.617 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:31.617 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:31.617 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:31.617 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:31.617 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.617 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:21:31.617 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.617 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.617 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.617 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:31.617 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:31.618 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:31.876 00:21:31.876 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.876 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.876 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.135 12:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.135 12:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.135 12:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.135 12:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.135 12:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.135 12:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:32.135 { 00:21:32.135 "cntlid": 127, 00:21:32.135 "qid": 0, 00:21:32.135 "state": "enabled", 00:21:32.135 "thread": "nvmf_tgt_poll_group_000", 00:21:32.135 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:32.135 "listen_address": { 00:21:32.135 "trtype": "TCP", 00:21:32.135 "adrfam": "IPv4", 00:21:32.135 "traddr": "10.0.0.2", 00:21:32.135 "trsvcid": "4420" 00:21:32.135 }, 00:21:32.135 "peer_address": { 00:21:32.135 "trtype": "TCP", 00:21:32.135 "adrfam": "IPv4", 00:21:32.135 "traddr": "10.0.0.1", 00:21:32.135 "trsvcid": "37528" 00:21:32.135 }, 00:21:32.135 "auth": { 00:21:32.135 "state": "completed", 00:21:32.135 "digest": "sha512", 00:21:32.135 "dhgroup": "ffdhe4096" 00:21:32.135 } 00:21:32.135 } 00:21:32.135 ]' 00:21:32.135 12:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:32.135 12:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:32.135 12:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:32.135 12:05:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:32.135 12:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:32.135 12:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.135 12:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.135 12:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.394 12:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTcwNjZlY2QwZGI0NDRjMzZmN2ZiZGEyMTJkMzZhNDExMjcwNmJhZWJlOGJhOTFmZDc1ZjE2YjQ0ZjcxYTk1ZKgNbOo=: 00:21:32.394 12:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MTcwNjZlY2QwZGI0NDRjMzZmN2ZiZGEyMTJkMzZhNDExMjcwNmJhZWJlOGJhOTFmZDc1ZjE2YjQ0ZjcxYTk1ZKgNbOo=: 00:21:32.972 12:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.972 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:32.972 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.972 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:32.972 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.972 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:32.972 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:32.972 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:32.972 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:33.232 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:33.232 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:33.232 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:33.232 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:33.232 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:33.232 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.232 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.232 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.232 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:33.232 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.232 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.232 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.232 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.492 00:21:33.492 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:33.492 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:33.492 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.751 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.751 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.751 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.751 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.751 12:05:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.751 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.751 { 00:21:33.751 "cntlid": 129, 00:21:33.751 "qid": 0, 00:21:33.751 "state": "enabled", 00:21:33.751 "thread": "nvmf_tgt_poll_group_000", 00:21:33.751 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:33.751 "listen_address": { 00:21:33.751 "trtype": "TCP", 00:21:33.751 "adrfam": "IPv4", 00:21:33.751 "traddr": "10.0.0.2", 00:21:33.751 "trsvcid": "4420" 00:21:33.751 }, 00:21:33.751 "peer_address": { 00:21:33.751 "trtype": "TCP", 00:21:33.751 "adrfam": "IPv4", 00:21:33.751 "traddr": "10.0.0.1", 00:21:33.751 "trsvcid": "37562" 00:21:33.751 }, 00:21:33.751 "auth": { 00:21:33.751 "state": "completed", 00:21:33.751 "digest": "sha512", 00:21:33.751 "dhgroup": "ffdhe6144" 00:21:33.751 } 00:21:33.751 } 00:21:33.751 ]' 00:21:33.751 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.751 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:33.751 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.751 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:33.751 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.751 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.751 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.751 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.010 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzBhNTQ2ZDZhMWI1MTVjN2FiOWRiZTc3OTk2MGRlM2M4NGM0NzIxMmRiNjJlMjcyIJ7+pQ==: --dhchap-ctrl-secret DHHC-1:03:YThmMWI5M2Q1NTA2ZGNlNmFiMmE4ZWFjMWQwODk0YmYyNzUzMDU4Yjc3Yjk5OGQwZjRjYWI4NWVhYWViYjFlOQ77tv4=: 00:21:34.010 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzBhNTQ2ZDZhMWI1MTVjN2FiOWRiZTc3OTk2MGRlM2M4NGM0NzIxMmRiNjJlMjcyIJ7+pQ==: --dhchap-ctrl-secret DHHC-1:03:YThmMWI5M2Q1NTA2ZGNlNmFiMmE4ZWFjMWQwODk0YmYyNzUzMDU4Yjc3Yjk5OGQwZjRjYWI4NWVhYWViYjFlOQ77tv4=: 00:21:34.577 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.577 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.577 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:34.577 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.577 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.577 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.577 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:34.577 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:34.577 12:05:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:34.836 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:34.836 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:34.836 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:34.836 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:34.836 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:34.836 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.836 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.836 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.836 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.836 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.836 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.836 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.836 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.095 00:21:35.095 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:35.095 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:35.095 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.354 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.354 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.354 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.354 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.354 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.354 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:35.354 { 00:21:35.354 "cntlid": 131, 00:21:35.354 "qid": 0, 00:21:35.354 "state": "enabled", 00:21:35.354 "thread": "nvmf_tgt_poll_group_000", 00:21:35.354 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:35.354 "listen_address": { 00:21:35.354 "trtype": "TCP", 00:21:35.354 "adrfam": "IPv4", 00:21:35.354 "traddr": "10.0.0.2", 00:21:35.354 
"trsvcid": "4420" 00:21:35.354 }, 00:21:35.354 "peer_address": { 00:21:35.354 "trtype": "TCP", 00:21:35.354 "adrfam": "IPv4", 00:21:35.354 "traddr": "10.0.0.1", 00:21:35.354 "trsvcid": "37586" 00:21:35.354 }, 00:21:35.354 "auth": { 00:21:35.354 "state": "completed", 00:21:35.354 "digest": "sha512", 00:21:35.354 "dhgroup": "ffdhe6144" 00:21:35.354 } 00:21:35.354 } 00:21:35.354 ]' 00:21:35.354 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:35.354 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:35.354 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.354 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:35.354 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.613 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.613 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.613 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.613 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTVkZWRmNTk5NmJkNjFhMTAwMmQxNWVjNTc1YjI2NWFEY+3Z: --dhchap-ctrl-secret DHHC-1:02:NTJkY2U5NzdiNDZlNzhkNWQ1ZTgyZTIzN2U1ZjQyYjk5MDY0MTU1ZjE1ZTVmM2Exmzo2Ow==: 00:21:35.613 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTVkZWRmNTk5NmJkNjFhMTAwMmQxNWVjNTc1YjI2NWFEY+3Z: --dhchap-ctrl-secret DHHC-1:02:NTJkY2U5NzdiNDZlNzhkNWQ1ZTgyZTIzN2U1ZjQyYjk5MDY0MTU1ZjE1ZTVmM2Exmzo2Ow==: 00:21:36.182 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.442 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:36.442 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.442 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.442 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.442 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:36.442 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:36.442 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:36.442 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:36.442 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:36.442 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:36.442 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:36.442 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:36.442 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.442 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:36.442 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.442 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.442 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.442 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:36.442 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:36.442 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.010 00:21:37.010 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:37.010 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.010 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:37.010 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.010 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.010 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.010 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.010 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.010 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:37.010 { 00:21:37.010 "cntlid": 133, 00:21:37.010 "qid": 0, 00:21:37.010 "state": "enabled", 00:21:37.010 "thread": "nvmf_tgt_poll_group_000", 00:21:37.010 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:37.010 "listen_address": { 00:21:37.010 "trtype": "TCP", 00:21:37.010 "adrfam": "IPv4", 00:21:37.010 "traddr": "10.0.0.2", 00:21:37.010 "trsvcid": "4420" 00:21:37.010 }, 00:21:37.010 "peer_address": { 00:21:37.010 "trtype": "TCP", 00:21:37.010 "adrfam": "IPv4", 00:21:37.010 "traddr": "10.0.0.1", 00:21:37.010 "trsvcid": "39370" 00:21:37.010 }, 00:21:37.010 "auth": { 00:21:37.010 "state": "completed", 00:21:37.010 "digest": "sha512", 00:21:37.010 "dhgroup": "ffdhe6144" 00:21:37.010 } 00:21:37.010 } 00:21:37.010 ]' 00:21:37.010 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:37.010 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:37.010 12:05:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:37.269 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:37.269 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:37.269 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.269 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.269 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.528 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjZkNTg1ZGVlZGIzZmM3NDc2N2RmMGE1MTllNDhjNzcwMDFkMjJmZjUyMjU3NDFkNmmLaw==: --dhchap-ctrl-secret DHHC-1:01:OGU5ZmM5Y2I5YTEwYjRjMWMwYWU2NjE5OWRlNGFkMGT1IRUP: 00:21:37.528 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjZkNTg1ZGVlZGIzZmM3NDc2N2RmMGE1MTllNDhjNzcwMDFkMjJmZjUyMjU3NDFkNmmLaw==: --dhchap-ctrl-secret DHHC-1:01:OGU5ZmM5Y2I5YTEwYjRjMWMwYWU2NjE5OWRlNGFkMGT1IRUP: 00:21:38.096 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.096 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:38.096 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.096 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.096 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.096 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:38.096 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:38.096 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:38.096 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:38.096 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:38.096 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:38.096 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:38.096 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:38.096 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.096 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:21:38.096 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.096 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.096 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.096 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:38.096 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:38.096 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:38.665 00:21:38.665 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.665 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.665 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.665 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.665 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.665 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.665 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:38.665 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.665 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:38.665 { 00:21:38.665 "cntlid": 135, 00:21:38.665 "qid": 0, 00:21:38.665 "state": "enabled", 00:21:38.665 "thread": "nvmf_tgt_poll_group_000", 00:21:38.665 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:38.665 "listen_address": { 00:21:38.665 "trtype": "TCP", 00:21:38.665 "adrfam": "IPv4", 00:21:38.665 "traddr": "10.0.0.2", 00:21:38.665 "trsvcid": "4420" 00:21:38.665 }, 00:21:38.665 "peer_address": { 00:21:38.665 "trtype": "TCP", 00:21:38.665 "adrfam": "IPv4", 00:21:38.665 "traddr": "10.0.0.1", 00:21:38.665 "trsvcid": "39418" 00:21:38.665 }, 00:21:38.665 "auth": { 00:21:38.665 "state": "completed", 00:21:38.665 "digest": "sha512", 00:21:38.665 "dhgroup": "ffdhe6144" 00:21:38.665 } 00:21:38.665 } 00:21:38.665 ]' 00:21:38.665 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:38.665 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:38.665 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:38.924 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:38.924 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:38.924 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.924 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.924 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.924 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTcwNjZlY2QwZGI0NDRjMzZmN2ZiZGEyMTJkMzZhNDExMjcwNmJhZWJlOGJhOTFmZDc1ZjE2YjQ0ZjcxYTk1ZKgNbOo=: 00:21:38.924 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MTcwNjZlY2QwZGI0NDRjMzZmN2ZiZGEyMTJkMzZhNDExMjcwNmJhZWJlOGJhOTFmZDc1ZjE2YjQ0ZjcxYTk1ZKgNbOo=: 00:21:39.490 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.748 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.748 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:39.748 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.748 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.748 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.748 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:39.748 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:39.748 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:39.748 12:05:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:39.748 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:39.748 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:39.748 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:39.748 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:39.748 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:39.748 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.748 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:39.748 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.748 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.748 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.748 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:39.748 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:39.748 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.314 00:21:40.314 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:40.314 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.314 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:40.572 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.572 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.572 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.572 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.572 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.572 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:40.572 { 00:21:40.572 "cntlid": 137, 00:21:40.572 "qid": 0, 00:21:40.572 "state": "enabled", 00:21:40.572 "thread": "nvmf_tgt_poll_group_000", 00:21:40.572 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:40.572 "listen_address": { 00:21:40.572 "trtype": "TCP", 00:21:40.572 "adrfam": "IPv4", 00:21:40.572 "traddr": "10.0.0.2", 00:21:40.572 
"trsvcid": "4420" 00:21:40.572 }, 00:21:40.572 "peer_address": { 00:21:40.572 "trtype": "TCP", 00:21:40.572 "adrfam": "IPv4", 00:21:40.572 "traddr": "10.0.0.1", 00:21:40.572 "trsvcid": "39442" 00:21:40.572 }, 00:21:40.572 "auth": { 00:21:40.572 "state": "completed", 00:21:40.572 "digest": "sha512", 00:21:40.572 "dhgroup": "ffdhe8192" 00:21:40.572 } 00:21:40.572 } 00:21:40.572 ]' 00:21:40.572 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:40.572 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:40.572 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:40.572 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:40.572 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:40.572 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.572 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.572 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.831 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzBhNTQ2ZDZhMWI1MTVjN2FiOWRiZTc3OTk2MGRlM2M4NGM0NzIxMmRiNjJlMjcyIJ7+pQ==: --dhchap-ctrl-secret DHHC-1:03:YThmMWI5M2Q1NTA2ZGNlNmFiMmE4ZWFjMWQwODk0YmYyNzUzMDU4Yjc3Yjk5OGQwZjRjYWI4NWVhYWViYjFlOQ77tv4=: 00:21:40.831 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzBhNTQ2ZDZhMWI1MTVjN2FiOWRiZTc3OTk2MGRlM2M4NGM0NzIxMmRiNjJlMjcyIJ7+pQ==: --dhchap-ctrl-secret DHHC-1:03:YThmMWI5M2Q1NTA2ZGNlNmFiMmE4ZWFjMWQwODk0YmYyNzUzMDU4Yjc3Yjk5OGQwZjRjYWI4NWVhYWViYjFlOQ77tv4=: 00:21:41.402 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.402 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:41.402 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.402 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.402 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.402 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:41.402 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:41.402 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:41.661 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:41.661 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:41.661 12:05:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:41.661 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:41.661 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:41.661 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.661 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:41.661 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.661 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.661 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.661 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:41.661 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:41.661 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.225 00:21:42.225 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:42.225 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:42.225 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.225 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.225 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.225 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.225 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.225 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.226 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:42.226 { 00:21:42.226 "cntlid": 139, 00:21:42.226 "qid": 0, 00:21:42.226 "state": "enabled", 00:21:42.226 "thread": "nvmf_tgt_poll_group_000", 00:21:42.226 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:42.226 "listen_address": { 00:21:42.226 "trtype": "TCP", 00:21:42.226 "adrfam": "IPv4", 00:21:42.226 "traddr": "10.0.0.2", 00:21:42.226 "trsvcid": "4420" 00:21:42.226 }, 00:21:42.226 "peer_address": { 00:21:42.226 "trtype": "TCP", 00:21:42.226 "adrfam": "IPv4", 00:21:42.226 "traddr": "10.0.0.1", 00:21:42.226 "trsvcid": "39462" 00:21:42.226 }, 00:21:42.226 "auth": { 00:21:42.226 "state": "completed", 00:21:42.226 "digest": "sha512", 00:21:42.226 "dhgroup": "ffdhe8192" 00:21:42.226 } 00:21:42.226 } 00:21:42.226 ]' 00:21:42.226 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:42.483 12:05:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:42.483 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:42.483 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:42.483 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:42.483 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.483 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.483 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.741 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTVkZWRmNTk5NmJkNjFhMTAwMmQxNWVjNTc1YjI2NWFEY+3Z: --dhchap-ctrl-secret DHHC-1:02:NTJkY2U5NzdiNDZlNzhkNWQ1ZTgyZTIzN2U1ZjQyYjk5MDY0MTU1ZjE1ZTVmM2Exmzo2Ow==: 00:21:42.741 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTVkZWRmNTk5NmJkNjFhMTAwMmQxNWVjNTc1YjI2NWFEY+3Z: --dhchap-ctrl-secret DHHC-1:02:NTJkY2U5NzdiNDZlNzhkNWQ1ZTgyZTIzN2U1ZjQyYjk5MDY0MTU1ZjE1ZTVmM2Exmzo2Ow==: 00:21:43.305 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.305 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.305 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:43.305 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.305 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.305 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.305 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:43.305 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:43.305 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:43.564 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:43.564 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:43.564 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:43.564 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:43.564 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:43.564 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.564 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:21:43.564 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.564 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.564 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.564 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:43.564 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:43.564 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:43.822 00:21:43.822 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:43.822 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:43.822 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.080 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.080 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.080 12:05:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.080 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.080 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.080 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:44.080 { 00:21:44.080 "cntlid": 141, 00:21:44.080 "qid": 0, 00:21:44.080 "state": "enabled", 00:21:44.080 "thread": "nvmf_tgt_poll_group_000", 00:21:44.080 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:44.080 "listen_address": { 00:21:44.080 "trtype": "TCP", 00:21:44.080 "adrfam": "IPv4", 00:21:44.080 "traddr": "10.0.0.2", 00:21:44.080 "trsvcid": "4420" 00:21:44.080 }, 00:21:44.080 "peer_address": { 00:21:44.080 "trtype": "TCP", 00:21:44.080 "adrfam": "IPv4", 00:21:44.080 "traddr": "10.0.0.1", 00:21:44.080 "trsvcid": "39490" 00:21:44.080 }, 00:21:44.080 "auth": { 00:21:44.080 "state": "completed", 00:21:44.080 "digest": "sha512", 00:21:44.080 "dhgroup": "ffdhe8192" 00:21:44.080 } 00:21:44.080 } 00:21:44.080 ]' 00:21:44.080 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:44.080 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:44.080 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:44.339 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:44.339 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:44.339 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.339 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.339 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.339 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjZkNTg1ZGVlZGIzZmM3NDc2N2RmMGE1MTllNDhjNzcwMDFkMjJmZjUyMjU3NDFkNmmLaw==: --dhchap-ctrl-secret DHHC-1:01:OGU5ZmM5Y2I5YTEwYjRjMWMwYWU2NjE5OWRlNGFkMGT1IRUP: 00:21:44.339 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjZkNTg1ZGVlZGIzZmM3NDc2N2RmMGE1MTllNDhjNzcwMDFkMjJmZjUyMjU3NDFkNmmLaw==: --dhchap-ctrl-secret DHHC-1:01:OGU5ZmM5Y2I5YTEwYjRjMWMwYWU2NjE5OWRlNGFkMGT1IRUP: 00:21:44.907 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.907 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:44.907 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.907 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.907 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.907 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:44.907 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:44.907 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:45.166 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:45.166 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:45.166 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:45.166 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:45.166 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:45.166 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.166 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:21:45.166 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.166 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.166 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.166 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:45.167 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:45.167 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:45.733 00:21:45.733 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:45.733 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:45.733 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.992 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.992 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.992 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.992 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.992 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.992 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:45.992 { 00:21:45.992 "cntlid": 143, 00:21:45.992 "qid": 0, 00:21:45.992 "state": "enabled", 00:21:45.992 "thread": "nvmf_tgt_poll_group_000", 00:21:45.992 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:45.992 "listen_address": { 00:21:45.992 "trtype": "TCP", 00:21:45.992 "adrfam": 
"IPv4", 00:21:45.992 "traddr": "10.0.0.2", 00:21:45.992 "trsvcid": "4420" 00:21:45.992 }, 00:21:45.992 "peer_address": { 00:21:45.992 "trtype": "TCP", 00:21:45.992 "adrfam": "IPv4", 00:21:45.992 "traddr": "10.0.0.1", 00:21:45.992 "trsvcid": "39518" 00:21:45.992 }, 00:21:45.992 "auth": { 00:21:45.992 "state": "completed", 00:21:45.992 "digest": "sha512", 00:21:45.992 "dhgroup": "ffdhe8192" 00:21:45.992 } 00:21:45.992 } 00:21:45.992 ]' 00:21:45.992 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:45.992 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:45.992 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:45.992 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:45.992 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:45.992 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.992 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.992 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.250 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTcwNjZlY2QwZGI0NDRjMzZmN2ZiZGEyMTJkMzZhNDExMjcwNmJhZWJlOGJhOTFmZDc1ZjE2YjQ0ZjcxYTk1ZKgNbOo=: 00:21:46.250 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MTcwNjZlY2QwZGI0NDRjMzZmN2ZiZGEyMTJkMzZhNDExMjcwNmJhZWJlOGJhOTFmZDc1ZjE2YjQ0ZjcxYTk1ZKgNbOo=: 00:21:46.818 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.818 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:46.818 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.818 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.818 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.818 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:46.818 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:21:46.818 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:46.818 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:46.818 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:46.818 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:47.077 12:05:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:21:47.077 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:47.077 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:47.077 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:47.077 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:47.077 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.077 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.077 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.077 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.077 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.077 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.077 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.077 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.646 00:21:47.646 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:47.646 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:47.646 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.646 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.646 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.646 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.646 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.646 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.646 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:47.646 { 00:21:47.646 "cntlid": 145, 00:21:47.646 "qid": 0, 00:21:47.646 "state": "enabled", 00:21:47.646 "thread": "nvmf_tgt_poll_group_000", 00:21:47.646 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:47.646 "listen_address": { 00:21:47.646 "trtype": "TCP", 00:21:47.646 "adrfam": "IPv4", 00:21:47.646 "traddr": "10.0.0.2", 00:21:47.646 "trsvcid": "4420" 00:21:47.646 }, 00:21:47.646 "peer_address": { 00:21:47.646 "trtype": "TCP", 00:21:47.646 "adrfam": "IPv4", 00:21:47.646 "traddr": "10.0.0.1", 00:21:47.646 "trsvcid": "40624" 00:21:47.646 }, 00:21:47.646 "auth": { 00:21:47.646 "state": 
"completed", 00:21:47.646 "digest": "sha512", 00:21:47.646 "dhgroup": "ffdhe8192" 00:21:47.646 } 00:21:47.646 } 00:21:47.646 ]' 00:21:47.646 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:47.906 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:47.906 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:47.906 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:47.906 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:47.906 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.906 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.906 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.165 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzBhNTQ2ZDZhMWI1MTVjN2FiOWRiZTc3OTk2MGRlM2M4NGM0NzIxMmRiNjJlMjcyIJ7+pQ==: --dhchap-ctrl-secret DHHC-1:03:YThmMWI5M2Q1NTA2ZGNlNmFiMmE4ZWFjMWQwODk0YmYyNzUzMDU4Yjc3Yjk5OGQwZjRjYWI4NWVhYWViYjFlOQ77tv4=: 00:21:48.165 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzBhNTQ2ZDZhMWI1MTVjN2FiOWRiZTc3OTk2MGRlM2M4NGM0NzIxMmRiNjJlMjcyIJ7+pQ==: --dhchap-ctrl-secret 
DHHC-1:03:YThmMWI5M2Q1NTA2ZGNlNmFiMmE4ZWFjMWQwODk0YmYyNzUzMDU4Yjc3Yjk5OGQwZjRjYWI4NWVhYWViYjFlOQ77tv4=: 00:21:48.857 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.857 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:48.857 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.857 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.857 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.857 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:21:48.857 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.857 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.857 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.857 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:21:48.857 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:48.857 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:21:48.857 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:21:48.857 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:48.857 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:48.857 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:48.857 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:21:48.857 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:48.857 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:49.116 request: 00:21:49.116 { 00:21:49.116 "name": "nvme0", 00:21:49.116 "trtype": "tcp", 00:21:49.116 "traddr": "10.0.0.2", 00:21:49.116 "adrfam": "ipv4", 00:21:49.116 "trsvcid": "4420", 00:21:49.116 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:49.116 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:49.116 "prchk_reftag": false, 00:21:49.116 "prchk_guard": false, 00:21:49.116 "hdgst": false, 00:21:49.116 "ddgst": false, 00:21:49.116 "dhchap_key": "key2", 00:21:49.116 "allow_unrecognized_csi": false, 00:21:49.116 "method": "bdev_nvme_attach_controller", 00:21:49.116 "req_id": 1 00:21:49.116 } 00:21:49.116 Got JSON-RPC error response 00:21:49.116 response: 00:21:49.116 { 00:21:49.116 "code": -5, 00:21:49.116 "message": 
"Input/output error" 00:21:49.116 } 00:21:49.116 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:49.116 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:49.116 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:49.116 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:49.116 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:49.116 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.116 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.116 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.116 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.116 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.116 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.116 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.116 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:49.116 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:49.116 12:05:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:49.116 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:49.116 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:49.116 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:49.116 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:49.116 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:49.116 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:49.116 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:49.684 request: 00:21:49.684 { 00:21:49.684 "name": "nvme0", 00:21:49.684 "trtype": "tcp", 00:21:49.684 "traddr": "10.0.0.2", 00:21:49.684 "adrfam": "ipv4", 00:21:49.684 "trsvcid": "4420", 00:21:49.684 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:49.684 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:49.684 "prchk_reftag": false, 00:21:49.684 "prchk_guard": false, 00:21:49.684 "hdgst": 
false, 00:21:49.684 "ddgst": false, 00:21:49.684 "dhchap_key": "key1", 00:21:49.684 "dhchap_ctrlr_key": "ckey2", 00:21:49.684 "allow_unrecognized_csi": false, 00:21:49.684 "method": "bdev_nvme_attach_controller", 00:21:49.684 "req_id": 1 00:21:49.684 } 00:21:49.684 Got JSON-RPC error response 00:21:49.684 response: 00:21:49.684 { 00:21:49.684 "code": -5, 00:21:49.684 "message": "Input/output error" 00:21:49.684 } 00:21:49.684 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:49.684 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:49.684 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:49.684 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:49.684 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:49.684 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.684 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.684 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.684 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:21:49.684 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.684 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.684 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.684 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.684 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:49.684 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.684 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:49.684 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:49.684 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:49.684 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:49.684 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.684 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.684 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.943 request: 00:21:49.943 { 00:21:49.943 "name": "nvme0", 00:21:49.943 "trtype": 
"tcp", 00:21:49.943 "traddr": "10.0.0.2", 00:21:49.943 "adrfam": "ipv4", 00:21:49.943 "trsvcid": "4420", 00:21:49.943 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:49.943 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:49.943 "prchk_reftag": false, 00:21:49.943 "prchk_guard": false, 00:21:49.943 "hdgst": false, 00:21:49.943 "ddgst": false, 00:21:49.943 "dhchap_key": "key1", 00:21:49.943 "dhchap_ctrlr_key": "ckey1", 00:21:49.943 "allow_unrecognized_csi": false, 00:21:49.943 "method": "bdev_nvme_attach_controller", 00:21:49.943 "req_id": 1 00:21:49.943 } 00:21:49.943 Got JSON-RPC error response 00:21:49.943 response: 00:21:49.943 { 00:21:49.943 "code": -5, 00:21:49.943 "message": "Input/output error" 00:21:49.943 } 00:21:49.943 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:49.943 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:49.943 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:49.943 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:49.943 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:49.943 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.943 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.202 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.202 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 65763 00:21:50.202 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 65763 ']' 00:21:50.202 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 65763 00:21:50.202 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:50.202 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:50.202 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65763 00:21:50.202 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:50.202 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:50.202 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65763' 00:21:50.202 killing process with pid 65763 00:21:50.202 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 65763 00:21:50.202 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 65763 00:21:50.202 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:50.202 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:21:50.202 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:50.202 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.202 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # nvmfpid=87206 00:21:50.202 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # waitforlisten 87206 00:21:50.202 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:50.202 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 87206 ']' 00:21:50.202 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:50.202 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:50.202 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:50.202 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:50.202 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.462 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:50.462 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:50.462 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:21:50.462 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:50.462 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.462 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:50.462 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:50.462 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 87206 00:21:50.462 12:05:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 87206 ']' 00:21:50.462 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:50.462 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:50.462 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:50.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:50.462 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:50.462 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.722 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:50.722 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:50.722 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:21:50.722 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.722 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.722 null0 00:21:50.722 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.722 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:50.722 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Po1 00:21:50.722 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.722 12:05:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.981 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.981 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.sNs ]] 00:21:50.981 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sNs 00:21:50.981 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.981 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.981 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.981 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:50.981 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.sGX 00:21:50.981 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.981 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.981 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.981 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.eQ7 ]] 00:21:50.981 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.eQ7 00:21:50.981 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.981 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.981 12:05:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.981 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:50.982 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.GuI 00:21:50.982 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.982 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.982 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.982 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.EWe ]] 00:21:50.982 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.EWe 00:21:50.982 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.982 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.982 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.982 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:50.982 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.xIs 00:21:50.982 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.982 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.982 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.982 12:05:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:21:50.982 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:21:50.982 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:50.982 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:50.982 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:50.982 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:50.982 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.982 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:21:50.982 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.982 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.982 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.982 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:50.982 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:50.982 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:51.548 nvme0n1 00:21:51.548 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:51.548 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:51.548 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.806 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.806 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.806 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.806 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.806 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.806 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:51.806 { 00:21:51.806 "cntlid": 1, 00:21:51.806 "qid": 0, 00:21:51.806 "state": "enabled", 00:21:51.806 "thread": "nvmf_tgt_poll_group_000", 00:21:51.806 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:51.806 "listen_address": { 00:21:51.806 "trtype": "TCP", 00:21:51.806 "adrfam": "IPv4", 00:21:51.806 "traddr": "10.0.0.2", 00:21:51.806 "trsvcid": "4420" 00:21:51.806 }, 00:21:51.806 "peer_address": { 00:21:51.806 "trtype": "TCP", 00:21:51.806 "adrfam": "IPv4", 00:21:51.806 "traddr": "10.0.0.1", 00:21:51.806 "trsvcid": "40662" 00:21:51.806 }, 00:21:51.806 "auth": { 
00:21:51.806 "state": "completed", 00:21:51.806 "digest": "sha512", 00:21:51.806 "dhgroup": "ffdhe8192" 00:21:51.806 } 00:21:51.806 } 00:21:51.806 ]' 00:21:51.806 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:51.806 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:51.806 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:52.065 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:52.065 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:52.065 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.065 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.065 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.323 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTcwNjZlY2QwZGI0NDRjMzZmN2ZiZGEyMTJkMzZhNDExMjcwNmJhZWJlOGJhOTFmZDc1ZjE2YjQ0ZjcxYTk1ZKgNbOo=: 00:21:52.323 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MTcwNjZlY2QwZGI0NDRjMzZmN2ZiZGEyMTJkMzZhNDExMjcwNmJhZWJlOGJhOTFmZDc1ZjE2YjQ0ZjcxYTk1ZKgNbOo=: 00:21:52.892 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:21:52.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.892 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:52.892 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.892 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.892 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.892 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:21:52.892 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.892 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.892 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.892 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:52.892 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:52.892 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:52.892 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:52.892 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 
--dhchap-key key3 00:21:52.892 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:52.892 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:52.892 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:52.892 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:52.892 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:52.892 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:52.892 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:53.151 request: 00:21:53.151 { 00:21:53.151 "name": "nvme0", 00:21:53.151 "trtype": "tcp", 00:21:53.151 "traddr": "10.0.0.2", 00:21:53.151 "adrfam": "ipv4", 00:21:53.151 "trsvcid": "4420", 00:21:53.151 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:53.151 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:53.151 "prchk_reftag": false, 00:21:53.151 "prchk_guard": false, 00:21:53.151 "hdgst": false, 00:21:53.151 "ddgst": false, 00:21:53.151 "dhchap_key": "key3", 00:21:53.151 "allow_unrecognized_csi": false, 00:21:53.151 "method": "bdev_nvme_attach_controller", 00:21:53.151 "req_id": 1 00:21:53.151 } 
00:21:53.151 Got JSON-RPC error response 00:21:53.151 response: 00:21:53.151 { 00:21:53.151 "code": -5, 00:21:53.151 "message": "Input/output error" 00:21:53.151 } 00:21:53.151 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:53.151 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:53.151 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:53.151 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:53.151 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:21:53.151 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:21:53.151 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:53.151 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:53.410 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:53.410 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:53.410 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:53.410 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:53.410 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:53.410 12:05:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:53.410 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:53.410 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:53.410 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:53.411 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:53.669 request: 00:21:53.669 { 00:21:53.669 "name": "nvme0", 00:21:53.669 "trtype": "tcp", 00:21:53.669 "traddr": "10.0.0.2", 00:21:53.669 "adrfam": "ipv4", 00:21:53.669 "trsvcid": "4420", 00:21:53.669 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:53.669 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:53.669 "prchk_reftag": false, 00:21:53.669 "prchk_guard": false, 00:21:53.669 "hdgst": false, 00:21:53.669 "ddgst": false, 00:21:53.669 "dhchap_key": "key3", 00:21:53.669 "allow_unrecognized_csi": false, 00:21:53.669 "method": "bdev_nvme_attach_controller", 00:21:53.669 "req_id": 1 00:21:53.669 } 00:21:53.669 Got JSON-RPC error response 00:21:53.669 response: 00:21:53.669 { 00:21:53.669 "code": -5, 00:21:53.669 "message": "Input/output error" 00:21:53.669 } 00:21:53.669 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:53.669 12:05:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:53.669 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:53.669 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:53.669 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:53.669 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:21:53.669 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:53.669 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:53.669 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:53.669 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:53.927 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:53.927 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.927 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.928 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.928 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:53.928 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.928 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.928 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.928 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:53.928 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:53.928 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:53.928 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:53.928 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:53.928 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:53.928 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:53.928 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:53.928 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:53.928 12:05:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:54.187 request: 00:21:54.187 { 00:21:54.187 "name": "nvme0", 00:21:54.187 "trtype": "tcp", 00:21:54.187 "traddr": "10.0.0.2", 00:21:54.187 "adrfam": "ipv4", 00:21:54.187 "trsvcid": "4420", 00:21:54.187 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:54.187 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:54.187 "prchk_reftag": false, 00:21:54.187 "prchk_guard": false, 00:21:54.187 "hdgst": false, 00:21:54.187 "ddgst": false, 00:21:54.187 "dhchap_key": "key0", 00:21:54.187 "dhchap_ctrlr_key": "key1", 00:21:54.187 "allow_unrecognized_csi": false, 00:21:54.187 "method": "bdev_nvme_attach_controller", 00:21:54.187 "req_id": 1 00:21:54.187 } 00:21:54.187 Got JSON-RPC error response 00:21:54.187 response: 00:21:54.187 { 00:21:54.187 "code": -5, 00:21:54.187 "message": "Input/output error" 00:21:54.187 } 00:21:54.187 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:54.187 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:54.187 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:54.187 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:54.187 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:21:54.187 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:54.187 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:54.446 nvme0n1 00:21:54.446 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:21:54.446 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:21:54.446 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.704 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.704 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.704 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.704 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:21:54.705 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.705 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.705 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:21:54.705 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:54.705 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:54.705 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:55.642 nvme0n1 00:21:55.642 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:21:55.642 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:21:55.642 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.642 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.642 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:55.642 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.642 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.642 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:55.642 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:21:55.642 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:21:55.642 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.902 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.902 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NjZkNTg1ZGVlZGIzZmM3NDc2N2RmMGE1MTllNDhjNzcwMDFkMjJmZjUyMjU3NDFkNmmLaw==: --dhchap-ctrl-secret DHHC-1:03:MTcwNjZlY2QwZGI0NDRjMzZmN2ZiZGEyMTJkMzZhNDExMjcwNmJhZWJlOGJhOTFmZDc1ZjE2YjQ0ZjcxYTk1ZKgNbOo=: 00:21:55.902 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjZkNTg1ZGVlZGIzZmM3NDc2N2RmMGE1MTllNDhjNzcwMDFkMjJmZjUyMjU3NDFkNmmLaw==: --dhchap-ctrl-secret DHHC-1:03:MTcwNjZlY2QwZGI0NDRjMzZmN2ZiZGEyMTJkMzZhNDExMjcwNmJhZWJlOGJhOTFmZDc1ZjE2YjQ0ZjcxYTk1ZKgNbOo=: 00:21:56.470 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:21:56.470 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:21:56.470 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:21:56.470 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:21:56.470 12:05:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:21:56.470 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:21:56.470 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:21:56.470 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.470 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.729 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:21:56.729 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:56.729 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:21:56.729 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:56.729 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:56.729 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:56.729 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:56.729 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:56.729 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 
00:21:56.729 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:57.296 request: 00:21:57.296 { 00:21:57.296 "name": "nvme0", 00:21:57.296 "trtype": "tcp", 00:21:57.296 "traddr": "10.0.0.2", 00:21:57.296 "adrfam": "ipv4", 00:21:57.296 "trsvcid": "4420", 00:21:57.296 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:57.296 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:57.296 "prchk_reftag": false, 00:21:57.296 "prchk_guard": false, 00:21:57.296 "hdgst": false, 00:21:57.296 "ddgst": false, 00:21:57.296 "dhchap_key": "key1", 00:21:57.296 "allow_unrecognized_csi": false, 00:21:57.296 "method": "bdev_nvme_attach_controller", 00:21:57.296 "req_id": 1 00:21:57.296 } 00:21:57.296 Got JSON-RPC error response 00:21:57.296 response: 00:21:57.296 { 00:21:57.296 "code": -5, 00:21:57.296 "message": "Input/output error" 00:21:57.296 } 00:21:57.296 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:57.296 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:57.296 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:57.296 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:57.296 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:57.296 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:57.296 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:57.862 nvme0n1 00:21:57.862 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:21:57.862 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:21:57.862 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.120 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.120 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.120 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.378 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:58.378 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.378 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.378 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.378 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:21:58.378 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:58.378 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:58.635 nvme0n1 00:21:58.635 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:21:58.635 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:21:58.635 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.893 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.893 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.894 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.894 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:58.894 
12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.894 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.894 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.894 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:OTVkZWRmNTk5NmJkNjFhMTAwMmQxNWVjNTc1YjI2NWFEY+3Z: '' 2s 00:21:58.894 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:58.894 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:58.894 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:OTVkZWRmNTk5NmJkNjFhMTAwMmQxNWVjNTc1YjI2NWFEY+3Z: 00:21:58.894 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:21:58.894 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:58.894 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:58.894 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:OTVkZWRmNTk5NmJkNjFhMTAwMmQxNWVjNTc1YjI2NWFEY+3Z: ]] 00:21:58.894 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:OTVkZWRmNTk5NmJkNjFhMTAwMmQxNWVjNTc1YjI2NWFEY+3Z: 00:21:59.151 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:21:59.151 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:59.151 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:01.045 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:01.045 12:05:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:01.045 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:01.045 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:01.045 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:01.045 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:01.045 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:01.045 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:01.045 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.045 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.045 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.045 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NjZkNTg1ZGVlZGIzZmM3NDc2N2RmMGE1MTllNDhjNzcwMDFkMjJmZjUyMjU3NDFkNmmLaw==: 2s 00:22:01.045 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:01.045 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:01.045 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:01.045 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # 
ckey=DHHC-1:02:NjZkNTg1ZGVlZGIzZmM3NDc2N2RmMGE1MTllNDhjNzcwMDFkMjJmZjUyMjU3NDFkNmmLaw==: 00:22:01.045 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:01.045 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:01.046 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:01.046 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NjZkNTg1ZGVlZGIzZmM3NDc2N2RmMGE1MTllNDhjNzcwMDFkMjJmZjUyMjU3NDFkNmmLaw==: ]] 00:22:01.046 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NjZkNTg1ZGVlZGIzZmM3NDc2N2RmMGE1MTllNDhjNzcwMDFkMjJmZjUyMjU3NDFkNmmLaw==: 00:22:01.046 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:01.046 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:02.946 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:02.946 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:03.205 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:03.205 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:03.205 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:03.205 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:03.205 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:03.205 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.205 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.205 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:03.205 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.205 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.205 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.205 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:03.205 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:03.205 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:03.772 nvme0n1 00:22:03.772 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:03.772 12:05:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.772 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.772 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.772 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:03.772 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:04.339 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:04.339 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:04.339 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.598 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.598 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:04.598 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.598 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.598 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.598 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 
00:22:04.598 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:04.856 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:04.856 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:22:04.856 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.856 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.857 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:04.857 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.857 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.857 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.857 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:04.857 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:04.857 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:04.857 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:22:04.857 12:05:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:04.857 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:04.857 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:04.857 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:04.857 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:05.424 request: 00:22:05.424 { 00:22:05.424 "name": "nvme0", 00:22:05.424 "dhchap_key": "key1", 00:22:05.424 "dhchap_ctrlr_key": "key3", 00:22:05.424 "method": "bdev_nvme_set_keys", 00:22:05.424 "req_id": 1 00:22:05.424 } 00:22:05.424 Got JSON-RPC error response 00:22:05.424 response: 00:22:05.424 { 00:22:05.424 "code": -13, 00:22:05.424 "message": "Permission denied" 00:22:05.424 } 00:22:05.424 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:05.424 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:05.424 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:05.424 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:05.424 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:05.424 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:05.424 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.683 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:22:05.683 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:06.619 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:06.619 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:06.619 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.877 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:06.877 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:06.877 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.877 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.877 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.877 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:06.877 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 
--ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:06.877 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:07.444 nvme0n1 00:22:07.444 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:07.444 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.444 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.444 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.444 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:07.444 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:07.444 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:07.444 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:22:07.444 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:07.444 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:07.444 
12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:07.444 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:07.444 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:08.009 request: 00:22:08.009 { 00:22:08.009 "name": "nvme0", 00:22:08.009 "dhchap_key": "key2", 00:22:08.009 "dhchap_ctrlr_key": "key0", 00:22:08.009 "method": "bdev_nvme_set_keys", 00:22:08.009 "req_id": 1 00:22:08.009 } 00:22:08.009 Got JSON-RPC error response 00:22:08.009 response: 00:22:08.009 { 00:22:08.009 "code": -13, 00:22:08.009 "message": "Permission denied" 00:22:08.009 } 00:22:08.009 12:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:08.009 12:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:08.009 12:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:08.009 12:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:08.009 12:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:08.009 12:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:08.009 12:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.266 12:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:08.266 12:05:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:09.197 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:09.197 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:09.197 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.454 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:09.454 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:09.454 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:09.454 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 65883 00:22:09.454 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 65883 ']' 00:22:09.454 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 65883 00:22:09.454 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:09.454 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:09.454 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65883 00:22:09.454 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:09.454 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:09.454 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65883' 00:22:09.454 killing process with pid 
65883 00:22:09.454 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 65883 00:22:09.454 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 65883 00:22:09.712 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:09.712 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:22:09.712 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@99 -- # sync 00:22:09.712 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:22:09.712 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # set +e 00:22:09.712 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:22:09.712 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:22:09.712 rmmod nvme_tcp 00:22:09.712 rmmod nvme_fabrics 00:22:09.712 rmmod nvme_keyring 00:22:09.971 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:22:09.971 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # set -e 00:22:09.971 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # return 0 00:22:09.971 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # '[' -n 87206 ']' 00:22:09.971 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@337 -- # killprocess 87206 00:22:09.971 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 87206 ']' 00:22:09.971 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 87206 00:22:09.971 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:09.971 12:05:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:09.971 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87206 00:22:09.971 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:09.971 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:09.971 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87206' 00:22:09.971 killing process with pid 87206 00:22:09.971 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 87206 00:22:09.971 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 87206 00:22:09.971 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:22:09.971 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # nvmf_fini 00:22:09.971 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@264 -- # local dev 00:22:09.971 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@267 -- # remove_target_ns 00:22:09.971 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:22:09.972 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:22:09.972 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@268 -- # delete_main_bridge 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:22:12.510 12:05:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@130 -- # return 0 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:22:12.510 12:05:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@41 -- # _dev=0 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@41 -- # dev_map=() 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@284 -- # iptr 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@542 -- # iptables-save 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@542 -- # iptables-restore 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Po1 /tmp/spdk.key-sha256.sGX /tmp/spdk.key-sha384.GuI /tmp/spdk.key-sha512.xIs /tmp/spdk.key-sha512.sNs /tmp/spdk.key-sha384.eQ7 /tmp/spdk.key-sha256.EWe '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:12.510 00:22:12.510 real 2m31.810s 00:22:12.510 user 5m48.542s 00:22:12.510 sys 0m24.453s 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.510 ************************************ 00:22:12.510 END TEST nvmf_auth_target 00:22:12.510 ************************************ 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:12.510 ************************************ 00:22:12.510 START TEST nvmf_bdevio_no_huge 00:22:12.510 ************************************ 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:12.510 * Looking for test storage... 00:22:12.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
scripts/common.sh@337 -- # read -ra ver2 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:12.510 12:05:46 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:12.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.510 --rc genhtml_branch_coverage=1 00:22:12.510 --rc genhtml_function_coverage=1 00:22:12.510 --rc genhtml_legend=1 00:22:12.510 --rc geninfo_all_blocks=1 00:22:12.510 --rc geninfo_unexecuted_blocks=1 00:22:12.510 00:22:12.510 ' 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:12.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.510 --rc genhtml_branch_coverage=1 00:22:12.510 --rc genhtml_function_coverage=1 00:22:12.510 --rc genhtml_legend=1 00:22:12.510 --rc geninfo_all_blocks=1 00:22:12.510 --rc geninfo_unexecuted_blocks=1 00:22:12.510 00:22:12.510 ' 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:12.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.510 --rc genhtml_branch_coverage=1 00:22:12.510 --rc genhtml_function_coverage=1 00:22:12.510 --rc genhtml_legend=1 00:22:12.510 --rc geninfo_all_blocks=1 00:22:12.510 --rc geninfo_unexecuted_blocks=1 00:22:12.510 00:22:12.510 ' 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:12.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.510 --rc genhtml_branch_coverage=1 00:22:12.510 --rc genhtml_function_coverage=1 00:22:12.510 --rc genhtml_legend=1 00:22:12.510 --rc geninfo_all_blocks=1 00:22:12.510 --rc geninfo_unexecuted_blocks=1 00:22:12.510 00:22:12.510 ' 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:12.510 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@50 -- # : 0 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:22:12.511 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@54 -- # have_pci_nics=0 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # prepare_net_devs 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # local -g is_hw=no 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # remove_target_ns 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_target_ns 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # 
gather_supported_nvmf_pci_devs 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # xtrace_disable 00:22:12.511 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:19.084 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:19.084 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@131 -- # pci_devs=() 00:22:19.084 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@131 -- # local -a pci_devs 00:22:19.084 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@132 -- # pci_net_devs=() 00:22:19.084 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:22:19.084 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@133 -- # pci_drivers=() 00:22:19.084 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@133 -- # local -A pci_drivers 00:22:19.084 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@135 -- # net_devs=() 00:22:19.084 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@135 -- # local -ga net_devs 00:22:19.084 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@136 -- # e810=() 00:22:19.084 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@136 -- # local -ga e810 00:22:19.084 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@137 -- # x722=() 00:22:19.084 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@137 -- # local -ga x722 00:22:19.084 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@138 -- # mlx=() 00:22:19.084 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@138 -- # local -ga mlx 00:22:19.085 12:05:52 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # [[ e810 == mlx5 
]] 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:19.085 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:19.085 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.085 12:05:52 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # [[ up == up ]] 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:19.085 Found net devices under 0000:86:00.0: cvl_0_0 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # [[ up == up ]] 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:19.085 Found net devices under 0000:86:00.1: cvl_0_1 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # is_hw=yes 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@257 -- # create_target_ns 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@142 -- # local 
ns=nvmf_ns_spdk 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@27 -- # local -gA dev_map 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@28 -- # local -g _dev 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:22:19.085 12:05:52 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@44 -- # ips=() 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 
00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:22:19.085 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@11 -- # local val=167772161 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:22:19.086 10.0.0.1 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- 
# local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@11 -- # local val=167772162 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:22:19.086 10.0.0.2 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # eval ' 
ip link set cvl_0_0 up' 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:22:19.086 12:05:52 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@38 -- # ping_ips 1 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@107 -- # local dev=initiator0 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:22:19.086 12:05:52 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:22:19.086 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
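The set_ip calls traced above turn integer pool values into dotted-quad addresses (167772161 becomes 10.0.0.1 via `printf '%u.%u.%u.%u\n' 10 0 0 1`). A minimal re-implementation of that conversion, as a sketch in plain bash arithmetic (the helper name matches the `val_to_ip` seen in nvmf/setup.sh, but the body here is reconstructed from the trace, not copied from the SPDK source):

```shell
# Convert a 32-bit integer to a dotted-quad IPv4 address, mirroring the
# val_to_ip helper traced above (reconstructed sketch, not the SPDK source).
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 255 )) \
    $(( (val >> 16) & 255 )) \
    $(( (val >> 8)  & 255 )) \
    $((  val        & 255 ))
}

val_to_ip 167772161   # 0x0a000001 -> 10.0.0.1
val_to_ip 167772162   # 0x0a000002 -> 10.0.0.2
```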
00:22:19.086 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.378 ms 00:22:19.086 00:22:19.086 --- 10.0.0.1 ping statistics --- 00:22:19.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.086 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # get_net_dev target0 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@107 -- # local dev=target0 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # ip 
netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:22:19.086 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
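The setup_interfaces loop above starts from ip_pool=0x0a000001 and hands each initiator/target pair two consecutive addresses (`ips=("$ip" $((++ip)))`, then `(( ip_pool += 2 ))` per iteration), which is why this single-pair run yields 10.0.0.1 and 10.0.0.2. A sketch of that allocation stride, shown with two pairs to make the increment visible (this run only uses one; the `to_ip` helper is a hypothetical stand-in for val_to_ip):

```shell
# Sketch of the address-pair allocation traced above: each initiator/target
# pair consumes two consecutive addresses from the pool.
to_ip() { printf '%u.%u.%u.%u' $(( ($1>>24)&255 )) $(( ($1>>16)&255 )) \
                               $(( ($1>>8)&255 ))  $(( $1&255 )); }

ip_pool=$(( 0x0a000001 ))   # 167772161 -> 10.0.0.1
for (( pair = 0; pair < 2; pair++ )); do
  echo "pair$pair: initiator=$(to_ip "$ip_pool") target=$(to_ip $(( ip_pool + 1 )))"
  (( ip_pool += 2 ))
done
# pair0: initiator=10.0.0.1 target=10.0.0.2
# pair1: initiator=10.0.0.3 target=10.0.0.4
```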
00:22:19.086 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:22:19.086 00:22:19.086 --- 10.0.0.2 ping statistics --- 00:22:19.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.086 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # (( pair++ )) 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # return 0 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:22:19.086 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/setup.sh@107 -- # local dev=initiator0 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- nvmf/setup.sh@107 -- # local dev=initiator1 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # return 1 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # dev= 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@169 -- # return 0 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # get_net_dev target0 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@107 -- # local dev=target0 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:22:19.087 12:05:52 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # get_net_dev target1 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@107 -- # local dev=target1 00:22:19.087 
12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # return 1 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # dev= 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@169 -- # return 0 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # nvmfpid=94042 00:22:19.087 12:05:52 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # waitforlisten 94042 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 94042 ']' 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:19.087 [2024-12-05 12:05:52.653568] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:22:19.087 [2024-12-05 12:05:52.653616] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:19.087 [2024-12-05 12:05:52.720301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:19.087 [2024-12-05 12:05:52.766359] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:19.087 [2024-12-05 12:05:52.766399] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:19.087 [2024-12-05 12:05:52.766406] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:19.087 [2024-12-05 12:05:52.766412] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:19.087 [2024-12-05 12:05:52.766417] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:19.087 [2024-12-05 12:05:52.767655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:19.087 [2024-12-05 12:05:52.767766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:22:19.087 [2024-12-05 12:05:52.767871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:19.087 [2024-12-05 12:05:52.767873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:19.087 [2024-12-05 12:05:52.911114] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:19.087 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.088 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:19.088 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.088 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:19.088 Malloc0 00:22:19.088 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.088 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:19.088 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.088 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:19.088 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.088 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:19.088 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.088 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:19.088 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.088 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:19.088 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.088 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:19.088 [2024-12-05 12:05:52.955424] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:19.088 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.088 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:19.088 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:19.088 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # config=() 00:22:19.088 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # local subsystem config 00:22:19.088 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:22:19.088 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:22:19.088 { 00:22:19.088 "params": { 00:22:19.088 "name": "Nvme$subsystem", 00:22:19.088 "trtype": "$TEST_TRANSPORT", 00:22:19.088 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:19.088 "adrfam": "ipv4", 00:22:19.088 "trsvcid": "$NVMF_PORT", 00:22:19.088 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:19.088 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:19.088 "hdgst": ${hdgst:-false}, 00:22:19.088 "ddgst": ${ddgst:-false} 00:22:19.088 }, 00:22:19.088 "method": "bdev_nvme_attach_controller" 00:22:19.088 } 00:22:19.088 EOF 00:22:19.088 )") 00:22:19.088 12:05:52 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # cat 00:22:19.088 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@396 -- # jq . 00:22:19.088 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@397 -- # IFS=, 00:22:19.088 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:22:19.088 "params": { 00:22:19.088 "name": "Nvme1", 00:22:19.088 "trtype": "tcp", 00:22:19.088 "traddr": "10.0.0.2", 00:22:19.088 "adrfam": "ipv4", 00:22:19.088 "trsvcid": "4420", 00:22:19.088 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:19.088 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:19.088 "hdgst": false, 00:22:19.088 "ddgst": false 00:22:19.088 }, 00:22:19.088 "method": "bdev_nvme_attach_controller" 00:22:19.088 }' 00:22:19.088 [2024-12-05 12:05:53.008556] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:22:19.088 [2024-12-05 12:05:53.008600] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid94217 ] 00:22:19.088 [2024-12-05 12:05:53.085353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:19.088 [2024-12-05 12:05:53.134760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:19.088 [2024-12-05 12:05:53.134867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.088 [2024-12-05 12:05:53.134868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:19.346 I/O targets: 00:22:19.346 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:19.346 00:22:19.346 00:22:19.346 CUnit - A unit testing framework for C - Version 2.1-3 00:22:19.346 http://cunit.sourceforge.net/ 00:22:19.346 00:22:19.346 00:22:19.346 Suite: bdevio tests on: Nvme1n1 00:22:19.346 Test: blockdev write read block 
...passed 00:22:19.346 Test: blockdev write zeroes read block ...passed 00:22:19.346 Test: blockdev write zeroes read no split ...passed 00:22:19.604 Test: blockdev write zeroes read split ...passed 00:22:19.604 Test: blockdev write zeroes read split partial ...passed 00:22:19.604 Test: blockdev reset ...[2024-12-05 12:05:53.591735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:19.604 [2024-12-05 12:05:53.591798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21668e0 (9): Bad file descriptor 00:22:19.604 [2024-12-05 12:05:53.603502] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:22:19.604 passed 00:22:19.604 Test: blockdev write read 8 blocks ...passed 00:22:19.604 Test: blockdev write read size > 128k ...passed 00:22:19.604 Test: blockdev write read invalid size ...passed 00:22:19.604 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:19.604 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:19.604 Test: blockdev write read max offset ...passed 00:22:19.604 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:19.604 Test: blockdev writev readv 8 blocks ...passed 00:22:19.604 Test: blockdev writev readv 30 x 1block ...passed 00:22:19.862 Test: blockdev writev readv block ...passed 00:22:19.862 Test: blockdev writev readv size > 128k ...passed 00:22:19.862 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:19.862 Test: blockdev comparev and writev ...[2024-12-05 12:05:53.818127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:19.862 [2024-12-05 12:05:53.818156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:19.862 
[2024-12-05 12:05:53.818171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:19.862 [2024-12-05 12:05:53.818178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:19.862 [2024-12-05 12:05:53.818413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:19.862 [2024-12-05 12:05:53.818424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:19.862 [2024-12-05 12:05:53.818436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:19.862 [2024-12-05 12:05:53.818443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:19.862 [2024-12-05 12:05:53.818680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:19.862 [2024-12-05 12:05:53.818691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:19.862 [2024-12-05 12:05:53.818703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:19.862 [2024-12-05 12:05:53.818709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:19.862 [2024-12-05 12:05:53.818930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:19.862 [2024-12-05 12:05:53.818940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) 
qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:19.862 [2024-12-05 12:05:53.818951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:19.862 [2024-12-05 12:05:53.818959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:19.862 passed 00:22:19.862 Test: blockdev nvme passthru rw ...passed 00:22:19.862 Test: blockdev nvme passthru vendor specific ...[2024-12-05 12:05:53.901707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:19.862 [2024-12-05 12:05:53.901725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:19.862 [2024-12-05 12:05:53.901828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:19.862 [2024-12-05 12:05:53.901838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:19.862 [2024-12-05 12:05:53.901934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:19.862 [2024-12-05 12:05:53.901943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:19.862 [2024-12-05 12:05:53.902058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:19.862 [2024-12-05 12:05:53.902075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:19.862 passed 00:22:19.862 Test: blockdev nvme admin passthru ...passed 00:22:19.862 Test: blockdev copy ...passed 00:22:19.862 00:22:19.862 Run Summary: Type Total Ran 
Passed Failed Inactive 00:22:19.862 suites 1 1 n/a 0 0 00:22:19.862 tests 23 23 23 0 0 00:22:19.862 asserts 152 152 152 0 n/a 00:22:19.862 00:22:19.862 Elapsed time = 1.068 seconds 00:22:20.121 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:20.121 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.121 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:20.121 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.121 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:20.121 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:20.121 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # nvmfcleanup 00:22:20.121 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@99 -- # sync 00:22:20.121 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:22:20.121 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@102 -- # set +e 00:22:20.121 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@103 -- # for i in {1..20} 00:22:20.121 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:22:20.121 rmmod nvme_tcp 00:22:20.121 rmmod nvme_fabrics 00:22:20.121 rmmod nvme_keyring 00:22:20.121 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:22:20.121 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@106 -- # set -e 00:22:20.121 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@107 -- # 
return 0 00:22:20.121 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # '[' -n 94042 ']' 00:22:20.121 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@337 -- # killprocess 94042 00:22:20.121 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 94042 ']' 00:22:20.121 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 94042 00:22:20.121 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:22:20.121 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:20.121 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94042 00:22:20.379 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:22:20.379 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:22:20.379 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94042' 00:22:20.379 killing process with pid 94042 00:22:20.379 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 94042 00:22:20.380 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 94042 00:22:20.638 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:22:20.638 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # nvmf_fini 00:22:20.638 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@264 -- # local dev 00:22:20.638 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@267 -- # remove_target_ns 00:22:20.638 
12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:22:20.638 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:22:20.638 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_target_ns 00:22:22.539 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@268 -- # delete_main_bridge 00:22:22.539 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:22:22.539 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@130 -- # return 0 00:22:22.539 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:22:22.539 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:22:22.539 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:22:22.539 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:22:22.539 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:22:22.539 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:22:22.539 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:22:22.539 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:22:22.539 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:22:22.539 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:22:22.539 12:05:56 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:22:22.539 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:22:22.539 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:22:22.539 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:22:22.539 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:22:22.539 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:22:22.539 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:22:22.539 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@41 -- # _dev=0 00:22:22.539 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@41 -- # dev_map=() 00:22:22.539 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@284 -- # iptr 00:22:22.539 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@542 -- # iptables-save 00:22:22.539 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:22:22.539 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@542 -- # iptables-restore 00:22:22.539 00:22:22.539 real 0m10.408s 00:22:22.539 user 0m11.347s 00:22:22.539 sys 0m5.351s 00:22:22.539 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:22.539 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:22.539 ************************************ 00:22:22.539 END TEST nvmf_bdevio_no_huge 00:22:22.539 ************************************ 00:22:22.798 12:05:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # 
run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:22.798 12:05:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:22.798 12:05:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:22.798 12:05:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:22.798 ************************************ 00:22:22.798 START TEST nvmf_tls 00:22:22.798 ************************************ 00:22:22.798 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:22.798 * Looking for test storage... 00:22:22.798 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:22.798 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:22.798 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:22:22.798 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:22.798 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:22.798 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:22.798 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:22.798 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:22.798 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:22.798 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:22.798 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:22.798 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
scripts/common.sh@337 -- # read -ra ver2 00:22:22.798 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:22.798 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:22.798 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:22.798 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:22.798 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:22.798 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:22.798 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:22.798 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:22.798 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:22.798 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:22.798 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:22.798 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:22.798 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:22.798 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:22.798 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:22.798 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:22.798 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:22.798 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:22.798 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:22.798 
12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:22.798 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:22.798 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:22.798 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:22.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.798 --rc genhtml_branch_coverage=1 00:22:22.798 --rc genhtml_function_coverage=1 00:22:22.798 --rc genhtml_legend=1 00:22:22.798 --rc geninfo_all_blocks=1 00:22:22.798 --rc geninfo_unexecuted_blocks=1 00:22:22.798 00:22:22.798 ' 00:22:22.798 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:22.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.798 --rc genhtml_branch_coverage=1 00:22:22.798 --rc genhtml_function_coverage=1 00:22:22.798 --rc genhtml_legend=1 00:22:22.798 --rc geninfo_all_blocks=1 00:22:22.798 --rc geninfo_unexecuted_blocks=1 00:22:22.798 00:22:22.798 ' 00:22:22.798 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:22.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.798 --rc genhtml_branch_coverage=1 00:22:22.798 --rc genhtml_function_coverage=1 00:22:22.798 --rc genhtml_legend=1 00:22:22.798 --rc geninfo_all_blocks=1 00:22:22.798 --rc geninfo_unexecuted_blocks=1 00:22:22.798 00:22:22.798 ' 00:22:22.798 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:22.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.798 --rc genhtml_branch_coverage=1 00:22:22.799 --rc genhtml_function_coverage=1 00:22:22.799 --rc genhtml_legend=1 00:22:22.799 --rc geninfo_all_blocks=1 00:22:22.799 --rc 
geninfo_unexecuted_blocks=1 00:22:22.799 00:22:22.799 ' 00:22:22.799 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:22.799 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:22.799 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:22.799 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:22.799 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:22.799 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:22.799 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:22.799 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:22:22.799 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:22.799 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:22:22.799 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:22.799 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:22.799 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:22.799 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:22:22.799 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:22:22.799 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:22.799 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:22.799 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:22:22.799 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:22.799 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:22.799 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:22.799 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.799 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.799 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.799 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:22.799 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.799 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:22:22.799 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:22:22.799 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:22:22.799 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:22:22.799 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@50 -- # : 0 00:22:22.799 12:05:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:22:22.799 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:22:22.799 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:22:22.799 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:22.799 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:23.058 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:22:23.058 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:22:23.058 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:22:23.058 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:22:23.058 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@54 -- # have_pci_nics=0 00:22:23.058 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:23.058 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:23.058 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:22:23.058 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:23.058 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # prepare_net_devs 00:22:23.058 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # local -g is_hw=no 00:22:23.058 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # remove_target_ns 00:22:23.058 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:22:23.058 12:05:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:22:23.058 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_target_ns 00:22:23.058 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:22:23.058 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:22:23.058 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # xtrace_disable 00:22:23.058 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@131 -- # pci_devs=() 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@131 -- # local -a pci_devs 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@132 -- # pci_net_devs=() 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@133 -- # pci_drivers=() 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@133 -- # local -A pci_drivers 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@135 -- # net_devs=() 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@135 -- # local -ga net_devs 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@136 -- # e810=() 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@136 -- # local -ga e810 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@137 -- # x722=() 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@137 -- # local -ga x722 00:22:29.628 
12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@138 -- # mlx=() 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@138 -- # local -ga mlx 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:29.628 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:29.628 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # [[ up == up ]] 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:29.628 Found net devices under 0000:86:00.0: cvl_0_0 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@234 -- # [[ up == up ]] 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:29.628 Found net devices under 0000:86:00.1: cvl_0_1 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # is_hw=yes 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@257 -- # create_target_ns 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 
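The `create_target_ns` / `add_to_ns` sequence traced above reduces to a handful of ip(8) commands. A minimal dry-run sketch of that flow, printing the commands instead of executing them (the names `nvmf_ns_spdk` and `cvl_0_1` come from the log; the `run` wrapper and `DRY_RUN` switch are hypothetical helpers added here for illustration — executing for real requires root):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace setup traced in the log.
# Set DRY_RUN=0 (and run as root) to actually execute the commands.
DRY_RUN=${DRY_RUN:-1}

run() {
    # Hypothetical helper: print the command in dry-run mode, execute otherwise.
    if (( DRY_RUN )); then
        echo "$*"
    else
        "$@"
    fi
}

ns=nvmf_ns_spdk           # NVMF_TARGET_NAMESPACE in the log
target_dev=cvl_0_1        # target-side net device found under 0000:86:00.1

run ip netns add "$ns"                      # create_target_ns
run ip netns exec "$ns" ip link set lo up   # set_up lo inside the namespace
run ip link set "$target_dev" netns "$ns"   # add_to_ns: move target dev into ns
```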
00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@27 -- # local -gA dev_map 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@28 -- # local -g _dev 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@44 -- # ips=() 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@46 -- # local 
key_initiator=initiator0 key_target=target0 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:22:29.628 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@11 -- # local val=167772161 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:22:29.629 10.0.0.1 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@11 -- # local val=167772162 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 
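The `val_to_ip` helper traced above turns the integer from the IP pool (167772161, i.e. 0x0A000001) into dotted-quad form via `printf '%u.%u.%u.%u'`. The log shows only the helper's name and output, not its body, so the bit-shift implementation below is an assumption that reproduces the same big-endian octet order:

```shell
# Convert a 32-bit integer to a dotted-quad IPv4 address, matching the
# printf '%u.%u.%u.%u' output seen in the log. The shift-and-mask body
# is an assumption; only the name and behavior come from the trace.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >>  8) & 0xff )) \
        $((  val        & 0xff ))
}

val_to_ip 167772161   # 0x0A000001 -> 10.0.0.1
val_to_ip 167772162   # 0x0A000002 -> 10.0.0.2
```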
00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:22:29.629 10.0.0.2 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@81 -- # [[ tcp 
== tcp ]] 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@38 -- # ping_ips 1 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@107 -- # local dev=initiator0 00:22:29.629 
12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:22:29.629 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:29.629 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.406 ms 00:22:29.629 00:22:29.629 --- 10.0.0.1 ping statistics --- 00:22:29.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.629 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # get_net_dev target0 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@107 -- # local dev=target0 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # 
ip=10.0.0.2 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:22:29.629 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:29.629 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:22:29.629 00:22:29.629 --- 10.0.0.2 ping statistics --- 00:22:29.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.629 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:22:29.629 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # (( pair++ )) 00:22:29.630 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:22:29.630 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:29.630 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # return 0 00:22:29.630 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:22:29.630 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:22:29.630 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:22:29.630 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:22:29.630 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@334 -- 
# get_tcp_initiator_ip_address 00:22:29.630 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:22:29.630 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:22:29.630 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:22:29.630 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:29.630 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:22:29.630 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@107 -- # local dev=initiator0 00:22:29.630 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:22:29.630 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:22:29.630 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:22:29.630 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:22:29.630 12:06:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@107 -- # local dev=initiator1 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # return 1 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # dev= 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@169 -- # return 0 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # get_net_dev target0 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@107 
-- # local dev=target0 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # get_net_dev target1 00:22:29.630 12:06:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@107 -- # local dev=target1 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # return 1 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # dev= 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@169 -- # return 0 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=98125 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 98125 00:22:29.630 12:06:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 98125 ']' 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:29.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.630 [2024-12-05 12:06:03.140228] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:22:29.630 [2024-12-05 12:06:03.140272] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:29.630 [2024-12-05 12:06:03.219845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.630 [2024-12-05 12:06:03.259641] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:29.630 [2024-12-05 12:06:03.259678] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:29.630 [2024-12-05 12:06:03.259685] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:29.630 [2024-12-05 12:06:03.259691] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:29.630 [2024-12-05 12:06:03.259697] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:29.630 [2024-12-05 12:06:03.260246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:29.630 true 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@69 -- # jq -r .tls_version 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@69 -- # version=0 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # [[ 0 != \0 ]] 00:22:29.630 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@76 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:29.890 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:29.890 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@77 -- # jq -r .tls_version 00:22:30.149 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@77 -- # version=13 00:22:30.149 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@78 -- # [[ 13 != \1\3 ]] 00:22:30.149 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:30.149 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:30.149 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@85 -- # jq -r .tls_version 00:22:30.408 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@85 -- # version=7 00:22:30.408 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@86 -- # [[ 7 != \7 ]] 00:22:30.408 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:30.408 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@92 -- # jq -r .enable_ktls 00:22:30.667 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@92 -- # ktls=false 00:22:30.667 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@93 -- # [[ false != \f\a\l\s\e ]] 00:22:30.667 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:30.667 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@100 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:30.667 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@100 -- # jq -r .enable_ktls 00:22:30.926 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@100 -- # ktls=true 00:22:30.926 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@101 -- # [[ true != \t\r\u\e ]] 00:22:30.926 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:31.184 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:31.184 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@108 -- # jq -r .enable_ktls 00:22:31.443 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@108 -- # ktls=false 00:22:31.443 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@109 -- # [[ false != \f\a\l\s\e ]] 00:22:31.443 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:31.443 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:31.443 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # local prefix key digest 00:22:31.443 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:22:31.443 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff 00:22:31.443 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # digest=1 00:22:31.443 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # python - 00:22:31.443 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # 
key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:31.443 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@115 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:31.443 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:31.443 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # local prefix key digest 00:22:31.443 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:22:31.443 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # key=ffeeddccbbaa99887766554433221100 00:22:31.443 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # digest=1 00:22:31.443 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # python - 00:22:31.443 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@115 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:31.443 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@117 -- # mktemp 00:22:31.443 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@117 -- # key_path=/tmp/tmp.vT3c8gxpLj 00:22:31.443 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # mktemp 00:22:31.443 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key_2_path=/tmp/tmp.WEuLEE9iXB 00:22:31.443 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:31.443 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:31.443 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # chmod 0600 /tmp/tmp.vT3c8gxpLj 00:22:31.443 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # chmod 0600 /tmp/tmp.WEuLEE9iXB 
00:22:31.443 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:31.700 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:31.958 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # setup_nvmf_tgt /tmp/tmp.vT3c8gxpLj 00:22:31.958 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.vT3c8gxpLj 00:22:31.958 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:31.958 [2024-12-05 12:06:06.127716] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:31.958 12:06:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:32.216 12:06:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:32.474 [2024-12-05 12:06:06.488662] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:32.474 [2024-12-05 12:06:06.488880] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:32.474 12:06:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:32.733 malloc0 00:22:32.733 12:06:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:32.733 12:06:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.vT3c8gxpLj 00:22:32.991 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:33.249 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.vT3c8gxpLj 00:22:43.221 Initializing NVMe Controllers 00:22:43.221 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:43.221 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:43.221 Initialization complete. Launching workers. 
00:22:43.221 ======================================================== 00:22:43.221 Latency(us) 00:22:43.221 Device Information : IOPS MiB/s Average min max 00:22:43.221 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16914.30 66.07 3783.86 844.31 5536.86 00:22:43.221 ======================================================== 00:22:43.221 Total : 16914.30 66.07 3783.86 844.31 5536.86 00:22:43.221 00:22:43.221 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@139 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vT3c8gxpLj 00:22:43.221 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:43.221 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:43.221 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:43.221 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.vT3c8gxpLj 00:22:43.221 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:43.221 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=100861 00:22:43.221 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:43.221 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:43.222 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 100861 /var/tmp/bdevperf.sock 00:22:43.222 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 100861 ']' 00:22:43.222 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:22:43.222 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:43.222 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:43.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:43.222 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:43.222 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:43.222 [2024-12-05 12:06:17.408442] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:22:43.222 [2024-12-05 12:06:17.408491] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100861 ] 00:22:43.505 [2024-12-05 12:06:17.482772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.505 [2024-12-05 12:06:17.524216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:43.505 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:43.505 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:43.505 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vT3c8gxpLj 00:22:43.788 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:22:44.099 [2024-12-05 12:06:17.980308] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:44.099 TLSTESTn1 00:22:44.099 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:44.099 Running I/O for 10 seconds... 00:22:45.973 5186.00 IOPS, 20.26 MiB/s [2024-12-05T11:06:21.546Z] 5341.50 IOPS, 20.87 MiB/s [2024-12-05T11:06:22.502Z] 5420.67 IOPS, 21.17 MiB/s [2024-12-05T11:06:23.440Z] 5455.25 IOPS, 21.31 MiB/s [2024-12-05T11:06:24.377Z] 5487.80 IOPS, 21.44 MiB/s [2024-12-05T11:06:25.321Z] 5506.00 IOPS, 21.51 MiB/s [2024-12-05T11:06:26.257Z] 5521.43 IOPS, 21.57 MiB/s [2024-12-05T11:06:27.193Z] 5532.25 IOPS, 21.61 MiB/s [2024-12-05T11:06:28.569Z] 5540.22 IOPS, 21.64 MiB/s [2024-12-05T11:06:28.569Z] 5539.00 IOPS, 21.64 MiB/s 00:22:54.373 Latency(us) 00:22:54.373 [2024-12-05T11:06:28.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.373 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:54.373 Verification LBA range: start 0x0 length 0x2000 00:22:54.373 TLSTESTn1 : 10.01 5544.25 21.66 0.00 0.00 23053.70 5398.92 42692.02 00:22:54.373 [2024-12-05T11:06:28.569Z] =================================================================================================================== 00:22:54.373 [2024-12-05T11:06:28.569Z] Total : 5544.25 21.66 0.00 0.00 23053.70 5398.92 42692.02 00:22:54.373 { 00:22:54.373 "results": [ 00:22:54.373 { 00:22:54.373 "job": "TLSTESTn1", 00:22:54.373 "core_mask": "0x4", 00:22:54.373 "workload": "verify", 00:22:54.373 "status": "finished", 00:22:54.373 "verify_range": { 00:22:54.373 "start": 0, 00:22:54.373 "length": 8192 00:22:54.373 }, 00:22:54.373 "queue_depth": 128, 00:22:54.373 "io_size": 4096, 00:22:54.373 "runtime": 10.013434, 00:22:54.373 "iops": 
5544.25185206194, 00:22:54.373 "mibps": 21.65723379711695, 00:22:54.373 "io_failed": 0, 00:22:54.373 "io_timeout": 0, 00:22:54.373 "avg_latency_us": 23053.69867241008, 00:22:54.373 "min_latency_us": 5398.918095238095, 00:22:54.373 "max_latency_us": 42692.02285714286 00:22:54.373 } 00:22:54.373 ], 00:22:54.373 "core_count": 1 00:22:54.373 } 00:22:54.373 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:54.373 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 100861 00:22:54.373 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 100861 ']' 00:22:54.373 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 100861 00:22:54.373 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:54.373 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:54.373 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100861 00:22:54.373 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:54.373 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:54.374 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100861' 00:22:54.374 killing process with pid 100861 00:22:54.374 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 100861 00:22:54.374 Received shutdown signal, test time was about 10.000000 seconds 00:22:54.374 00:22:54.374 Latency(us) 00:22:54.374 [2024-12-05T11:06:28.570Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.374 [2024-12-05T11:06:28.570Z] 
=================================================================================================================== 00:22:54.374 [2024-12-05T11:06:28.570Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:54.374 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 100861 00:22:54.374 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@142 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WEuLEE9iXB 00:22:54.374 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:54.374 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WEuLEE9iXB 00:22:54.374 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:54.374 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:54.374 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:54.374 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:54.374 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WEuLEE9iXB 00:22:54.374 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:54.374 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:54.374 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:54.374 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.WEuLEE9iXB 00:22:54.374 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:54.374 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=102704 00:22:54.374 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:54.374 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:54.374 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 102704 /var/tmp/bdevperf.sock 00:22:54.374 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 102704 ']' 00:22:54.374 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:54.374 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:54.374 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:54.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:54.374 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:54.374 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:54.374 [2024-12-05 12:06:28.479064] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:22:54.374 [2024-12-05 12:06:28.479112] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102704 ] 00:22:54.374 [2024-12-05 12:06:28.542524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.633 [2024-12-05 12:06:28.585064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:54.633 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:54.633 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:54.633 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WEuLEE9iXB 00:22:54.891 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:54.891 [2024-12-05 12:06:29.029498] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:54.891 [2024-12-05 12:06:29.034141] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:54.891 [2024-12-05 12:06:29.034764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x117a1a0 (107): Transport endpoint is not connected 00:22:54.891 [2024-12-05 12:06:29.035757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x117a1a0 (9): Bad file descriptor 00:22:54.891 
[2024-12-05 12:06:29.036759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:22:54.891 [2024-12-05 12:06:29.036772] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:54.891 [2024-12-05 12:06:29.036780] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:54.891 [2024-12-05 12:06:29.036788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:22:54.891 request: 00:22:54.891 { 00:22:54.891 "name": "TLSTEST", 00:22:54.891 "trtype": "tcp", 00:22:54.891 "traddr": "10.0.0.2", 00:22:54.891 "adrfam": "ipv4", 00:22:54.891 "trsvcid": "4420", 00:22:54.891 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.891 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:54.891 "prchk_reftag": false, 00:22:54.891 "prchk_guard": false, 00:22:54.891 "hdgst": false, 00:22:54.891 "ddgst": false, 00:22:54.891 "psk": "key0", 00:22:54.891 "allow_unrecognized_csi": false, 00:22:54.891 "method": "bdev_nvme_attach_controller", 00:22:54.891 "req_id": 1 00:22:54.891 } 00:22:54.891 Got JSON-RPC error response 00:22:54.891 response: 00:22:54.891 { 00:22:54.891 "code": -5, 00:22:54.891 "message": "Input/output error" 00:22:54.891 } 00:22:54.891 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 102704 00:22:54.891 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 102704 ']' 00:22:54.891 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 102704 00:22:54.891 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:54.891 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:54.891 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102704 00:22:55.151 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:55.151 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:55.151 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102704' 00:22:55.151 killing process with pid 102704 00:22:55.151 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 102704 00:22:55.151 Received shutdown signal, test time was about 10.000000 seconds 00:22:55.151 00:22:55.151 Latency(us) 00:22:55.151 [2024-12-05T11:06:29.347Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:55.151 [2024-12-05T11:06:29.347Z] =================================================================================================================== 00:22:55.151 [2024-12-05T11:06:29.347Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:55.151 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 102704 00:22:55.151 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:55.151 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:55.151 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:55.151 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:55.151 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:55.151 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@145 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.vT3c8gxpLj 00:22:55.151 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:22:55.151 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.vT3c8gxpLj 00:22:55.151 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:55.151 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:55.151 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:55.151 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:55.151 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.vT3c8gxpLj 00:22:55.151 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:55.151 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:55.151 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:55.151 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.vT3c8gxpLj 00:22:55.151 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:55.151 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=102717 00:22:55.151 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:55.151 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:55.151 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 102717 
/var/tmp/bdevperf.sock 00:22:55.151 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 102717 ']' 00:22:55.151 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:55.151 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:55.151 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:55.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:55.151 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:55.151 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.151 [2024-12-05 12:06:29.306576] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:22:55.151 [2024-12-05 12:06:29.306625] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102717 ] 00:22:55.410 [2024-12-05 12:06:29.385382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.410 [2024-12-05 12:06:29.422697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:55.410 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:55.410 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:55.410 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vT3c8gxpLj 00:22:55.668 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:22:55.927 [2024-12-05 12:06:29.906006] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:55.927 [2024-12-05 12:06:29.915174] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:55.927 [2024-12-05 12:06:29.915195] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:55.927 [2024-12-05 12:06:29.915224] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:22:55.927 [2024-12-05 12:06:29.915369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1e1a0 (107): Transport endpoint is not connected 00:22:55.927 [2024-12-05 12:06:29.916360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1e1a0 (9): Bad file descriptor 00:22:55.927 [2024-12-05 12:06:29.917362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:22:55.927 [2024-12-05 12:06:29.917375] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:55.927 [2024-12-05 12:06:29.917382] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:55.927 [2024-12-05 12:06:29.917390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:22:55.927 request: 00:22:55.927 { 00:22:55.927 "name": "TLSTEST", 00:22:55.927 "trtype": "tcp", 00:22:55.927 "traddr": "10.0.0.2", 00:22:55.927 "adrfam": "ipv4", 00:22:55.927 "trsvcid": "4420", 00:22:55.927 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:55.927 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:55.927 "prchk_reftag": false, 00:22:55.927 "prchk_guard": false, 00:22:55.927 "hdgst": false, 00:22:55.927 "ddgst": false, 00:22:55.927 "psk": "key0", 00:22:55.927 "allow_unrecognized_csi": false, 00:22:55.927 "method": "bdev_nvme_attach_controller", 00:22:55.927 "req_id": 1 00:22:55.927 } 00:22:55.927 Got JSON-RPC error response 00:22:55.927 response: 00:22:55.927 { 00:22:55.927 "code": -5, 00:22:55.927 "message": "Input/output error" 00:22:55.927 } 00:22:55.927 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 102717 00:22:55.927 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 102717 ']' 00:22:55.927 12:06:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 102717 00:22:55.927 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:55.927 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:55.927 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102717 00:22:55.927 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:55.927 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:55.927 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102717' 00:22:55.927 killing process with pid 102717 00:22:55.927 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 102717 00:22:55.927 Received shutdown signal, test time was about 10.000000 seconds 00:22:55.927 00:22:55.927 Latency(us) 00:22:55.927 [2024-12-05T11:06:30.123Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:55.927 [2024-12-05T11:06:30.123Z] =================================================================================================================== 00:22:55.927 [2024-12-05T11:06:30.123Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:55.927 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 102717 00:22:56.186 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:56.186 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:56.186 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:56.186 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:56.186 12:06:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:56.186 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@148 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.vT3c8gxpLj 00:22:56.186 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:56.186 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.vT3c8gxpLj 00:22:56.186 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:56.186 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:56.186 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:56.186 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:56.186 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.vT3c8gxpLj 00:22:56.186 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:56.186 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:56.186 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:56.186 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.vT3c8gxpLj 00:22:56.186 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:56.186 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=102952 00:22:56.186 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:56.186 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:56.186 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 102952 /var/tmp/bdevperf.sock 00:22:56.186 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 102952 ']' 00:22:56.186 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:56.186 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:56.186 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:56.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:56.186 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:56.186 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:56.186 [2024-12-05 12:06:30.197674] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:22:56.186 [2024-12-05 12:06:30.197722] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102952 ] 00:22:56.186 [2024-12-05 12:06:30.267123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.186 [2024-12-05 12:06:30.305039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:56.444 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:56.444 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:56.444 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vT3c8gxpLj 00:22:56.444 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:56.702 [2024-12-05 12:06:30.761088] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:56.702 [2024-12-05 12:06:30.767352] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:56.702 [2024-12-05 12:06:30.767379] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:56.702 [2024-12-05 12:06:30.767401] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:22:56.702 [2024-12-05 12:06:30.767493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb501a0 (107): Transport endpoint is not connected 00:22:56.702 [2024-12-05 12:06:30.768478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb501a0 (9): Bad file descriptor 00:22:56.702 [2024-12-05 12:06:30.769479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:22:56.702 [2024-12-05 12:06:30.769495] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:56.702 [2024-12-05 12:06:30.769501] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:22:56.702 [2024-12-05 12:06:30.769509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:22:56.702 request: 00:22:56.702 { 00:22:56.702 "name": "TLSTEST", 00:22:56.702 "trtype": "tcp", 00:22:56.702 "traddr": "10.0.0.2", 00:22:56.702 "adrfam": "ipv4", 00:22:56.702 "trsvcid": "4420", 00:22:56.702 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:56.702 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:56.702 "prchk_reftag": false, 00:22:56.702 "prchk_guard": false, 00:22:56.702 "hdgst": false, 00:22:56.702 "ddgst": false, 00:22:56.702 "psk": "key0", 00:22:56.702 "allow_unrecognized_csi": false, 00:22:56.702 "method": "bdev_nvme_attach_controller", 00:22:56.702 "req_id": 1 00:22:56.702 } 00:22:56.702 Got JSON-RPC error response 00:22:56.702 response: 00:22:56.702 { 00:22:56.702 "code": -5, 00:22:56.702 "message": "Input/output error" 00:22:56.702 } 00:22:56.702 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 102952 00:22:56.702 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 102952 ']' 00:22:56.702 12:06:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 102952 00:22:56.702 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:56.702 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:56.702 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102952 00:22:56.702 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:56.702 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:56.702 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102952' 00:22:56.702 killing process with pid 102952 00:22:56.702 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 102952 00:22:56.702 Received shutdown signal, test time was about 10.000000 seconds 00:22:56.702 00:22:56.702 Latency(us) 00:22:56.702 [2024-12-05T11:06:30.898Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.702 [2024-12-05T11:06:30.898Z] =================================================================================================================== 00:22:56.702 [2024-12-05T11:06:30.898Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:56.702 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 102952 00:22:56.960 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:56.960 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:56.960 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:56.960 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:56.960 12:06:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:56.960 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@151 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:56.960 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:56.960 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:56.960 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:56.960 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:56.960 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:56.960 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:56.960 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:56.960 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:56.960 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:56.960 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:56.960 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:56.960 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:56.960 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:56.960 12:06:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=103073 00:22:56.960 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:56.960 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 103073 /var/tmp/bdevperf.sock 00:22:56.960 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 103073 ']' 00:22:56.960 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:56.960 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:56.960 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:56.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:56.960 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:56.960 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:56.960 [2024-12-05 12:06:31.040394] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:22:56.960 [2024-12-05 12:06:31.040442] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103073 ] 00:22:56.960 [2024-12-05 12:06:31.113940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.960 [2024-12-05 12:06:31.156166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:57.218 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:57.218 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:57.218 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:22:57.476 [2024-12-05 12:06:31.418719] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:22:57.476 [2024-12-05 12:06:31.418753] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:57.476 request: 00:22:57.476 { 00:22:57.476 "name": "key0", 00:22:57.476 "path": "", 00:22:57.476 "method": "keyring_file_add_key", 00:22:57.476 "req_id": 1 00:22:57.476 } 00:22:57.476 Got JSON-RPC error response 00:22:57.476 response: 00:22:57.476 { 00:22:57.476 "code": -1, 00:22:57.476 "message": "Operation not permitted" 00:22:57.476 } 00:22:57.476 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:57.476 [2024-12-05 12:06:31.607292] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:22:57.476 [2024-12-05 12:06:31.607323] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:22:57.476 request: 00:22:57.476 { 00:22:57.476 "name": "TLSTEST", 00:22:57.476 "trtype": "tcp", 00:22:57.476 "traddr": "10.0.0.2", 00:22:57.476 "adrfam": "ipv4", 00:22:57.476 "trsvcid": "4420", 00:22:57.476 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.476 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:57.476 "prchk_reftag": false, 00:22:57.476 "prchk_guard": false, 00:22:57.476 "hdgst": false, 00:22:57.476 "ddgst": false, 00:22:57.476 "psk": "key0", 00:22:57.476 "allow_unrecognized_csi": false, 00:22:57.476 "method": "bdev_nvme_attach_controller", 00:22:57.476 "req_id": 1 00:22:57.477 } 00:22:57.477 Got JSON-RPC error response 00:22:57.477 response: 00:22:57.477 { 00:22:57.477 "code": -126, 00:22:57.477 "message": "Required key not available" 00:22:57.477 } 00:22:57.477 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 103073 00:22:57.477 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 103073 ']' 00:22:57.477 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 103073 00:22:57.477 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:57.477 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:57.477 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103073 00:22:57.735 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:57.735 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:57.735 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103073' 00:22:57.735 killing process with pid 103073 00:22:57.735 
12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 103073 00:22:57.735 Received shutdown signal, test time was about 10.000000 seconds 00:22:57.735 00:22:57.735 Latency(us) 00:22:57.735 [2024-12-05T11:06:31.931Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.735 [2024-12-05T11:06:31.931Z] =================================================================================================================== 00:22:57.735 [2024-12-05T11:06:31.931Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:57.735 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 103073 00:22:57.735 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:57.735 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:57.735 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:57.735 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:57.735 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:57.735 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@154 -- # killprocess 98125 00:22:57.735 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 98125 ']' 00:22:57.735 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 98125 00:22:57.735 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:57.735 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:57.735 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98125 00:22:57.735 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:22:57.735 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:57.735 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98125' 00:22:57.735 killing process with pid 98125 00:22:57.735 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 98125 00:22:57.735 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 98125 00:22:57.994 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:57.994 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:57.994 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # local prefix key digest 00:22:57.994 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:22:57.994 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:57.994 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # digest=2 00:22:57.994 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # python - 00:22:57.994 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:57.994 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # mktemp 00:22:57.994 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # key_long_path=/tmp/tmp.paq7Z4SMlu 00:22:57.994 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@157 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:57.994 12:06:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # chmod 0600 /tmp/tmp.paq7Z4SMlu 00:22:57.994 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # nvmfappstart -m 0x2 00:22:57.994 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:22:57.994 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:57.994 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:57.994 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=103219 00:22:57.994 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:57.994 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 103219 00:22:57.994 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 103219 ']' 00:22:57.994 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.994 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:57.994 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:57.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:57.994 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:57.994 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:57.994 [2024-12-05 12:06:32.147651] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:22:57.994 [2024-12-05 12:06:32.147696] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:58.252 [2024-12-05 12:06:32.223182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.252 [2024-12-05 12:06:32.259037] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:58.252 [2024-12-05 12:06:32.259074] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:58.252 [2024-12-05 12:06:32.259080] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:58.252 [2024-12-05 12:06:32.259089] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:58.252 [2024-12-05 12:06:32.259094] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:58.252 [2024-12-05 12:06:32.259668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:58.252 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:58.252 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:58.253 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:22:58.253 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:58.253 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:58.253 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:58.253 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # setup_nvmf_tgt /tmp/tmp.paq7Z4SMlu 00:22:58.253 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.paq7Z4SMlu 00:22:58.253 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:58.510 [2024-12-05 12:06:32.572089] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:58.510 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:58.768 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:58.768 [2024-12-05 12:06:32.965094] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:58.768 [2024-12-05 12:06:32.965313] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:22:59.026 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:59.026 malloc0 00:22:59.026 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:59.285 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.paq7Z4SMlu 00:22:59.544 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:59.544 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.paq7Z4SMlu 00:22:59.544 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:59.544 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:59.544 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:59.544 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.paq7Z4SMlu 00:22:59.544 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:59.544 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=103528 00:22:59.544 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:59.544 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:59.544 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 103528 /var/tmp/bdevperf.sock 00:22:59.544 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 103528 ']' 00:22:59.544 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:59.544 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:59.544 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:59.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:59.803 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:59.803 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:59.803 [2024-12-05 12:06:33.784056] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:22:59.803 [2024-12-05 12:06:33.784106] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103528 ] 00:22:59.803 [2024-12-05 12:06:33.860997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.803 [2024-12-05 12:06:33.901374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:59.803 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:59.803 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:59.803 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.paq7Z4SMlu 00:23:00.062 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:00.321 [2024-12-05 12:06:34.357228] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:00.321 TLSTESTn1 00:23:00.321 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:00.580 Running I/O for 10 seconds... 
00:23:02.451 5390.00 IOPS, 21.05 MiB/s [2024-12-05T11:06:37.584Z] 5444.50 IOPS, 21.27 MiB/s [2024-12-05T11:06:38.962Z] 5532.67 IOPS, 21.61 MiB/s [2024-12-05T11:06:39.898Z] 5563.75 IOPS, 21.73 MiB/s [2024-12-05T11:06:40.834Z] 5573.60 IOPS, 21.77 MiB/s [2024-12-05T11:06:41.770Z] 5559.00 IOPS, 21.71 MiB/s [2024-12-05T11:06:42.706Z] 5563.86 IOPS, 21.73 MiB/s [2024-12-05T11:06:43.643Z] 5565.62 IOPS, 21.74 MiB/s [2024-12-05T11:06:44.579Z] 5564.00 IOPS, 21.73 MiB/s [2024-12-05T11:06:44.579Z] 5540.30 IOPS, 21.64 MiB/s 00:23:10.383 Latency(us) 00:23:10.383 [2024-12-05T11:06:44.579Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.383 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:10.383 Verification LBA range: start 0x0 length 0x2000 00:23:10.383 TLSTESTn1 : 10.01 5545.37 21.66 0.00 0.00 23048.16 5617.37 23717.79 00:23:10.383 [2024-12-05T11:06:44.579Z] =================================================================================================================== 00:23:10.383 [2024-12-05T11:06:44.579Z] Total : 5545.37 21.66 0.00 0.00 23048.16 5617.37 23717.79 00:23:10.383 { 00:23:10.383 "results": [ 00:23:10.383 { 00:23:10.383 "job": "TLSTESTn1", 00:23:10.383 "core_mask": "0x4", 00:23:10.383 "workload": "verify", 00:23:10.383 "status": "finished", 00:23:10.383 "verify_range": { 00:23:10.383 "start": 0, 00:23:10.383 "length": 8192 00:23:10.383 }, 00:23:10.383 "queue_depth": 128, 00:23:10.383 "io_size": 4096, 00:23:10.383 "runtime": 10.013759, 00:23:10.383 "iops": 5545.370125244676, 00:23:10.383 "mibps": 21.661602051737017, 00:23:10.383 "io_failed": 0, 00:23:10.383 "io_timeout": 0, 00:23:10.383 "avg_latency_us": 23048.163092348197, 00:23:10.383 "min_latency_us": 5617.371428571429, 00:23:10.383 "max_latency_us": 23717.790476190476 00:23:10.383 } 00:23:10.383 ], 00:23:10.383 "core_count": 1 00:23:10.383 } 00:23:10.642 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:23:10.642 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 103528 00:23:10.642 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 103528 ']' 00:23:10.642 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 103528 00:23:10.642 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:10.642 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:10.642 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103528 00:23:10.642 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:10.642 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:10.642 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103528' 00:23:10.642 killing process with pid 103528 00:23:10.642 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 103528 00:23:10.643 Received shutdown signal, test time was about 10.000000 seconds 00:23:10.643 00:23:10.643 Latency(us) 00:23:10.643 [2024-12-05T11:06:44.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.643 [2024-12-05T11:06:44.839Z] =================================================================================================================== 00:23:10.643 [2024-12-05T11:06:44.839Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:10.643 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 103528 00:23:10.643 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # chmod 0666 /tmp/tmp.paq7Z4SMlu 00:23:10.643 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 
-- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.paq7Z4SMlu 00:23:10.643 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:10.643 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.paq7Z4SMlu 00:23:10.643 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:10.643 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:10.643 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:10.643 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:10.643 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.paq7Z4SMlu 00:23:10.643 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:10.643 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:10.643 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:10.643 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.paq7Z4SMlu 00:23:10.643 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:10.643 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=105309 00:23:10.643 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:10.643 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:10.643 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 105309 /var/tmp/bdevperf.sock 00:23:10.643 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 105309 ']' 00:23:10.643 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:10.643 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:10.643 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:10.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:10.643 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:10.643 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.903 [2024-12-05 12:06:44.859094] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:23:10.903 [2024-12-05 12:06:44.859145] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105309 ] 00:23:10.903 [2024-12-05 12:06:44.936412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.903 [2024-12-05 12:06:44.973596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:10.903 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:10.903 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:10.903 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.paq7Z4SMlu 00:23:11.162 [2024-12-05 12:06:45.248573] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.paq7Z4SMlu': 0100666 00:23:11.162 [2024-12-05 12:06:45.248608] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:11.162 request: 00:23:11.162 { 00:23:11.162 "name": "key0", 00:23:11.162 "path": "/tmp/tmp.paq7Z4SMlu", 00:23:11.162 "method": "keyring_file_add_key", 00:23:11.162 "req_id": 1 00:23:11.162 } 00:23:11.162 Got JSON-RPC error response 00:23:11.162 response: 00:23:11.162 { 00:23:11.162 "code": -1, 00:23:11.162 "message": "Operation not permitted" 00:23:11.162 } 00:23:11.162 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:11.420 [2024-12-05 12:06:45.457190] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:11.420 [2024-12-05 12:06:45.457218] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:11.420 request: 00:23:11.420 { 00:23:11.420 "name": "TLSTEST", 00:23:11.420 "trtype": "tcp", 00:23:11.420 "traddr": "10.0.0.2", 00:23:11.420 "adrfam": "ipv4", 00:23:11.420 "trsvcid": "4420", 00:23:11.420 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.420 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:11.420 "prchk_reftag": false, 00:23:11.420 "prchk_guard": false, 00:23:11.420 "hdgst": false, 00:23:11.420 "ddgst": false, 00:23:11.420 "psk": "key0", 00:23:11.420 "allow_unrecognized_csi": false, 00:23:11.420 "method": "bdev_nvme_attach_controller", 00:23:11.420 "req_id": 1 00:23:11.420 } 00:23:11.420 Got JSON-RPC error response 00:23:11.420 response: 00:23:11.420 { 00:23:11.420 "code": -126, 00:23:11.420 "message": "Required key not available" 00:23:11.420 } 00:23:11.420 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 105309 00:23:11.420 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 105309 ']' 00:23:11.420 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 105309 00:23:11.420 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:11.420 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:11.420 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105309 00:23:11.420 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:11.420 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:11.420 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 105309' 00:23:11.420 killing process with pid 105309 00:23:11.420 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 105309 00:23:11.420 Received shutdown signal, test time was about 10.000000 seconds 00:23:11.420 00:23:11.420 Latency(us) 00:23:11.420 [2024-12-05T11:06:45.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:11.420 [2024-12-05T11:06:45.616Z] =================================================================================================================== 00:23:11.420 [2024-12-05T11:06:45.616Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:11.420 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 105309 00:23:11.679 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:11.679 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:11.679 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:11.679 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:11.679 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:11.679 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # killprocess 103219 00:23:11.679 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 103219 ']' 00:23:11.679 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 103219 00:23:11.679 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:11.679 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:11.679 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103219 00:23:11.679 12:06:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:11.679 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:11.679 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103219' 00:23:11.679 killing process with pid 103219 00:23:11.679 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 103219 00:23:11.679 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 103219 00:23:11.938 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # nvmfappstart -m 0x2 00:23:11.938 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:23:11.938 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:11.938 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.938 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=105555 00:23:11.938 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:11.938 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 105555 00:23:11.938 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 105555 ']' 00:23:11.938 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.938 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:11.938 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:23:11.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:11.938 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:11.938 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.938 [2024-12-05 12:06:45.967147] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:23:11.938 [2024-12-05 12:06:45.967192] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:11.938 [2024-12-05 12:06:46.027028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.938 [2024-12-05 12:06:46.067587] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:11.938 [2024-12-05 12:06:46.067623] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:11.938 [2024-12-05 12:06:46.067631] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:11.938 [2024-12-05 12:06:46.067637] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:11.938 [2024-12-05 12:06:46.067642] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:11.938 [2024-12-05 12:06:46.068198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.197 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:12.197 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:12.197 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:23:12.197 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:12.197 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:12.197 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:12.197 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@173 -- # NOT setup_nvmf_tgt /tmp/tmp.paq7Z4SMlu 00:23:12.197 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:12.197 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.paq7Z4SMlu 00:23:12.197 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:23:12.197 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:12.197 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:23:12.197 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:12.197 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.paq7Z4SMlu 00:23:12.197 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.paq7Z4SMlu 00:23:12.197 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:12.197 [2024-12-05 12:06:46.371821] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:12.455 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:12.455 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:12.714 [2024-12-05 12:06:46.772872] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:12.714 [2024-12-05 12:06:46.773095] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:12.714 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:12.974 malloc0 00:23:12.974 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:13.234 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.paq7Z4SMlu 00:23:13.234 [2024-12-05 12:06:47.362218] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.paq7Z4SMlu': 0100666 00:23:13.234 [2024-12-05 12:06:47.362246] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:13.234 request: 00:23:13.234 { 00:23:13.234 "name": "key0", 00:23:13.234 "path": "/tmp/tmp.paq7Z4SMlu", 00:23:13.234 "method": "keyring_file_add_key", 00:23:13.234 "req_id": 1 
00:23:13.234 } 00:23:13.234 Got JSON-RPC error response 00:23:13.234 response: 00:23:13.234 { 00:23:13.234 "code": -1, 00:23:13.234 "message": "Operation not permitted" 00:23:13.234 } 00:23:13.234 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:13.493 [2024-12-05 12:06:47.550721] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:13.493 [2024-12-05 12:06:47.550751] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:13.493 request: 00:23:13.493 { 00:23:13.493 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.493 "host": "nqn.2016-06.io.spdk:host1", 00:23:13.493 "psk": "key0", 00:23:13.493 "method": "nvmf_subsystem_add_host", 00:23:13.493 "req_id": 1 00:23:13.493 } 00:23:13.493 Got JSON-RPC error response 00:23:13.493 response: 00:23:13.493 { 00:23:13.493 "code": -32603, 00:23:13.493 "message": "Internal error" 00:23:13.493 } 00:23:13.493 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:13.493 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:13.493 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:13.493 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:13.493 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # killprocess 105555 00:23:13.493 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 105555 ']' 00:23:13.493 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 105555 00:23:13.493 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:13.493 12:06:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:13.493 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105555 00:23:13.493 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:13.493 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:13.493 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105555' 00:23:13.493 killing process with pid 105555 00:23:13.493 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 105555 00:23:13.493 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 105555 00:23:13.751 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # chmod 0600 /tmp/tmp.paq7Z4SMlu 00:23:13.751 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # nvmfappstart -m 0x2 00:23:13.751 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:23:13.751 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:13.751 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.752 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=105817 00:23:13.752 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 105817 00:23:13.752 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:13.752 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 105817 ']' 00:23:13.752 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.752 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:13.752 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.752 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:13.752 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.752 [2024-12-05 12:06:47.850669] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:23:13.752 [2024-12-05 12:06:47.850711] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:13.752 [2024-12-05 12:06:47.929871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.011 [2024-12-05 12:06:47.970911] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:14.011 [2024-12-05 12:06:47.970946] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:14.011 [2024-12-05 12:06:47.970953] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:14.011 [2024-12-05 12:06:47.970960] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:14.011 [2024-12-05 12:06:47.970965] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:14.011 [2024-12-05 12:06:47.971506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.011 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:14.011 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:14.011 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:23:14.011 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:14.011 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.011 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:14.011 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # setup_nvmf_tgt /tmp/tmp.paq7Z4SMlu 00:23:14.011 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.paq7Z4SMlu 00:23:14.011 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:14.269 [2024-12-05 12:06:48.282863] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:14.269 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:14.528 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:14.528 [2024-12-05 12:06:48.647799] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:14.528 [2024-12-05 12:06:48.648008] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:14.528 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:14.786 malloc0 00:23:14.786 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:15.043 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.paq7Z4SMlu 00:23:15.043 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:15.301 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # bdevperf_pid=106133 00:23:15.301 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@183 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:15.301 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:15.301 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # waitforlisten 106133 /var/tmp/bdevperf.sock 00:23:15.301 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 106133 ']' 00:23:15.301 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:15.301 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:15.301 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:23:15.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:15.301 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:15.301 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.301 [2024-12-05 12:06:49.456624] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:23:15.301 [2024-12-05 12:06:49.456674] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106133 ] 00:23:15.558 [2024-12-05 12:06:49.533501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.558 [2024-12-05 12:06:49.574225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:16.124 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:16.124 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:16.124 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.paq7Z4SMlu 00:23:16.381 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:16.638 [2024-12-05 12:06:50.634797] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:16.638 TLSTESTn1 00:23:16.638 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:16.896 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # tgtconf='{ 00:23:16.896 "subsystems": [ 00:23:16.896 { 00:23:16.896 "subsystem": "keyring", 00:23:16.896 "config": [ 00:23:16.896 { 00:23:16.896 "method": "keyring_file_add_key", 00:23:16.896 "params": { 00:23:16.896 "name": "key0", 00:23:16.896 "path": "/tmp/tmp.paq7Z4SMlu" 00:23:16.896 } 00:23:16.896 } 00:23:16.896 ] 00:23:16.896 }, 00:23:16.896 { 00:23:16.896 "subsystem": "iobuf", 00:23:16.896 "config": [ 00:23:16.896 { 00:23:16.896 "method": "iobuf_set_options", 00:23:16.896 "params": { 00:23:16.896 "small_pool_count": 8192, 00:23:16.896 "large_pool_count": 1024, 00:23:16.896 "small_bufsize": 8192, 00:23:16.896 "large_bufsize": 135168, 00:23:16.896 "enable_numa": false 00:23:16.896 } 00:23:16.896 } 00:23:16.896 ] 00:23:16.896 }, 00:23:16.896 { 00:23:16.896 "subsystem": "sock", 00:23:16.896 "config": [ 00:23:16.896 { 00:23:16.896 "method": "sock_set_default_impl", 00:23:16.896 "params": { 00:23:16.896 "impl_name": "posix" 00:23:16.896 } 00:23:16.896 }, 00:23:16.896 { 00:23:16.896 "method": "sock_impl_set_options", 00:23:16.896 "params": { 00:23:16.896 "impl_name": "ssl", 00:23:16.896 "recv_buf_size": 4096, 00:23:16.896 "send_buf_size": 4096, 00:23:16.896 "enable_recv_pipe": true, 00:23:16.896 "enable_quickack": false, 00:23:16.896 "enable_placement_id": 0, 00:23:16.896 "enable_zerocopy_send_server": true, 00:23:16.896 "enable_zerocopy_send_client": false, 00:23:16.896 "zerocopy_threshold": 0, 00:23:16.896 "tls_version": 0, 00:23:16.896 "enable_ktls": false 00:23:16.896 } 00:23:16.896 }, 00:23:16.896 { 00:23:16.896 "method": "sock_impl_set_options", 00:23:16.896 "params": { 00:23:16.897 "impl_name": "posix", 00:23:16.897 "recv_buf_size": 2097152, 00:23:16.897 "send_buf_size": 2097152, 00:23:16.897 "enable_recv_pipe": true, 00:23:16.897 "enable_quickack": false, 00:23:16.897 "enable_placement_id": 0, 
00:23:16.897 "enable_zerocopy_send_server": true, 00:23:16.897 "enable_zerocopy_send_client": false, 00:23:16.897 "zerocopy_threshold": 0, 00:23:16.897 "tls_version": 0, 00:23:16.897 "enable_ktls": false 00:23:16.897 } 00:23:16.897 } 00:23:16.897 ] 00:23:16.897 }, 00:23:16.897 { 00:23:16.897 "subsystem": "vmd", 00:23:16.897 "config": [] 00:23:16.897 }, 00:23:16.897 { 00:23:16.897 "subsystem": "accel", 00:23:16.897 "config": [ 00:23:16.897 { 00:23:16.897 "method": "accel_set_options", 00:23:16.897 "params": { 00:23:16.897 "small_cache_size": 128, 00:23:16.897 "large_cache_size": 16, 00:23:16.897 "task_count": 2048, 00:23:16.897 "sequence_count": 2048, 00:23:16.897 "buf_count": 2048 00:23:16.897 } 00:23:16.897 } 00:23:16.897 ] 00:23:16.897 }, 00:23:16.897 { 00:23:16.897 "subsystem": "bdev", 00:23:16.897 "config": [ 00:23:16.897 { 00:23:16.897 "method": "bdev_set_options", 00:23:16.897 "params": { 00:23:16.897 "bdev_io_pool_size": 65535, 00:23:16.897 "bdev_io_cache_size": 256, 00:23:16.897 "bdev_auto_examine": true, 00:23:16.897 "iobuf_small_cache_size": 128, 00:23:16.897 "iobuf_large_cache_size": 16 00:23:16.897 } 00:23:16.897 }, 00:23:16.897 { 00:23:16.897 "method": "bdev_raid_set_options", 00:23:16.897 "params": { 00:23:16.897 "process_window_size_kb": 1024, 00:23:16.897 "process_max_bandwidth_mb_sec": 0 00:23:16.897 } 00:23:16.897 }, 00:23:16.897 { 00:23:16.897 "method": "bdev_iscsi_set_options", 00:23:16.897 "params": { 00:23:16.897 "timeout_sec": 30 00:23:16.897 } 00:23:16.897 }, 00:23:16.897 { 00:23:16.897 "method": "bdev_nvme_set_options", 00:23:16.897 "params": { 00:23:16.897 "action_on_timeout": "none", 00:23:16.897 "timeout_us": 0, 00:23:16.897 "timeout_admin_us": 0, 00:23:16.897 "keep_alive_timeout_ms": 10000, 00:23:16.897 "arbitration_burst": 0, 00:23:16.897 "low_priority_weight": 0, 00:23:16.897 "medium_priority_weight": 0, 00:23:16.897 "high_priority_weight": 0, 00:23:16.897 "nvme_adminq_poll_period_us": 10000, 00:23:16.897 "nvme_ioq_poll_period_us": 0, 
00:23:16.897 "io_queue_requests": 0, 00:23:16.897 "delay_cmd_submit": true, 00:23:16.897 "transport_retry_count": 4, 00:23:16.897 "bdev_retry_count": 3, 00:23:16.897 "transport_ack_timeout": 0, 00:23:16.897 "ctrlr_loss_timeout_sec": 0, 00:23:16.897 "reconnect_delay_sec": 0, 00:23:16.897 "fast_io_fail_timeout_sec": 0, 00:23:16.897 "disable_auto_failback": false, 00:23:16.897 "generate_uuids": false, 00:23:16.897 "transport_tos": 0, 00:23:16.897 "nvme_error_stat": false, 00:23:16.897 "rdma_srq_size": 0, 00:23:16.897 "io_path_stat": false, 00:23:16.897 "allow_accel_sequence": false, 00:23:16.897 "rdma_max_cq_size": 0, 00:23:16.897 "rdma_cm_event_timeout_ms": 0, 00:23:16.897 "dhchap_digests": [ 00:23:16.897 "sha256", 00:23:16.897 "sha384", 00:23:16.897 "sha512" 00:23:16.897 ], 00:23:16.897 "dhchap_dhgroups": [ 00:23:16.897 "null", 00:23:16.897 "ffdhe2048", 00:23:16.897 "ffdhe3072", 00:23:16.897 "ffdhe4096", 00:23:16.897 "ffdhe6144", 00:23:16.897 "ffdhe8192" 00:23:16.897 ] 00:23:16.897 } 00:23:16.897 }, 00:23:16.897 { 00:23:16.897 "method": "bdev_nvme_set_hotplug", 00:23:16.897 "params": { 00:23:16.897 "period_us": 100000, 00:23:16.897 "enable": false 00:23:16.897 } 00:23:16.897 }, 00:23:16.897 { 00:23:16.897 "method": "bdev_malloc_create", 00:23:16.897 "params": { 00:23:16.897 "name": "malloc0", 00:23:16.897 "num_blocks": 8192, 00:23:16.897 "block_size": 4096, 00:23:16.897 "physical_block_size": 4096, 00:23:16.897 "uuid": "c4fed690-ad74-4123-bd71-0fa87476b795", 00:23:16.897 "optimal_io_boundary": 0, 00:23:16.897 "md_size": 0, 00:23:16.897 "dif_type": 0, 00:23:16.897 "dif_is_head_of_md": false, 00:23:16.897 "dif_pi_format": 0 00:23:16.897 } 00:23:16.897 }, 00:23:16.897 { 00:23:16.897 "method": "bdev_wait_for_examine" 00:23:16.897 } 00:23:16.897 ] 00:23:16.897 }, 00:23:16.897 { 00:23:16.897 "subsystem": "nbd", 00:23:16.897 "config": [] 00:23:16.897 }, 00:23:16.897 { 00:23:16.897 "subsystem": "scheduler", 00:23:16.897 "config": [ 00:23:16.897 { 00:23:16.897 "method": 
"framework_set_scheduler", 00:23:16.897 "params": { 00:23:16.897 "name": "static" 00:23:16.897 } 00:23:16.897 } 00:23:16.897 ] 00:23:16.897 }, 00:23:16.897 { 00:23:16.897 "subsystem": "nvmf", 00:23:16.897 "config": [ 00:23:16.897 { 00:23:16.897 "method": "nvmf_set_config", 00:23:16.897 "params": { 00:23:16.897 "discovery_filter": "match_any", 00:23:16.897 "admin_cmd_passthru": { 00:23:16.897 "identify_ctrlr": false 00:23:16.897 }, 00:23:16.897 "dhchap_digests": [ 00:23:16.897 "sha256", 00:23:16.897 "sha384", 00:23:16.897 "sha512" 00:23:16.897 ], 00:23:16.897 "dhchap_dhgroups": [ 00:23:16.897 "null", 00:23:16.897 "ffdhe2048", 00:23:16.897 "ffdhe3072", 00:23:16.897 "ffdhe4096", 00:23:16.897 "ffdhe6144", 00:23:16.897 "ffdhe8192" 00:23:16.897 ] 00:23:16.897 } 00:23:16.897 }, 00:23:16.897 { 00:23:16.897 "method": "nvmf_set_max_subsystems", 00:23:16.897 "params": { 00:23:16.897 "max_subsystems": 1024 00:23:16.897 } 00:23:16.897 }, 00:23:16.897 { 00:23:16.897 "method": "nvmf_set_crdt", 00:23:16.897 "params": { 00:23:16.897 "crdt1": 0, 00:23:16.897 "crdt2": 0, 00:23:16.897 "crdt3": 0 00:23:16.897 } 00:23:16.897 }, 00:23:16.897 { 00:23:16.897 "method": "nvmf_create_transport", 00:23:16.897 "params": { 00:23:16.897 "trtype": "TCP", 00:23:16.897 "max_queue_depth": 128, 00:23:16.897 "max_io_qpairs_per_ctrlr": 127, 00:23:16.897 "in_capsule_data_size": 4096, 00:23:16.897 "max_io_size": 131072, 00:23:16.897 "io_unit_size": 131072, 00:23:16.897 "max_aq_depth": 128, 00:23:16.897 "num_shared_buffers": 511, 00:23:16.897 "buf_cache_size": 4294967295, 00:23:16.897 "dif_insert_or_strip": false, 00:23:16.897 "zcopy": false, 00:23:16.897 "c2h_success": false, 00:23:16.897 "sock_priority": 0, 00:23:16.897 "abort_timeout_sec": 1, 00:23:16.897 "ack_timeout": 0, 00:23:16.897 "data_wr_pool_size": 0 00:23:16.897 } 00:23:16.897 }, 00:23:16.897 { 00:23:16.897 "method": "nvmf_create_subsystem", 00:23:16.897 "params": { 00:23:16.897 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.897 
"allow_any_host": false, 00:23:16.897 "serial_number": "SPDK00000000000001", 00:23:16.897 "model_number": "SPDK bdev Controller", 00:23:16.897 "max_namespaces": 10, 00:23:16.897 "min_cntlid": 1, 00:23:16.897 "max_cntlid": 65519, 00:23:16.897 "ana_reporting": false 00:23:16.897 } 00:23:16.897 }, 00:23:16.897 { 00:23:16.897 "method": "nvmf_subsystem_add_host", 00:23:16.897 "params": { 00:23:16.897 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.897 "host": "nqn.2016-06.io.spdk:host1", 00:23:16.897 "psk": "key0" 00:23:16.897 } 00:23:16.897 }, 00:23:16.897 { 00:23:16.897 "method": "nvmf_subsystem_add_ns", 00:23:16.897 "params": { 00:23:16.897 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.897 "namespace": { 00:23:16.897 "nsid": 1, 00:23:16.897 "bdev_name": "malloc0", 00:23:16.897 "nguid": "C4FED690AD744123BD710FA87476B795", 00:23:16.898 "uuid": "c4fed690-ad74-4123-bd71-0fa87476b795", 00:23:16.898 "no_auto_visible": false 00:23:16.898 } 00:23:16.898 } 00:23:16.898 }, 00:23:16.898 { 00:23:16.898 "method": "nvmf_subsystem_add_listener", 00:23:16.898 "params": { 00:23:16.898 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.898 "listen_address": { 00:23:16.898 "trtype": "TCP", 00:23:16.898 "adrfam": "IPv4", 00:23:16.898 "traddr": "10.0.0.2", 00:23:16.898 "trsvcid": "4420" 00:23:16.898 }, 00:23:16.898 "secure_channel": true 00:23:16.898 } 00:23:16.898 } 00:23:16.898 ] 00:23:16.898 } 00:23:16.898 ] 00:23:16.898 }' 00:23:16.898 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:17.156 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # bdevperfconf='{ 00:23:17.156 "subsystems": [ 00:23:17.156 { 00:23:17.156 "subsystem": "keyring", 00:23:17.156 "config": [ 00:23:17.156 { 00:23:17.156 "method": "keyring_file_add_key", 00:23:17.156 "params": { 00:23:17.156 "name": "key0", 00:23:17.156 "path": "/tmp/tmp.paq7Z4SMlu" 00:23:17.156 } 
00:23:17.156 } 00:23:17.156 ] 00:23:17.156 }, 00:23:17.156 { 00:23:17.156 "subsystem": "iobuf", 00:23:17.156 "config": [ 00:23:17.156 { 00:23:17.156 "method": "iobuf_set_options", 00:23:17.156 "params": { 00:23:17.156 "small_pool_count": 8192, 00:23:17.156 "large_pool_count": 1024, 00:23:17.156 "small_bufsize": 8192, 00:23:17.156 "large_bufsize": 135168, 00:23:17.156 "enable_numa": false 00:23:17.156 } 00:23:17.156 } 00:23:17.156 ] 00:23:17.156 }, 00:23:17.156 { 00:23:17.156 "subsystem": "sock", 00:23:17.156 "config": [ 00:23:17.156 { 00:23:17.156 "method": "sock_set_default_impl", 00:23:17.156 "params": { 00:23:17.156 "impl_name": "posix" 00:23:17.156 } 00:23:17.156 }, 00:23:17.156 { 00:23:17.156 "method": "sock_impl_set_options", 00:23:17.156 "params": { 00:23:17.156 "impl_name": "ssl", 00:23:17.156 "recv_buf_size": 4096, 00:23:17.156 "send_buf_size": 4096, 00:23:17.156 "enable_recv_pipe": true, 00:23:17.156 "enable_quickack": false, 00:23:17.156 "enable_placement_id": 0, 00:23:17.156 "enable_zerocopy_send_server": true, 00:23:17.156 "enable_zerocopy_send_client": false, 00:23:17.156 "zerocopy_threshold": 0, 00:23:17.156 "tls_version": 0, 00:23:17.156 "enable_ktls": false 00:23:17.156 } 00:23:17.156 }, 00:23:17.156 { 00:23:17.156 "method": "sock_impl_set_options", 00:23:17.156 "params": { 00:23:17.156 "impl_name": "posix", 00:23:17.156 "recv_buf_size": 2097152, 00:23:17.156 "send_buf_size": 2097152, 00:23:17.156 "enable_recv_pipe": true, 00:23:17.156 "enable_quickack": false, 00:23:17.156 "enable_placement_id": 0, 00:23:17.156 "enable_zerocopy_send_server": true, 00:23:17.156 "enable_zerocopy_send_client": false, 00:23:17.156 "zerocopy_threshold": 0, 00:23:17.156 "tls_version": 0, 00:23:17.156 "enable_ktls": false 00:23:17.156 } 00:23:17.156 } 00:23:17.156 ] 00:23:17.156 }, 00:23:17.156 { 00:23:17.156 "subsystem": "vmd", 00:23:17.156 "config": [] 00:23:17.156 }, 00:23:17.156 { 00:23:17.156 "subsystem": "accel", 00:23:17.156 "config": [ 00:23:17.156 { 00:23:17.156 
"method": "accel_set_options", 00:23:17.156 "params": { 00:23:17.156 "small_cache_size": 128, 00:23:17.156 "large_cache_size": 16, 00:23:17.156 "task_count": 2048, 00:23:17.156 "sequence_count": 2048, 00:23:17.156 "buf_count": 2048 00:23:17.156 } 00:23:17.156 } 00:23:17.156 ] 00:23:17.156 }, 00:23:17.156 { 00:23:17.156 "subsystem": "bdev", 00:23:17.156 "config": [ 00:23:17.156 { 00:23:17.156 "method": "bdev_set_options", 00:23:17.156 "params": { 00:23:17.156 "bdev_io_pool_size": 65535, 00:23:17.156 "bdev_io_cache_size": 256, 00:23:17.156 "bdev_auto_examine": true, 00:23:17.156 "iobuf_small_cache_size": 128, 00:23:17.156 "iobuf_large_cache_size": 16 00:23:17.156 } 00:23:17.156 }, 00:23:17.156 { 00:23:17.156 "method": "bdev_raid_set_options", 00:23:17.156 "params": { 00:23:17.156 "process_window_size_kb": 1024, 00:23:17.156 "process_max_bandwidth_mb_sec": 0 00:23:17.156 } 00:23:17.156 }, 00:23:17.156 { 00:23:17.156 "method": "bdev_iscsi_set_options", 00:23:17.156 "params": { 00:23:17.157 "timeout_sec": 30 00:23:17.157 } 00:23:17.157 }, 00:23:17.157 { 00:23:17.157 "method": "bdev_nvme_set_options", 00:23:17.157 "params": { 00:23:17.157 "action_on_timeout": "none", 00:23:17.157 "timeout_us": 0, 00:23:17.157 "timeout_admin_us": 0, 00:23:17.157 "keep_alive_timeout_ms": 10000, 00:23:17.157 "arbitration_burst": 0, 00:23:17.157 "low_priority_weight": 0, 00:23:17.157 "medium_priority_weight": 0, 00:23:17.157 "high_priority_weight": 0, 00:23:17.157 "nvme_adminq_poll_period_us": 10000, 00:23:17.157 "nvme_ioq_poll_period_us": 0, 00:23:17.157 "io_queue_requests": 512, 00:23:17.157 "delay_cmd_submit": true, 00:23:17.157 "transport_retry_count": 4, 00:23:17.157 "bdev_retry_count": 3, 00:23:17.157 "transport_ack_timeout": 0, 00:23:17.157 "ctrlr_loss_timeout_sec": 0, 00:23:17.157 "reconnect_delay_sec": 0, 00:23:17.157 "fast_io_fail_timeout_sec": 0, 00:23:17.157 "disable_auto_failback": false, 00:23:17.157 "generate_uuids": false, 00:23:17.157 "transport_tos": 0, 00:23:17.157 
"nvme_error_stat": false, 00:23:17.157 "rdma_srq_size": 0, 00:23:17.157 "io_path_stat": false, 00:23:17.157 "allow_accel_sequence": false, 00:23:17.157 "rdma_max_cq_size": 0, 00:23:17.157 "rdma_cm_event_timeout_ms": 0, 00:23:17.157 "dhchap_digests": [ 00:23:17.157 "sha256", 00:23:17.157 "sha384", 00:23:17.157 "sha512" 00:23:17.157 ], 00:23:17.157 "dhchap_dhgroups": [ 00:23:17.157 "null", 00:23:17.157 "ffdhe2048", 00:23:17.157 "ffdhe3072", 00:23:17.157 "ffdhe4096", 00:23:17.157 "ffdhe6144", 00:23:17.157 "ffdhe8192" 00:23:17.157 ] 00:23:17.157 } 00:23:17.157 }, 00:23:17.157 { 00:23:17.157 "method": "bdev_nvme_attach_controller", 00:23:17.157 "params": { 00:23:17.157 "name": "TLSTEST", 00:23:17.157 "trtype": "TCP", 00:23:17.157 "adrfam": "IPv4", 00:23:17.157 "traddr": "10.0.0.2", 00:23:17.157 "trsvcid": "4420", 00:23:17.157 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:17.157 "prchk_reftag": false, 00:23:17.157 "prchk_guard": false, 00:23:17.157 "ctrlr_loss_timeout_sec": 0, 00:23:17.157 "reconnect_delay_sec": 0, 00:23:17.157 "fast_io_fail_timeout_sec": 0, 00:23:17.157 "psk": "key0", 00:23:17.157 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:17.157 "hdgst": false, 00:23:17.157 "ddgst": false, 00:23:17.157 "multipath": "multipath" 00:23:17.157 } 00:23:17.157 }, 00:23:17.157 { 00:23:17.157 "method": "bdev_nvme_set_hotplug", 00:23:17.157 "params": { 00:23:17.157 "period_us": 100000, 00:23:17.157 "enable": false 00:23:17.157 } 00:23:17.157 }, 00:23:17.157 { 00:23:17.157 "method": "bdev_wait_for_examine" 00:23:17.157 } 00:23:17.157 ] 00:23:17.157 }, 00:23:17.157 { 00:23:17.157 "subsystem": "nbd", 00:23:17.157 "config": [] 00:23:17.157 } 00:23:17.157 ] 00:23:17.157 }' 00:23:17.157 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # killprocess 106133 00:23:17.157 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 106133 ']' 00:23:17.157 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
kill -0 106133 00:23:17.157 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:17.157 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:17.157 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106133 00:23:17.157 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:17.157 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:17.157 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106133' 00:23:17.157 killing process with pid 106133 00:23:17.157 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 106133 00:23:17.157 Received shutdown signal, test time was about 10.000000 seconds 00:23:17.157 00:23:17.157 Latency(us) 00:23:17.157 [2024-12-05T11:06:51.353Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.157 [2024-12-05T11:06:51.353Z] =================================================================================================================== 00:23:17.157 [2024-12-05T11:06:51.353Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:17.157 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 106133 00:23:17.414 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # killprocess 105817 00:23:17.414 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 105817 ']' 00:23:17.414 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 105817 00:23:17.414 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:17.414 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:17.414 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105817 00:23:17.414 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:17.414 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:17.414 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105817' 00:23:17.414 killing process with pid 105817 00:23:17.414 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 105817 00:23:17.414 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 105817 00:23:17.673 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:17.673 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:23:17.673 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:17.673 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:17.673 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # echo '{ 00:23:17.673 "subsystems": [ 00:23:17.673 { 00:23:17.673 "subsystem": "keyring", 00:23:17.673 "config": [ 00:23:17.673 { 00:23:17.673 "method": "keyring_file_add_key", 00:23:17.673 "params": { 00:23:17.673 "name": "key0", 00:23:17.673 "path": "/tmp/tmp.paq7Z4SMlu" 00:23:17.673 } 00:23:17.673 } 00:23:17.673 ] 00:23:17.673 }, 00:23:17.673 { 00:23:17.673 "subsystem": "iobuf", 00:23:17.673 "config": [ 00:23:17.673 { 00:23:17.673 "method": "iobuf_set_options", 00:23:17.673 "params": { 00:23:17.673 "small_pool_count": 8192, 00:23:17.673 "large_pool_count": 1024, 00:23:17.673 "small_bufsize": 8192, 00:23:17.673 "large_bufsize": 135168, 
00:23:17.673 "enable_numa": false 00:23:17.673 } 00:23:17.673 } 00:23:17.673 ] 00:23:17.673 }, 00:23:17.673 { 00:23:17.673 "subsystem": "sock", 00:23:17.673 "config": [ 00:23:17.673 { 00:23:17.673 "method": "sock_set_default_impl", 00:23:17.673 "params": { 00:23:17.673 "impl_name": "posix" 00:23:17.673 } 00:23:17.673 }, 00:23:17.673 { 00:23:17.673 "method": "sock_impl_set_options", 00:23:17.673 "params": { 00:23:17.673 "impl_name": "ssl", 00:23:17.673 "recv_buf_size": 4096, 00:23:17.673 "send_buf_size": 4096, 00:23:17.673 "enable_recv_pipe": true, 00:23:17.673 "enable_quickack": false, 00:23:17.673 "enable_placement_id": 0, 00:23:17.674 "enable_zerocopy_send_server": true, 00:23:17.674 "enable_zerocopy_send_client": false, 00:23:17.674 "zerocopy_threshold": 0, 00:23:17.674 "tls_version": 0, 00:23:17.674 "enable_ktls": false 00:23:17.674 } 00:23:17.674 }, 00:23:17.674 { 00:23:17.674 "method": "sock_impl_set_options", 00:23:17.674 "params": { 00:23:17.674 "impl_name": "posix", 00:23:17.674 "recv_buf_size": 2097152, 00:23:17.674 "send_buf_size": 2097152, 00:23:17.674 "enable_recv_pipe": true, 00:23:17.674 "enable_quickack": false, 00:23:17.674 "enable_placement_id": 0, 00:23:17.674 "enable_zerocopy_send_server": true, 00:23:17.674 "enable_zerocopy_send_client": false, 00:23:17.674 "zerocopy_threshold": 0, 00:23:17.674 "tls_version": 0, 00:23:17.674 "enable_ktls": false 00:23:17.674 } 00:23:17.674 } 00:23:17.674 ] 00:23:17.674 }, 00:23:17.674 { 00:23:17.674 "subsystem": "vmd", 00:23:17.674 "config": [] 00:23:17.674 }, 00:23:17.674 { 00:23:17.674 "subsystem": "accel", 00:23:17.674 "config": [ 00:23:17.674 { 00:23:17.674 "method": "accel_set_options", 00:23:17.674 "params": { 00:23:17.674 "small_cache_size": 128, 00:23:17.674 "large_cache_size": 16, 00:23:17.674 "task_count": 2048, 00:23:17.674 "sequence_count": 2048, 00:23:17.674 "buf_count": 2048 00:23:17.674 } 00:23:17.674 } 00:23:17.674 ] 00:23:17.674 }, 00:23:17.674 { 00:23:17.674 "subsystem": "bdev", 00:23:17.674 
"config": [ 00:23:17.674 { 00:23:17.674 "method": "bdev_set_options", 00:23:17.674 "params": { 00:23:17.674 "bdev_io_pool_size": 65535, 00:23:17.674 "bdev_io_cache_size": 256, 00:23:17.674 "bdev_auto_examine": true, 00:23:17.674 "iobuf_small_cache_size": 128, 00:23:17.674 "iobuf_large_cache_size": 16 00:23:17.674 } 00:23:17.674 }, 00:23:17.674 { 00:23:17.674 "method": "bdev_raid_set_options", 00:23:17.674 "params": { 00:23:17.674 "process_window_size_kb": 1024, 00:23:17.674 "process_max_bandwidth_mb_sec": 0 00:23:17.674 } 00:23:17.674 }, 00:23:17.674 { 00:23:17.674 "method": "bdev_iscsi_set_options", 00:23:17.674 "params": { 00:23:17.674 "timeout_sec": 30 00:23:17.674 } 00:23:17.674 }, 00:23:17.674 { 00:23:17.674 "method": "bdev_nvme_set_options", 00:23:17.674 "params": { 00:23:17.674 "action_on_timeout": "none", 00:23:17.674 "timeout_us": 0, 00:23:17.674 "timeout_admin_us": 0, 00:23:17.674 "keep_alive_timeout_ms": 10000, 00:23:17.674 "arbitration_burst": 0, 00:23:17.674 "low_priority_weight": 0, 00:23:17.674 "medium_priority_weight": 0, 00:23:17.674 "high_priority_weight": 0, 00:23:17.674 "nvme_adminq_poll_period_us": 10000, 00:23:17.674 "nvme_ioq_poll_period_us": 0, 00:23:17.674 "io_queue_requests": 0, 00:23:17.674 "delay_cmd_submit": true, 00:23:17.674 "transport_retry_count": 4, 00:23:17.674 "bdev_retry_count": 3, 00:23:17.674 "transport_ack_timeout": 0, 00:23:17.674 "ctrlr_loss_timeout_sec": 0, 00:23:17.674 "reconnect_delay_sec": 0, 00:23:17.674 "fast_io_fail_timeout_sec": 0, 00:23:17.674 "disable_auto_failback": false, 00:23:17.674 "generate_uuids": false, 00:23:17.674 "transport_tos": 0, 00:23:17.674 "nvme_error_stat": false, 00:23:17.674 "rdma_srq_size": 0, 00:23:17.674 "io_path_stat": false, 00:23:17.674 "allow_accel_sequence": false, 00:23:17.674 "rdma_max_cq_size": 0, 00:23:17.674 "rdma_cm_event_timeout_ms": 0, 00:23:17.674 "dhchap_digests": [ 00:23:17.674 "sha256", 00:23:17.674 "sha384", 00:23:17.674 "sha512" 00:23:17.674 ], 00:23:17.674 
"dhchap_dhgroups": [ 00:23:17.674 "null", 00:23:17.674 "ffdhe2048", 00:23:17.674 "ffdhe3072", 00:23:17.674 "ffdhe4096", 00:23:17.674 "ffdhe6144", 00:23:17.674 "ffdhe8192" 00:23:17.674 ] 00:23:17.674 } 00:23:17.674 }, 00:23:17.674 { 00:23:17.674 "method": "bdev_nvme_set_hotplug", 00:23:17.674 "params": { 00:23:17.674 "period_us": 100000, 00:23:17.674 "enable": false 00:23:17.674 } 00:23:17.674 }, 00:23:17.674 { 00:23:17.674 "method": "bdev_malloc_create", 00:23:17.674 "params": { 00:23:17.674 "name": "malloc0", 00:23:17.674 "num_blocks": 8192, 00:23:17.674 "block_size": 4096, 00:23:17.674 "physical_block_size": 4096, 00:23:17.674 "uuid": "c4fed690-ad74-4123-bd71-0fa87476b795", 00:23:17.674 "optimal_io_boundary": 0, 00:23:17.674 "md_size": 0, 00:23:17.674 "dif_type": 0, 00:23:17.674 "dif_is_head_of_md": false, 00:23:17.674 "dif_pi_format": 0 00:23:17.674 } 00:23:17.674 }, 00:23:17.674 { 00:23:17.674 "method": "bdev_wait_for_examine" 00:23:17.674 } 00:23:17.674 ] 00:23:17.674 }, 00:23:17.674 { 00:23:17.674 "subsystem": "nbd", 00:23:17.674 "config": [] 00:23:17.674 }, 00:23:17.674 { 00:23:17.674 "subsystem": "scheduler", 00:23:17.674 "config": [ 00:23:17.674 { 00:23:17.674 "method": "framework_set_scheduler", 00:23:17.674 "params": { 00:23:17.674 "name": "static" 00:23:17.674 } 00:23:17.674 } 00:23:17.674 ] 00:23:17.674 }, 00:23:17.674 { 00:23:17.674 "subsystem": "nvmf", 00:23:17.674 "config": [ 00:23:17.674 { 00:23:17.674 "method": "nvmf_set_config", 00:23:17.674 "params": { 00:23:17.674 "discovery_filter": "match_any", 00:23:17.674 "admin_cmd_passthru": { 00:23:17.674 "identify_ctrlr": false 00:23:17.674 }, 00:23:17.674 "dhchap_digests": [ 00:23:17.674 "sha256", 00:23:17.674 "sha384", 00:23:17.674 "sha512" 00:23:17.674 ], 00:23:17.674 "dhchap_dhgroups": [ 00:23:17.674 "null", 00:23:17.674 "ffdhe2048", 00:23:17.674 "ffdhe3072", 00:23:17.674 "ffdhe4096", 00:23:17.674 "ffdhe6144", 00:23:17.674 "ffdhe8192" 00:23:17.674 ] 00:23:17.674 } 00:23:17.674 }, 00:23:17.674 { 
00:23:17.674 "method": "nvmf_set_max_subsystems", 00:23:17.674 "params": { 00:23:17.674 "max_subsystems": 1024 00:23:17.674 } 00:23:17.674 }, 00:23:17.674 { 00:23:17.674 "method": "nvmf_set_crdt", 00:23:17.674 "params": { 00:23:17.674 "crdt1": 0, 00:23:17.674 "crdt2": 0, 00:23:17.674 "crdt3": 0 00:23:17.674 } 00:23:17.674 }, 00:23:17.674 { 00:23:17.674 "method": "nvmf_create_transport", 00:23:17.674 "params": { 00:23:17.674 "trtype": "TCP", 00:23:17.674 "max_queue_depth": 128, 00:23:17.674 "max_io_qpairs_per_ctrlr": 127, 00:23:17.674 "in_capsule_data_size": 4096, 00:23:17.674 "max_io_size": 131072, 00:23:17.674 "io_unit_size": 131072, 00:23:17.674 "max_aq_depth": 128, 00:23:17.674 "num_shared_buffers": 511, 00:23:17.674 "buf_cache_size": 4294967295, 00:23:17.674 "dif_insert_or_strip": false, 00:23:17.674 "zcopy": false, 00:23:17.674 "c2h_success": false, 00:23:17.674 "sock_priority": 0, 00:23:17.674 "abort_timeout_sec": 1, 00:23:17.674 "ack_timeout": 0, 00:23:17.674 "data_wr_pool_size": 0 00:23:17.674 } 00:23:17.674 }, 00:23:17.674 { 00:23:17.674 "method": "nvmf_create_subsystem", 00:23:17.674 "params": { 00:23:17.674 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:17.674 "allow_any_host": false, 00:23:17.674 "serial_number": "SPDK00000000000001", 00:23:17.674 "model_number": "SPDK bdev Controller", 00:23:17.674 "max_namespaces": 10, 00:23:17.674 "min_cntlid": 1, 00:23:17.674 "max_cntlid": 65519, 00:23:17.674 "ana_reporting": false 00:23:17.674 } 00:23:17.674 }, 00:23:17.674 { 00:23:17.674 "method": "nvmf_subsystem_add_host", 00:23:17.674 "params": { 00:23:17.674 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:17.674 "host": "nqn.2016-06.io.spdk:host1", 00:23:17.674 "psk": "key0" 00:23:17.674 } 00:23:17.674 }, 00:23:17.674 { 00:23:17.674 "method": "nvmf_subsystem_add_ns", 00:23:17.674 "params": { 00:23:17.674 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:17.674 "namespace": { 00:23:17.674 "nsid": 1, 00:23:17.674 "bdev_name": "malloc0", 00:23:17.674 "nguid": 
"C4FED690AD744123BD710FA87476B795", 00:23:17.675 "uuid": "c4fed690-ad74-4123-bd71-0fa87476b795", 00:23:17.675 "no_auto_visible": false 00:23:17.675 } 00:23:17.675 } 00:23:17.675 }, 00:23:17.675 { 00:23:17.675 "method": "nvmf_subsystem_add_listener", 00:23:17.675 "params": { 00:23:17.675 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:17.675 "listen_address": { 00:23:17.675 "trtype": "TCP", 00:23:17.675 "adrfam": "IPv4", 00:23:17.675 "traddr": "10.0.0.2", 00:23:17.675 "trsvcid": "4420" 00:23:17.675 }, 00:23:17.675 "secure_channel": true 00:23:17.675 } 00:23:17.675 } 00:23:17.675 ] 00:23:17.675 } 00:23:17.675 ] 00:23:17.675 }' 00:23:17.675 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=106548 00:23:17.675 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:17.675 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 106548 00:23:17.675 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 106548 ']' 00:23:17.675 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.675 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:17.675 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:17.675 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:17.675 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:17.675 [2024-12-05 12:06:51.746768] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:23:17.675 [2024-12-05 12:06:51.746814] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:17.675 [2024-12-05 12:06:51.823216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.675 [2024-12-05 12:06:51.863007] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:17.675 [2024-12-05 12:06:51.863044] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:17.675 [2024-12-05 12:06:51.863051] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:17.675 [2024-12-05 12:06:51.863057] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:17.675 [2024-12-05 12:06:51.863062] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:17.675 [2024-12-05 12:06:51.863657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:17.933 [2024-12-05 12:06:52.076678] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:17.933 [2024-12-05 12:06:52.108712] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:17.933 [2024-12-05 12:06:52.108921] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:18.500 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:18.500 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:18.500 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:23:18.500 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:18.500 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:18.500 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:18.500 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # bdevperf_pid=106794 00:23:18.500 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # waitforlisten 106794 /var/tmp/bdevperf.sock 00:23:18.500 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 106794 ']' 00:23:18.500 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:18.500 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:18.500 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:23:18.500 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:18.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:18.500 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # echo '{ 00:23:18.500 "subsystems": [ 00:23:18.500 { 00:23:18.500 "subsystem": "keyring", 00:23:18.500 "config": [ 00:23:18.500 { 00:23:18.500 "method": "keyring_file_add_key", 00:23:18.500 "params": { 00:23:18.500 "name": "key0", 00:23:18.500 "path": "/tmp/tmp.paq7Z4SMlu" 00:23:18.500 } 00:23:18.500 } 00:23:18.500 ] 00:23:18.500 }, 00:23:18.500 { 00:23:18.500 "subsystem": "iobuf", 00:23:18.500 "config": [ 00:23:18.500 { 00:23:18.500 "method": "iobuf_set_options", 00:23:18.500 "params": { 00:23:18.500 "small_pool_count": 8192, 00:23:18.500 "large_pool_count": 1024, 00:23:18.500 "small_bufsize": 8192, 00:23:18.500 "large_bufsize": 135168, 00:23:18.500 "enable_numa": false 00:23:18.500 } 00:23:18.500 } 00:23:18.500 ] 00:23:18.500 }, 00:23:18.500 { 00:23:18.500 "subsystem": "sock", 00:23:18.500 "config": [ 00:23:18.500 { 00:23:18.500 "method": "sock_set_default_impl", 00:23:18.500 "params": { 00:23:18.500 "impl_name": "posix" 00:23:18.500 } 00:23:18.500 }, 00:23:18.500 { 00:23:18.500 "method": "sock_impl_set_options", 00:23:18.500 "params": { 00:23:18.500 "impl_name": "ssl", 00:23:18.500 "recv_buf_size": 4096, 00:23:18.500 "send_buf_size": 4096, 00:23:18.500 "enable_recv_pipe": true, 00:23:18.500 "enable_quickack": false, 00:23:18.500 "enable_placement_id": 0, 00:23:18.500 "enable_zerocopy_send_server": true, 00:23:18.500 "enable_zerocopy_send_client": false, 00:23:18.500 "zerocopy_threshold": 0, 00:23:18.500 "tls_version": 0, 00:23:18.500 "enable_ktls": false 00:23:18.500 } 00:23:18.500 }, 00:23:18.500 { 00:23:18.500 "method": "sock_impl_set_options", 00:23:18.500 "params": { 
00:23:18.500 "impl_name": "posix", 00:23:18.500 "recv_buf_size": 2097152, 00:23:18.500 "send_buf_size": 2097152, 00:23:18.500 "enable_recv_pipe": true, 00:23:18.500 "enable_quickack": false, 00:23:18.500 "enable_placement_id": 0, 00:23:18.500 "enable_zerocopy_send_server": true, 00:23:18.500 "enable_zerocopy_send_client": false, 00:23:18.500 "zerocopy_threshold": 0, 00:23:18.500 "tls_version": 0, 00:23:18.500 "enable_ktls": false 00:23:18.500 } 00:23:18.500 } 00:23:18.500 ] 00:23:18.500 }, 00:23:18.500 { 00:23:18.500 "subsystem": "vmd", 00:23:18.500 "config": [] 00:23:18.500 }, 00:23:18.500 { 00:23:18.500 "subsystem": "accel", 00:23:18.500 "config": [ 00:23:18.500 { 00:23:18.500 "method": "accel_set_options", 00:23:18.500 "params": { 00:23:18.500 "small_cache_size": 128, 00:23:18.500 "large_cache_size": 16, 00:23:18.500 "task_count": 2048, 00:23:18.500 "sequence_count": 2048, 00:23:18.500 "buf_count": 2048 00:23:18.500 } 00:23:18.500 } 00:23:18.500 ] 00:23:18.500 }, 00:23:18.500 { 00:23:18.500 "subsystem": "bdev", 00:23:18.500 "config": [ 00:23:18.500 { 00:23:18.500 "method": "bdev_set_options", 00:23:18.500 "params": { 00:23:18.500 "bdev_io_pool_size": 65535, 00:23:18.500 "bdev_io_cache_size": 256, 00:23:18.500 "bdev_auto_examine": true, 00:23:18.500 "iobuf_small_cache_size": 128, 00:23:18.500 "iobuf_large_cache_size": 16 00:23:18.500 } 00:23:18.500 }, 00:23:18.500 { 00:23:18.500 "method": "bdev_raid_set_options", 00:23:18.500 "params": { 00:23:18.500 "process_window_size_kb": 1024, 00:23:18.500 "process_max_bandwidth_mb_sec": 0 00:23:18.500 } 00:23:18.500 }, 00:23:18.500 { 00:23:18.500 "method": "bdev_iscsi_set_options", 00:23:18.500 "params": { 00:23:18.500 "timeout_sec": 30 00:23:18.500 } 00:23:18.500 }, 00:23:18.500 { 00:23:18.500 "method": "bdev_nvme_set_options", 00:23:18.500 "params": { 00:23:18.500 "action_on_timeout": "none", 00:23:18.500 "timeout_us": 0, 00:23:18.500 "timeout_admin_us": 0, 00:23:18.500 "keep_alive_timeout_ms": 10000, 00:23:18.500 
"arbitration_burst": 0, 00:23:18.500 "low_priority_weight": 0, 00:23:18.500 "medium_priority_weight": 0, 00:23:18.500 "high_priority_weight": 0, 00:23:18.500 "nvme_adminq_poll_period_us": 10000, 00:23:18.500 "nvme_ioq_poll_period_us": 0, 00:23:18.500 "io_queue_requests": 512, 00:23:18.500 "delay_cmd_submit": true, 00:23:18.500 "transport_retry_count": 4, 00:23:18.500 "bdev_retry_count": 3, 00:23:18.500 "transport_ack_timeout": 0, 00:23:18.500 "ctrlr_loss_timeout_sec": 0, 00:23:18.500 "reconnect_delay_sec": 0, 00:23:18.501 "fast_io_fail_timeout_sec": 0, 00:23:18.501 "disable_auto_failback": false, 00:23:18.501 "generate_uuids": false, 00:23:18.501 "transport_tos": 0, 00:23:18.501 "nvme_error_stat": false, 00:23:18.501 "rdma_srq_size": 0, 00:23:18.501 "io_path_stat": false, 00:23:18.501 "allow_accel_sequence": false, 00:23:18.501 "rdma_max_cq_size": 0, 00:23:18.501 "rdma_cm_event_timeout_ms": 0, 00:23:18.501 "dhchap_digests": [ 00:23:18.501 "sha256", 00:23:18.501 "sha384", 00:23:18.501 "sha512" 00:23:18.501 ], 00:23:18.501 "dhchap_dhgroups": [ 00:23:18.501 "null", 00:23:18.501 "ffdhe2048", 00:23:18.501 "ffdhe3072", 00:23:18.501 "ffdhe4096", 00:23:18.501 "ffdhe6144", 00:23:18.501 "ffdhe8192" 00:23:18.501 ] 00:23:18.501 } 00:23:18.501 }, 00:23:18.501 { 00:23:18.501 "method": "bdev_nvme_attach_controller", 00:23:18.501 "params": { 00:23:18.501 "name": "TLSTEST", 00:23:18.501 "trtype": "TCP", 00:23:18.501 "adrfam": "IPv4", 00:23:18.501 "traddr": "10.0.0.2", 00:23:18.501 "trsvcid": "4420", 00:23:18.501 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:18.501 "prchk_reftag": false, 00:23:18.501 "prchk_guard": false, 00:23:18.501 "ctrlr_loss_timeout_sec": 0, 00:23:18.501 "reconnect_delay_sec": 0, 00:23:18.501 "fast_io_fail_timeout_sec": 0, 00:23:18.501 "psk": "key0", 00:23:18.501 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:18.501 "hdgst": false, 00:23:18.501 "ddgst": false, 00:23:18.501 "multipath": "multipath" 00:23:18.501 } 00:23:18.501 }, 00:23:18.501 { 00:23:18.501 
"method": "bdev_nvme_set_hotplug", 00:23:18.501 "params": { 00:23:18.501 "period_us": 100000, 00:23:18.501 "enable": false 00:23:18.501 } 00:23:18.501 }, 00:23:18.501 { 00:23:18.501 "method": "bdev_wait_for_examine" 00:23:18.501 } 00:23:18.501 ] 00:23:18.501 }, 00:23:18.501 { 00:23:18.501 "subsystem": "nbd", 00:23:18.501 "config": [] 00:23:18.501 } 00:23:18.501 ] 00:23:18.501 }' 00:23:18.501 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:18.501 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:18.501 [2024-12-05 12:06:52.655588] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:23:18.501 [2024-12-05 12:06:52.655640] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106794 ] 00:23:18.760 [2024-12-05 12:06:52.730986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.760 [2024-12-05 12:06:52.772669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:18.760 [2024-12-05 12:06:52.924356] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:19.327 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:19.327 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:19.327 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:19.586 Running I/O for 10 seconds... 
00:23:21.471 5188.00 IOPS, 20.27 MiB/s [2024-12-05T11:06:56.604Z] 5403.00 IOPS, 21.11 MiB/s [2024-12-05T11:06:57.982Z] 5421.67 IOPS, 21.18 MiB/s [2024-12-05T11:06:58.920Z] 5433.00 IOPS, 21.22 MiB/s [2024-12-05T11:06:59.635Z] 5476.60 IOPS, 21.39 MiB/s [2024-12-05T11:07:01.011Z] 5504.00 IOPS, 21.50 MiB/s [2024-12-05T11:07:01.946Z] 5499.29 IOPS, 21.48 MiB/s [2024-12-05T11:07:02.883Z] 5510.38 IOPS, 21.52 MiB/s [2024-12-05T11:07:03.819Z] 5521.33 IOPS, 21.57 MiB/s [2024-12-05T11:07:03.819Z] 5525.90 IOPS, 21.59 MiB/s 00:23:29.623 Latency(us) 00:23:29.623 [2024-12-05T11:07:03.819Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.623 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:29.623 Verification LBA range: start 0x0 length 0x2000 00:23:29.623 TLSTESTn1 : 10.01 5531.63 21.61 0.00 0.00 23106.24 5211.67 32955.25 00:23:29.623 [2024-12-05T11:07:03.819Z] =================================================================================================================== 00:23:29.623 [2024-12-05T11:07:03.819Z] Total : 5531.63 21.61 0.00 0.00 23106.24 5211.67 32955.25 00:23:29.623 { 00:23:29.623 "results": [ 00:23:29.623 { 00:23:29.623 "job": "TLSTESTn1", 00:23:29.623 "core_mask": "0x4", 00:23:29.623 "workload": "verify", 00:23:29.623 "status": "finished", 00:23:29.623 "verify_range": { 00:23:29.623 "start": 0, 00:23:29.623 "length": 8192 00:23:29.623 }, 00:23:29.623 "queue_depth": 128, 00:23:29.623 "io_size": 4096, 00:23:29.623 "runtime": 10.012596, 00:23:29.623 "iops": 5531.632355884528, 00:23:29.623 "mibps": 21.607938890173937, 00:23:29.623 "io_failed": 0, 00:23:29.623 "io_timeout": 0, 00:23:29.623 "avg_latency_us": 23106.240652047192, 00:23:29.623 "min_latency_us": 5211.672380952381, 00:23:29.623 "max_latency_us": 32955.24571428572 00:23:29.623 } 00:23:29.623 ], 00:23:29.623 "core_count": 1 00:23:29.623 } 00:23:29.623 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:23:29.623 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # killprocess 106794 00:23:29.623 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 106794 ']' 00:23:29.623 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 106794 00:23:29.623 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:29.623 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:29.623 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106794 00:23:29.623 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:29.623 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:29.623 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106794' 00:23:29.623 killing process with pid 106794 00:23:29.623 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 106794 00:23:29.623 Received shutdown signal, test time was about 10.000000 seconds 00:23:29.623 00:23:29.623 Latency(us) 00:23:29.623 [2024-12-05T11:07:03.819Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.623 [2024-12-05T11:07:03.819Z] =================================================================================================================== 00:23:29.623 [2024-12-05T11:07:03.819Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:29.623 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 106794 00:23:29.882 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@212 -- # killprocess 106548 00:23:29.882 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 
-- # '[' -z 106548 ']' 00:23:29.882 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 106548 00:23:29.882 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:29.882 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:29.882 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106548 00:23:29.882 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:29.882 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:29.882 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106548' 00:23:29.882 killing process with pid 106548 00:23:29.882 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 106548 00:23:29.882 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 106548 00:23:30.141 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # nvmfappstart 00:23:30.141 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:23:30.141 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:30.141 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.141 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=108641 00:23:30.141 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 108641 00:23:30.141 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:30.141 12:07:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 108641 ']' 00:23:30.141 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:30.141 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:30.141 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:30.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:30.141 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:30.141 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.141 [2024-12-05 12:07:04.137629] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:23:30.141 [2024-12-05 12:07:04.137674] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:30.141 [2024-12-05 12:07:04.215505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.141 [2024-12-05 12:07:04.256349] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:30.141 [2024-12-05 12:07:04.256394] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:30.141 [2024-12-05 12:07:04.256402] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:30.141 [2024-12-05 12:07:04.256408] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:30.141 [2024-12-05 12:07:04.256413] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:30.141 [2024-12-05 12:07:04.256972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.399 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:30.399 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:30.399 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:23:30.399 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:30.399 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.399 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:30.399 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # setup_nvmf_tgt /tmp/tmp.paq7Z4SMlu 00:23:30.399 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.paq7Z4SMlu 00:23:30.399 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:30.399 [2024-12-05 12:07:04.560322] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:30.399 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:30.657 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:30.916 [2024-12-05 12:07:04.945307] tcp.c:1049:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:23:30.916 [2024-12-05 12:07:04.945519] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:30.916 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:31.174 malloc0 00:23:31.174 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:31.432 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.paq7Z4SMlu 00:23:31.432 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:31.709 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:31.709 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # bdevperf_pid=108905 00:23:31.709 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:31.709 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # waitforlisten 108905 /var/tmp/bdevperf.sock 00:23:31.709 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 108905 ']' 00:23:31.709 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:31.709 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:31.709 12:07:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:31.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:31.709 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:31.709 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.709 [2024-12-05 12:07:05.747277] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:23:31.709 [2024-12-05 12:07:05.747330] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108905 ] 00:23:31.709 [2024-12-05 12:07:05.821720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.709 [2024-12-05 12:07:05.863710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:31.967 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:31.967 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:31.967 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.paq7Z4SMlu 00:23:31.967 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:32.225 [2024-12-05 12:07:06.307508] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered 
experimental 00:23:32.225 nvme0n1 00:23:32.225 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:32.483 Running I/O for 1 seconds... 00:23:33.418 5468.00 IOPS, 21.36 MiB/s 00:23:33.418 Latency(us) 00:23:33.418 [2024-12-05T11:07:07.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.418 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:33.418 Verification LBA range: start 0x0 length 0x2000 00:23:33.418 nvme0n1 : 1.01 5515.69 21.55 0.00 0.00 23021.67 6023.07 27587.54 00:23:33.418 [2024-12-05T11:07:07.614Z] =================================================================================================================== 00:23:33.418 [2024-12-05T11:07:07.614Z] Total : 5515.69 21.55 0.00 0.00 23021.67 6023.07 27587.54 00:23:33.418 { 00:23:33.418 "results": [ 00:23:33.418 { 00:23:33.418 "job": "nvme0n1", 00:23:33.418 "core_mask": "0x2", 00:23:33.418 "workload": "verify", 00:23:33.418 "status": "finished", 00:23:33.418 "verify_range": { 00:23:33.418 "start": 0, 00:23:33.418 "length": 8192 00:23:33.418 }, 00:23:33.418 "queue_depth": 128, 00:23:33.418 "io_size": 4096, 00:23:33.418 "runtime": 1.014742, 00:23:33.418 "iops": 5515.687731462775, 00:23:33.418 "mibps": 21.545655201026467, 00:23:33.418 "io_failed": 0, 00:23:33.418 "io_timeout": 0, 00:23:33.418 "avg_latency_us": 23021.668764729406, 00:23:33.418 "min_latency_us": 6023.070476190476, 00:23:33.418 "max_latency_us": 27587.53523809524 00:23:33.418 } 00:23:33.418 ], 00:23:33.418 "core_count": 1 00:23:33.418 } 00:23:33.418 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@231 -- # killprocess 108905 00:23:33.418 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 108905 ']' 00:23:33.418 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 108905 00:23:33.418 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:33.418 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:33.418 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108905 00:23:33.418 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:33.418 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:33.418 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108905' 00:23:33.418 killing process with pid 108905 00:23:33.418 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 108905 00:23:33.418 Received shutdown signal, test time was about 1.000000 seconds 00:23:33.418 00:23:33.418 Latency(us) 00:23:33.418 [2024-12-05T11:07:07.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.418 [2024-12-05T11:07:07.614Z] =================================================================================================================== 00:23:33.418 [2024-12-05T11:07:07.614Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:33.418 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 108905 00:23:33.677 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # killprocess 108641 00:23:33.677 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 108641 ']' 00:23:33.677 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 108641 00:23:33.677 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:33.677 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:33.677 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108641 00:23:33.677 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:33.677 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:33.677 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108641' 00:23:33.677 killing process with pid 108641 00:23:33.677 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 108641 00:23:33.677 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 108641 00:23:33.936 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # nvmfappstart 00:23:33.937 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:23:33.937 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:33.937 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.937 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=109275 00:23:33.937 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 109275 00:23:33.937 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:33.937 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 109275 ']' 00:23:33.937 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.937 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:23:33.937 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:33.937 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:33.937 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.937 [2024-12-05 12:07:08.012284] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:23:33.937 [2024-12-05 12:07:08.012330] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:33.937 [2024-12-05 12:07:08.088862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.937 [2024-12-05 12:07:08.128988] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:33.937 [2024-12-05 12:07:08.129026] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:33.937 [2024-12-05 12:07:08.129033] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:33.937 [2024-12-05 12:07:08.129039] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:33.937 [2024-12-05 12:07:08.129044] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:33.937 [2024-12-05 12:07:08.129646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.196 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:34.196 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:34.196 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:23:34.196 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:34.196 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.196 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:34.196 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@238 -- # rpc_cmd 00:23:34.196 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.196 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.196 [2024-12-05 12:07:08.261786] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:34.196 malloc0 00:23:34.196 [2024-12-05 12:07:08.289867] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:34.196 [2024-12-05 12:07:08.290081] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:34.196 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.196 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@251 -- # bdevperf_pid=109388 00:23:34.196 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@249 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:34.196 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@253 -- # waitforlisten 109388 /var/tmp/bdevperf.sock 00:23:34.196 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 109388 ']' 00:23:34.196 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:34.196 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:34.196 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:34.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:34.196 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:34.196 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.196 [2024-12-05 12:07:08.365068] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:23:34.196 [2024-12-05 12:07:08.365108] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109388 ] 00:23:34.455 [2024-12-05 12:07:08.437828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.455 [2024-12-05 12:07:08.477908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:34.455 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:34.455 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:34.455 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.paq7Z4SMlu 00:23:34.713 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:34.972 [2024-12-05 12:07:08.926293] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:34.972 nvme0n1 00:23:34.972 12:07:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:34.972 Running I/O for 1 seconds... 
00:23:36.190 5239.00 IOPS, 20.46 MiB/s 00:23:36.190 Latency(us) 00:23:36.190 [2024-12-05T11:07:10.386Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.190 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:36.190 Verification LBA range: start 0x0 length 0x2000 00:23:36.190 nvme0n1 : 1.02 5274.31 20.60 0.00 0.00 24079.28 4774.77 22843.98 00:23:36.190 [2024-12-05T11:07:10.386Z] =================================================================================================================== 00:23:36.190 [2024-12-05T11:07:10.386Z] Total : 5274.31 20.60 0.00 0.00 24079.28 4774.77 22843.98 00:23:36.190 { 00:23:36.190 "results": [ 00:23:36.190 { 00:23:36.190 "job": "nvme0n1", 00:23:36.190 "core_mask": "0x2", 00:23:36.190 "workload": "verify", 00:23:36.190 "status": "finished", 00:23:36.190 "verify_range": { 00:23:36.190 "start": 0, 00:23:36.190 "length": 8192 00:23:36.190 }, 00:23:36.190 "queue_depth": 128, 00:23:36.190 "io_size": 4096, 00:23:36.190 "runtime": 1.017763, 00:23:36.190 "iops": 5274.312389033596, 00:23:36.190 "mibps": 20.602782769662486, 00:23:36.190 "io_failed": 0, 00:23:36.190 "io_timeout": 0, 00:23:36.190 "avg_latency_us": 24079.276792278757, 00:23:36.190 "min_latency_us": 4774.765714285714, 00:23:36.190 "max_latency_us": 22843.977142857144 00:23:36.190 } 00:23:36.190 ], 00:23:36.190 "core_count": 1 00:23:36.190 } 00:23:36.190 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # rpc_cmd save_config 00:23:36.190 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.190 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.190 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.190 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # tgtcfg='{ 00:23:36.190 "subsystems": [ 00:23:36.190 { 00:23:36.190 "subsystem": 
"keyring", 00:23:36.190 "config": [ 00:23:36.190 { 00:23:36.190 "method": "keyring_file_add_key", 00:23:36.190 "params": { 00:23:36.190 "name": "key0", 00:23:36.190 "path": "/tmp/tmp.paq7Z4SMlu" 00:23:36.190 } 00:23:36.190 } 00:23:36.190 ] 00:23:36.190 }, 00:23:36.190 { 00:23:36.190 "subsystem": "iobuf", 00:23:36.190 "config": [ 00:23:36.190 { 00:23:36.190 "method": "iobuf_set_options", 00:23:36.190 "params": { 00:23:36.190 "small_pool_count": 8192, 00:23:36.190 "large_pool_count": 1024, 00:23:36.190 "small_bufsize": 8192, 00:23:36.190 "large_bufsize": 135168, 00:23:36.190 "enable_numa": false 00:23:36.190 } 00:23:36.190 } 00:23:36.190 ] 00:23:36.190 }, 00:23:36.190 { 00:23:36.190 "subsystem": "sock", 00:23:36.190 "config": [ 00:23:36.190 { 00:23:36.190 "method": "sock_set_default_impl", 00:23:36.190 "params": { 00:23:36.190 "impl_name": "posix" 00:23:36.190 } 00:23:36.191 }, 00:23:36.191 { 00:23:36.191 "method": "sock_impl_set_options", 00:23:36.191 "params": { 00:23:36.191 "impl_name": "ssl", 00:23:36.191 "recv_buf_size": 4096, 00:23:36.191 "send_buf_size": 4096, 00:23:36.191 "enable_recv_pipe": true, 00:23:36.191 "enable_quickack": false, 00:23:36.191 "enable_placement_id": 0, 00:23:36.191 "enable_zerocopy_send_server": true, 00:23:36.191 "enable_zerocopy_send_client": false, 00:23:36.191 "zerocopy_threshold": 0, 00:23:36.191 "tls_version": 0, 00:23:36.191 "enable_ktls": false 00:23:36.191 } 00:23:36.191 }, 00:23:36.191 { 00:23:36.191 "method": "sock_impl_set_options", 00:23:36.191 "params": { 00:23:36.191 "impl_name": "posix", 00:23:36.191 "recv_buf_size": 2097152, 00:23:36.191 "send_buf_size": 2097152, 00:23:36.191 "enable_recv_pipe": true, 00:23:36.191 "enable_quickack": false, 00:23:36.191 "enable_placement_id": 0, 00:23:36.191 "enable_zerocopy_send_server": true, 00:23:36.191 "enable_zerocopy_send_client": false, 00:23:36.191 "zerocopy_threshold": 0, 00:23:36.191 "tls_version": 0, 00:23:36.191 "enable_ktls": false 00:23:36.191 } 00:23:36.191 } 00:23:36.191 
] 00:23:36.191 }, 00:23:36.191 { 00:23:36.191 "subsystem": "vmd", 00:23:36.191 "config": [] 00:23:36.191 }, 00:23:36.191 { 00:23:36.191 "subsystem": "accel", 00:23:36.191 "config": [ 00:23:36.191 { 00:23:36.191 "method": "accel_set_options", 00:23:36.191 "params": { 00:23:36.191 "small_cache_size": 128, 00:23:36.191 "large_cache_size": 16, 00:23:36.191 "task_count": 2048, 00:23:36.191 "sequence_count": 2048, 00:23:36.191 "buf_count": 2048 00:23:36.191 } 00:23:36.191 } 00:23:36.191 ] 00:23:36.191 }, 00:23:36.191 { 00:23:36.191 "subsystem": "bdev", 00:23:36.191 "config": [ 00:23:36.191 { 00:23:36.191 "method": "bdev_set_options", 00:23:36.191 "params": { 00:23:36.191 "bdev_io_pool_size": 65535, 00:23:36.191 "bdev_io_cache_size": 256, 00:23:36.191 "bdev_auto_examine": true, 00:23:36.191 "iobuf_small_cache_size": 128, 00:23:36.191 "iobuf_large_cache_size": 16 00:23:36.191 } 00:23:36.191 }, 00:23:36.191 { 00:23:36.191 "method": "bdev_raid_set_options", 00:23:36.191 "params": { 00:23:36.191 "process_window_size_kb": 1024, 00:23:36.191 "process_max_bandwidth_mb_sec": 0 00:23:36.191 } 00:23:36.191 }, 00:23:36.191 { 00:23:36.191 "method": "bdev_iscsi_set_options", 00:23:36.191 "params": { 00:23:36.191 "timeout_sec": 30 00:23:36.191 } 00:23:36.191 }, 00:23:36.191 { 00:23:36.191 "method": "bdev_nvme_set_options", 00:23:36.191 "params": { 00:23:36.191 "action_on_timeout": "none", 00:23:36.191 "timeout_us": 0, 00:23:36.191 "timeout_admin_us": 0, 00:23:36.191 "keep_alive_timeout_ms": 10000, 00:23:36.191 "arbitration_burst": 0, 00:23:36.191 "low_priority_weight": 0, 00:23:36.191 "medium_priority_weight": 0, 00:23:36.191 "high_priority_weight": 0, 00:23:36.191 "nvme_adminq_poll_period_us": 10000, 00:23:36.191 "nvme_ioq_poll_period_us": 0, 00:23:36.191 "io_queue_requests": 0, 00:23:36.191 "delay_cmd_submit": true, 00:23:36.191 "transport_retry_count": 4, 00:23:36.191 "bdev_retry_count": 3, 00:23:36.191 "transport_ack_timeout": 0, 00:23:36.191 "ctrlr_loss_timeout_sec": 0, 
00:23:36.191 "reconnect_delay_sec": 0, 00:23:36.191 "fast_io_fail_timeout_sec": 0, 00:23:36.191 "disable_auto_failback": false, 00:23:36.191 "generate_uuids": false, 00:23:36.191 "transport_tos": 0, 00:23:36.191 "nvme_error_stat": false, 00:23:36.191 "rdma_srq_size": 0, 00:23:36.191 "io_path_stat": false, 00:23:36.191 "allow_accel_sequence": false, 00:23:36.191 "rdma_max_cq_size": 0, 00:23:36.191 "rdma_cm_event_timeout_ms": 0, 00:23:36.191 "dhchap_digests": [ 00:23:36.191 "sha256", 00:23:36.191 "sha384", 00:23:36.191 "sha512" 00:23:36.191 ], 00:23:36.191 "dhchap_dhgroups": [ 00:23:36.191 "null", 00:23:36.191 "ffdhe2048", 00:23:36.191 "ffdhe3072", 00:23:36.191 "ffdhe4096", 00:23:36.191 "ffdhe6144", 00:23:36.191 "ffdhe8192" 00:23:36.191 ] 00:23:36.191 } 00:23:36.191 }, 00:23:36.191 { 00:23:36.191 "method": "bdev_nvme_set_hotplug", 00:23:36.191 "params": { 00:23:36.191 "period_us": 100000, 00:23:36.191 "enable": false 00:23:36.191 } 00:23:36.191 }, 00:23:36.191 { 00:23:36.191 "method": "bdev_malloc_create", 00:23:36.191 "params": { 00:23:36.191 "name": "malloc0", 00:23:36.191 "num_blocks": 8192, 00:23:36.191 "block_size": 4096, 00:23:36.191 "physical_block_size": 4096, 00:23:36.191 "uuid": "11d63dc3-1b89-4777-942a-483ece76202d", 00:23:36.191 "optimal_io_boundary": 0, 00:23:36.191 "md_size": 0, 00:23:36.191 "dif_type": 0, 00:23:36.191 "dif_is_head_of_md": false, 00:23:36.191 "dif_pi_format": 0 00:23:36.191 } 00:23:36.191 }, 00:23:36.191 { 00:23:36.191 "method": "bdev_wait_for_examine" 00:23:36.191 } 00:23:36.191 ] 00:23:36.191 }, 00:23:36.191 { 00:23:36.191 "subsystem": "nbd", 00:23:36.191 "config": [] 00:23:36.191 }, 00:23:36.191 { 00:23:36.191 "subsystem": "scheduler", 00:23:36.191 "config": [ 00:23:36.191 { 00:23:36.191 "method": "framework_set_scheduler", 00:23:36.191 "params": { 00:23:36.191 "name": "static" 00:23:36.191 } 00:23:36.191 } 00:23:36.191 ] 00:23:36.191 }, 00:23:36.191 { 00:23:36.191 "subsystem": "nvmf", 00:23:36.191 "config": [ 00:23:36.191 { 
00:23:36.191 "method": "nvmf_set_config", 00:23:36.191 "params": { 00:23:36.191 "discovery_filter": "match_any", 00:23:36.191 "admin_cmd_passthru": { 00:23:36.191 "identify_ctrlr": false 00:23:36.191 }, 00:23:36.191 "dhchap_digests": [ 00:23:36.191 "sha256", 00:23:36.191 "sha384", 00:23:36.191 "sha512" 00:23:36.191 ], 00:23:36.191 "dhchap_dhgroups": [ 00:23:36.191 "null", 00:23:36.191 "ffdhe2048", 00:23:36.191 "ffdhe3072", 00:23:36.191 "ffdhe4096", 00:23:36.191 "ffdhe6144", 00:23:36.191 "ffdhe8192" 00:23:36.191 ] 00:23:36.191 } 00:23:36.191 }, 00:23:36.191 { 00:23:36.191 "method": "nvmf_set_max_subsystems", 00:23:36.191 "params": { 00:23:36.191 "max_subsystems": 1024 00:23:36.191 } 00:23:36.191 }, 00:23:36.191 { 00:23:36.191 "method": "nvmf_set_crdt", 00:23:36.191 "params": { 00:23:36.191 "crdt1": 0, 00:23:36.191 "crdt2": 0, 00:23:36.191 "crdt3": 0 00:23:36.191 } 00:23:36.191 }, 00:23:36.191 { 00:23:36.191 "method": "nvmf_create_transport", 00:23:36.191 "params": { 00:23:36.191 "trtype": "TCP", 00:23:36.191 "max_queue_depth": 128, 00:23:36.191 "max_io_qpairs_per_ctrlr": 127, 00:23:36.191 "in_capsule_data_size": 4096, 00:23:36.191 "max_io_size": 131072, 00:23:36.191 "io_unit_size": 131072, 00:23:36.191 "max_aq_depth": 128, 00:23:36.191 "num_shared_buffers": 511, 00:23:36.191 "buf_cache_size": 4294967295, 00:23:36.191 "dif_insert_or_strip": false, 00:23:36.191 "zcopy": false, 00:23:36.191 "c2h_success": false, 00:23:36.191 "sock_priority": 0, 00:23:36.191 "abort_timeout_sec": 1, 00:23:36.191 "ack_timeout": 0, 00:23:36.191 "data_wr_pool_size": 0 00:23:36.191 } 00:23:36.191 }, 00:23:36.191 { 00:23:36.191 "method": "nvmf_create_subsystem", 00:23:36.191 "params": { 00:23:36.191 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.191 "allow_any_host": false, 00:23:36.191 "serial_number": "00000000000000000000", 00:23:36.191 "model_number": "SPDK bdev Controller", 00:23:36.191 "max_namespaces": 32, 00:23:36.191 "min_cntlid": 1, 00:23:36.191 "max_cntlid": 65519, 00:23:36.191 
"ana_reporting": false 00:23:36.191 } 00:23:36.191 }, 00:23:36.191 { 00:23:36.191 "method": "nvmf_subsystem_add_host", 00:23:36.191 "params": { 00:23:36.191 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.191 "host": "nqn.2016-06.io.spdk:host1", 00:23:36.191 "psk": "key0" 00:23:36.191 } 00:23:36.191 }, 00:23:36.191 { 00:23:36.191 "method": "nvmf_subsystem_add_ns", 00:23:36.191 "params": { 00:23:36.191 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.191 "namespace": { 00:23:36.191 "nsid": 1, 00:23:36.191 "bdev_name": "malloc0", 00:23:36.191 "nguid": "11D63DC31B894777942A483ECE76202D", 00:23:36.191 "uuid": "11d63dc3-1b89-4777-942a-483ece76202d", 00:23:36.191 "no_auto_visible": false 00:23:36.191 } 00:23:36.191 } 00:23:36.191 }, 00:23:36.191 { 00:23:36.191 "method": "nvmf_subsystem_add_listener", 00:23:36.191 "params": { 00:23:36.191 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.191 "listen_address": { 00:23:36.191 "trtype": "TCP", 00:23:36.191 "adrfam": "IPv4", 00:23:36.191 "traddr": "10.0.0.2", 00:23:36.191 "trsvcid": "4420" 00:23:36.191 }, 00:23:36.191 "secure_channel": false, 00:23:36.191 "sock_impl": "ssl" 00:23:36.191 } 00:23:36.191 } 00:23:36.191 ] 00:23:36.191 } 00:23:36.191 ] 00:23:36.191 }' 00:23:36.191 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@263 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:36.449 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@263 -- # bperfcfg='{ 00:23:36.449 "subsystems": [ 00:23:36.449 { 00:23:36.449 "subsystem": "keyring", 00:23:36.449 "config": [ 00:23:36.449 { 00:23:36.449 "method": "keyring_file_add_key", 00:23:36.449 "params": { 00:23:36.449 "name": "key0", 00:23:36.449 "path": "/tmp/tmp.paq7Z4SMlu" 00:23:36.449 } 00:23:36.449 } 00:23:36.449 ] 00:23:36.449 }, 00:23:36.449 { 00:23:36.449 "subsystem": "iobuf", 00:23:36.449 "config": [ 00:23:36.449 { 00:23:36.449 "method": "iobuf_set_options", 00:23:36.449 "params": { 00:23:36.449 
"small_pool_count": 8192, 00:23:36.449 "large_pool_count": 1024, 00:23:36.449 "small_bufsize": 8192, 00:23:36.449 "large_bufsize": 135168, 00:23:36.449 "enable_numa": false 00:23:36.449 } 00:23:36.449 } 00:23:36.449 ] 00:23:36.449 }, 00:23:36.449 { 00:23:36.449 "subsystem": "sock", 00:23:36.449 "config": [ 00:23:36.449 { 00:23:36.449 "method": "sock_set_default_impl", 00:23:36.449 "params": { 00:23:36.449 "impl_name": "posix" 00:23:36.449 } 00:23:36.449 }, 00:23:36.449 { 00:23:36.449 "method": "sock_impl_set_options", 00:23:36.449 "params": { 00:23:36.449 "impl_name": "ssl", 00:23:36.449 "recv_buf_size": 4096, 00:23:36.449 "send_buf_size": 4096, 00:23:36.449 "enable_recv_pipe": true, 00:23:36.449 "enable_quickack": false, 00:23:36.449 "enable_placement_id": 0, 00:23:36.449 "enable_zerocopy_send_server": true, 00:23:36.449 "enable_zerocopy_send_client": false, 00:23:36.449 "zerocopy_threshold": 0, 00:23:36.449 "tls_version": 0, 00:23:36.449 "enable_ktls": false 00:23:36.449 } 00:23:36.449 }, 00:23:36.449 { 00:23:36.449 "method": "sock_impl_set_options", 00:23:36.449 "params": { 00:23:36.449 "impl_name": "posix", 00:23:36.449 "recv_buf_size": 2097152, 00:23:36.449 "send_buf_size": 2097152, 00:23:36.449 "enable_recv_pipe": true, 00:23:36.449 "enable_quickack": false, 00:23:36.449 "enable_placement_id": 0, 00:23:36.449 "enable_zerocopy_send_server": true, 00:23:36.449 "enable_zerocopy_send_client": false, 00:23:36.449 "zerocopy_threshold": 0, 00:23:36.449 "tls_version": 0, 00:23:36.449 "enable_ktls": false 00:23:36.449 } 00:23:36.449 } 00:23:36.449 ] 00:23:36.449 }, 00:23:36.449 { 00:23:36.449 "subsystem": "vmd", 00:23:36.449 "config": [] 00:23:36.449 }, 00:23:36.449 { 00:23:36.449 "subsystem": "accel", 00:23:36.449 "config": [ 00:23:36.449 { 00:23:36.449 "method": "accel_set_options", 00:23:36.449 "params": { 00:23:36.449 "small_cache_size": 128, 00:23:36.449 "large_cache_size": 16, 00:23:36.449 "task_count": 2048, 00:23:36.449 "sequence_count": 2048, 00:23:36.449 
"buf_count": 2048 00:23:36.449 } 00:23:36.449 } 00:23:36.449 ] 00:23:36.449 }, 00:23:36.449 { 00:23:36.449 "subsystem": "bdev", 00:23:36.449 "config": [ 00:23:36.449 { 00:23:36.449 "method": "bdev_set_options", 00:23:36.449 "params": { 00:23:36.449 "bdev_io_pool_size": 65535, 00:23:36.449 "bdev_io_cache_size": 256, 00:23:36.449 "bdev_auto_examine": true, 00:23:36.449 "iobuf_small_cache_size": 128, 00:23:36.449 "iobuf_large_cache_size": 16 00:23:36.449 } 00:23:36.449 }, 00:23:36.449 { 00:23:36.449 "method": "bdev_raid_set_options", 00:23:36.449 "params": { 00:23:36.449 "process_window_size_kb": 1024, 00:23:36.449 "process_max_bandwidth_mb_sec": 0 00:23:36.449 } 00:23:36.449 }, 00:23:36.449 { 00:23:36.449 "method": "bdev_iscsi_set_options", 00:23:36.449 "params": { 00:23:36.450 "timeout_sec": 30 00:23:36.450 } 00:23:36.450 }, 00:23:36.450 { 00:23:36.450 "method": "bdev_nvme_set_options", 00:23:36.450 "params": { 00:23:36.450 "action_on_timeout": "none", 00:23:36.450 "timeout_us": 0, 00:23:36.450 "timeout_admin_us": 0, 00:23:36.450 "keep_alive_timeout_ms": 10000, 00:23:36.450 "arbitration_burst": 0, 00:23:36.450 "low_priority_weight": 0, 00:23:36.450 "medium_priority_weight": 0, 00:23:36.450 "high_priority_weight": 0, 00:23:36.450 "nvme_adminq_poll_period_us": 10000, 00:23:36.450 "nvme_ioq_poll_period_us": 0, 00:23:36.450 "io_queue_requests": 512, 00:23:36.450 "delay_cmd_submit": true, 00:23:36.450 "transport_retry_count": 4, 00:23:36.450 "bdev_retry_count": 3, 00:23:36.450 "transport_ack_timeout": 0, 00:23:36.450 "ctrlr_loss_timeout_sec": 0, 00:23:36.450 "reconnect_delay_sec": 0, 00:23:36.450 "fast_io_fail_timeout_sec": 0, 00:23:36.450 "disable_auto_failback": false, 00:23:36.450 "generate_uuids": false, 00:23:36.450 "transport_tos": 0, 00:23:36.450 "nvme_error_stat": false, 00:23:36.450 "rdma_srq_size": 0, 00:23:36.450 "io_path_stat": false, 00:23:36.450 "allow_accel_sequence": false, 00:23:36.450 "rdma_max_cq_size": 0, 00:23:36.450 "rdma_cm_event_timeout_ms": 0, 
00:23:36.450 "dhchap_digests": [ 00:23:36.450 "sha256", 00:23:36.450 "sha384", 00:23:36.450 "sha512" 00:23:36.450 ], 00:23:36.450 "dhchap_dhgroups": [ 00:23:36.450 "null", 00:23:36.450 "ffdhe2048", 00:23:36.450 "ffdhe3072", 00:23:36.450 "ffdhe4096", 00:23:36.450 "ffdhe6144", 00:23:36.450 "ffdhe8192" 00:23:36.450 ] 00:23:36.450 } 00:23:36.450 }, 00:23:36.450 { 00:23:36.450 "method": "bdev_nvme_attach_controller", 00:23:36.450 "params": { 00:23:36.450 "name": "nvme0", 00:23:36.450 "trtype": "TCP", 00:23:36.450 "adrfam": "IPv4", 00:23:36.450 "traddr": "10.0.0.2", 00:23:36.450 "trsvcid": "4420", 00:23:36.450 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.450 "prchk_reftag": false, 00:23:36.450 "prchk_guard": false, 00:23:36.450 "ctrlr_loss_timeout_sec": 0, 00:23:36.450 "reconnect_delay_sec": 0, 00:23:36.450 "fast_io_fail_timeout_sec": 0, 00:23:36.450 "psk": "key0", 00:23:36.450 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:36.450 "hdgst": false, 00:23:36.450 "ddgst": false, 00:23:36.450 "multipath": "multipath" 00:23:36.450 } 00:23:36.450 }, 00:23:36.450 { 00:23:36.450 "method": "bdev_nvme_set_hotplug", 00:23:36.450 "params": { 00:23:36.450 "period_us": 100000, 00:23:36.450 "enable": false 00:23:36.450 } 00:23:36.450 }, 00:23:36.450 { 00:23:36.450 "method": "bdev_enable_histogram", 00:23:36.450 "params": { 00:23:36.450 "name": "nvme0n1", 00:23:36.450 "enable": true 00:23:36.450 } 00:23:36.450 }, 00:23:36.450 { 00:23:36.450 "method": "bdev_wait_for_examine" 00:23:36.450 } 00:23:36.450 ] 00:23:36.450 }, 00:23:36.450 { 00:23:36.450 "subsystem": "nbd", 00:23:36.450 "config": [] 00:23:36.450 } 00:23:36.450 ] 00:23:36.450 }' 00:23:36.450 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # killprocess 109388 00:23:36.450 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 109388 ']' 00:23:36.450 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 109388 00:23:36.450 12:07:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:36.450 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:36.450 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109388 00:23:36.450 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:36.450 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:36.450 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109388' 00:23:36.450 killing process with pid 109388 00:23:36.450 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 109388 00:23:36.450 Received shutdown signal, test time was about 1.000000 seconds 00:23:36.450 00:23:36.450 Latency(us) 00:23:36.450 [2024-12-05T11:07:10.646Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.450 [2024-12-05T11:07:10.646Z] =================================================================================================================== 00:23:36.450 [2024-12-05T11:07:10.646Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:36.450 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 109388 00:23:36.707 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # killprocess 109275 00:23:36.707 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 109275 ']' 00:23:36.707 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 109275 00:23:36.707 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:36.707 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:36.707 12:07:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109275 00:23:36.707 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:36.707 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:36.707 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109275' 00:23:36.707 killing process with pid 109275 00:23:36.707 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 109275 00:23:36.707 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 109275 00:23:36.965 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # nvmfappstart -c /dev/fd/62 00:23:36.965 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:23:36.965 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:36.965 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # echo '{ 00:23:36.965 "subsystems": [ 00:23:36.965 { 00:23:36.965 "subsystem": "keyring", 00:23:36.965 "config": [ 00:23:36.965 { 00:23:36.965 "method": "keyring_file_add_key", 00:23:36.965 "params": { 00:23:36.965 "name": "key0", 00:23:36.965 "path": "/tmp/tmp.paq7Z4SMlu" 00:23:36.965 } 00:23:36.965 } 00:23:36.965 ] 00:23:36.965 }, 00:23:36.965 { 00:23:36.965 "subsystem": "iobuf", 00:23:36.965 "config": [ 00:23:36.965 { 00:23:36.965 "method": "iobuf_set_options", 00:23:36.965 "params": { 00:23:36.965 "small_pool_count": 8192, 00:23:36.965 "large_pool_count": 1024, 00:23:36.965 "small_bufsize": 8192, 00:23:36.965 "large_bufsize": 135168, 00:23:36.965 "enable_numa": false 00:23:36.965 } 00:23:36.965 } 00:23:36.965 ] 00:23:36.965 }, 00:23:36.965 { 00:23:36.965 "subsystem": "sock", 00:23:36.965 "config": [ 00:23:36.965 { 
00:23:36.965 "method": "sock_set_default_impl", 00:23:36.965 "params": { 00:23:36.965 "impl_name": "posix" 00:23:36.965 } 00:23:36.965 }, 00:23:36.965 { 00:23:36.965 "method": "sock_impl_set_options", 00:23:36.965 "params": { 00:23:36.965 "impl_name": "ssl", 00:23:36.965 "recv_buf_size": 4096, 00:23:36.965 "send_buf_size": 4096, 00:23:36.965 "enable_recv_pipe": true, 00:23:36.965 "enable_quickack": false, 00:23:36.965 "enable_placement_id": 0, 00:23:36.965 "enable_zerocopy_send_server": true, 00:23:36.965 "enable_zerocopy_send_client": false, 00:23:36.965 "zerocopy_threshold": 0, 00:23:36.965 "tls_version": 0, 00:23:36.965 "enable_ktls": false 00:23:36.965 } 00:23:36.965 }, 00:23:36.965 { 00:23:36.965 "method": "sock_impl_set_options", 00:23:36.965 "params": { 00:23:36.965 "impl_name": "posix", 00:23:36.965 "recv_buf_size": 2097152, 00:23:36.965 "send_buf_size": 2097152, 00:23:36.965 "enable_recv_pipe": true, 00:23:36.965 "enable_quickack": false, 00:23:36.965 "enable_placement_id": 0, 00:23:36.965 "enable_zerocopy_send_server": true, 00:23:36.965 "enable_zerocopy_send_client": false, 00:23:36.965 "zerocopy_threshold": 0, 00:23:36.965 "tls_version": 0, 00:23:36.965 "enable_ktls": false 00:23:36.965 } 00:23:36.965 } 00:23:36.965 ] 00:23:36.965 }, 00:23:36.965 { 00:23:36.965 "subsystem": "vmd", 00:23:36.965 "config": [] 00:23:36.965 }, 00:23:36.965 { 00:23:36.965 "subsystem": "accel", 00:23:36.965 "config": [ 00:23:36.965 { 00:23:36.965 "method": "accel_set_options", 00:23:36.965 "params": { 00:23:36.965 "small_cache_size": 128, 00:23:36.965 "large_cache_size": 16, 00:23:36.965 "task_count": 2048, 00:23:36.965 "sequence_count": 2048, 00:23:36.965 "buf_count": 2048 00:23:36.965 } 00:23:36.965 } 00:23:36.965 ] 00:23:36.965 }, 00:23:36.965 { 00:23:36.965 "subsystem": "bdev", 00:23:36.965 "config": [ 00:23:36.965 { 00:23:36.965 "method": "bdev_set_options", 00:23:36.965 "params": { 00:23:36.965 "bdev_io_pool_size": 65535, 00:23:36.965 "bdev_io_cache_size": 256, 
00:23:36.965 "bdev_auto_examine": true, 00:23:36.965 "iobuf_small_cache_size": 128, 00:23:36.965 "iobuf_large_cache_size": 16 00:23:36.965 } 00:23:36.965 }, 00:23:36.965 { 00:23:36.965 "method": "bdev_raid_set_options", 00:23:36.965 "params": { 00:23:36.966 "process_window_size_kb": 1024, 00:23:36.966 "process_max_bandwidth_mb_sec": 0 00:23:36.966 } 00:23:36.966 }, 00:23:36.966 { 00:23:36.966 "method": "bdev_iscsi_set_options", 00:23:36.966 "params": { 00:23:36.966 "timeout_sec": 30 00:23:36.966 } 00:23:36.966 }, 00:23:36.966 { 00:23:36.966 "method": "bdev_nvme_set_options", 00:23:36.966 "params": { 00:23:36.966 "action_on_timeout": "none", 00:23:36.966 "timeout_us": 0, 00:23:36.966 "timeout_admin_us": 0, 00:23:36.966 "keep_alive_timeout_ms": 10000, 00:23:36.966 "arbitration_burst": 0, 00:23:36.966 "low_priority_weight": 0, 00:23:36.966 "medium_priority_weight": 0, 00:23:36.966 "high_priority_weight": 0, 00:23:36.966 "nvme_adminq_poll_period_us": 10000, 00:23:36.966 "nvme_ioq_poll_period_us": 0, 00:23:36.966 "io_queue_requests": 0, 00:23:36.966 "delay_cmd_submit": true, 00:23:36.966 "transport_retry_count": 4, 00:23:36.966 "bdev_retry_count": 3, 00:23:36.966 "transport_ack_timeout": 0, 00:23:36.966 "ctrlr_loss_timeout_sec": 0, 00:23:36.966 "reconnect_delay_sec": 0, 00:23:36.966 "fast_io_fail_timeout_sec": 0, 00:23:36.966 "disable_auto_failback": false, 00:23:36.966 "generate_uuids": false, 00:23:36.966 "transport_tos": 0, 00:23:36.966 "nvme_error_stat": false, 00:23:36.966 "rdma_srq_size": 0, 00:23:36.966 "io_path_stat": false, 00:23:36.966 "allow_accel_sequence": false, 00:23:36.966 "rdma_max_cq_size": 0, 00:23:36.966 "rdma_cm_event_timeout_ms": 0, 00:23:36.966 "dhchap_digests": [ 00:23:36.966 "sha256", 00:23:36.966 "sha384", 00:23:36.966 "sha512" 00:23:36.966 ], 00:23:36.966 "dhchap_dhgroups": [ 00:23:36.966 "null", 00:23:36.966 "ffdhe2048", 00:23:36.966 "ffdhe3072", 00:23:36.966 "ffdhe4096", 00:23:36.966 "ffdhe6144", 00:23:36.966 "ffdhe8192" 00:23:36.966 ] 
00:23:36.966 } 00:23:36.966 }, 00:23:36.966 { 00:23:36.966 "method": "bdev_nvme_set_hotplug", 00:23:36.966 "params": { 00:23:36.966 "period_us": 100000, 00:23:36.966 "enable": false 00:23:36.966 } 00:23:36.966 }, 00:23:36.966 { 00:23:36.966 "method": "bdev_malloc_create", 00:23:36.966 "params": { 00:23:36.966 "name": "malloc0", 00:23:36.966 "num_blocks": 8192, 00:23:36.966 "block_size": 4096, 00:23:36.966 "physical_block_size": 4096, 00:23:36.966 "uuid": "11d63dc3-1b89-4777-942a-483ece76202d", 00:23:36.966 "optimal_io_boundary": 0, 00:23:36.966 "md_size": 0, 00:23:36.966 "dif_type": 0, 00:23:36.966 "dif_is_head_of_md": false, 00:23:36.966 "dif_pi_format": 0 00:23:36.966 } 00:23:36.966 }, 00:23:36.966 { 00:23:36.966 "method": "bdev_wait_for_examine" 00:23:36.966 } 00:23:36.966 ] 00:23:36.966 }, 00:23:36.966 { 00:23:36.966 "subsystem": "nbd", 00:23:36.966 "config": [] 00:23:36.966 }, 00:23:36.966 { 00:23:36.966 "subsystem": "scheduler", 00:23:36.966 "config": [ 00:23:36.966 { 00:23:36.966 "method": "framework_set_scheduler", 00:23:36.966 "params": { 00:23:36.966 "name": "static" 00:23:36.966 } 00:23:36.966 } 00:23:36.966 ] 00:23:36.966 }, 00:23:36.966 { 00:23:36.966 "subsystem": "nvmf", 00:23:36.966 "config": [ 00:23:36.966 { 00:23:36.966 "method": "nvmf_set_config", 00:23:36.966 "params": { 00:23:36.966 "discovery_filter": "match_any", 00:23:36.966 "admin_cmd_passthru": { 00:23:36.966 "identify_ctrlr": false 00:23:36.966 }, 00:23:36.966 "dhchap_digests": [ 00:23:36.966 "sha256", 00:23:36.966 "sha384", 00:23:36.966 "sha512" 00:23:36.966 ], 00:23:36.966 "dhchap_dhgroups": [ 00:23:36.966 "null", 00:23:36.966 "ffdhe2048", 00:23:36.966 "ffdhe3072", 00:23:36.966 "ffdhe4096", 00:23:36.966 "ffdhe6144", 00:23:36.966 "ffdhe8192" 00:23:36.966 ] 00:23:36.966 } 00:23:36.966 }, 00:23:36.966 { 00:23:36.966 "method": "nvmf_set_max_subsystems", 00:23:36.966 "params": { 00:23:36.966 "max_subsystems": 1024 00:23:36.966 } 00:23:36.966 }, 00:23:36.966 { 00:23:36.966 "method": 
"nvmf_set_crdt", 00:23:36.966 "params": { 00:23:36.966 "crdt1": 0, 00:23:36.966 "crdt2": 0, 00:23:36.966 "crdt3": 0 00:23:36.966 } 00:23:36.966 }, 00:23:36.966 { 00:23:36.966 "method": "nvmf_create_transport", 00:23:36.966 "params": { 00:23:36.966 "trtype": "TCP", 00:23:36.966 "max_queue_depth": 128, 00:23:36.966 "max_io_qpairs_per_ctrlr": 127, 00:23:36.966 "in_capsule_data_size": 4096, 00:23:36.966 "max_io_size": 131072, 00:23:36.966 "io_unit_size": 131072, 00:23:36.966 "max_aq_depth": 128, 00:23:36.966 "num_shared_buffers": 511, 00:23:36.966 "buf_cache_size": 4294967295, 00:23:36.966 "dif_insert_or_strip": false, 00:23:36.966 "zcopy": false, 00:23:36.966 "c2h_success": false, 00:23:36.966 "sock_priority": 0, 00:23:36.966 "abort_timeout_sec": 1, 00:23:36.966 "ack_timeout": 0, 00:23:36.966 "data_wr_pool_size": 0 00:23:36.966 } 00:23:36.966 }, 00:23:36.966 { 00:23:36.966 "method": "nvmf_create_subsystem", 00:23:36.966 "params": { 00:23:36.966 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.966 "allow_any_host": false, 00:23:36.966 "serial_number": "00000000000000000000", 00:23:36.966 "model_number": "SPDK bdev Controller", 00:23:36.966 "max_namespaces": 32, 00:23:36.966 "min_cntlid": 1, 00:23:36.966 "max_cntlid": 65519, 00:23:36.966 "ana_reporting": false 00:23:36.966 } 00:23:36.966 }, 00:23:36.966 { 00:23:36.966 "method": "nvmf_subsystem_add_host", 00:23:36.966 "params": { 00:23:36.966 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.966 "host": "nqn.2016-06.io.spdk:host1", 00:23:36.966 "psk": "key0" 00:23:36.966 } 00:23:36.966 }, 00:23:36.966 { 00:23:36.966 "method": "nvmf_subsystem_add_ns", 00:23:36.966 "params": { 00:23:36.966 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.966 "namespace": { 00:23:36.966 "nsid": 1, 00:23:36.966 "bdev_name": "malloc0", 00:23:36.966 "nguid": "11D63DC31B894777942A483ECE76202D", 00:23:36.966 "uuid": "11d63dc3-1b89-4777-942a-483ece76202d", 00:23:36.966 "no_auto_visible": false 00:23:36.966 } 00:23:36.966 } 00:23:36.966 }, 00:23:36.966 { 
00:23:36.966 "method": "nvmf_subsystem_add_listener", 00:23:36.966 "params": { 00:23:36.966 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.966 "listen_address": { 00:23:36.966 "trtype": "TCP", 00:23:36.966 "adrfam": "IPv4", 00:23:36.966 "traddr": "10.0.0.2", 00:23:36.966 "trsvcid": "4420" 00:23:36.966 }, 00:23:36.966 "secure_channel": false, 00:23:36.966 "sock_impl": "ssl" 00:23:36.966 } 00:23:36.966 } 00:23:36.966 ] 00:23:36.966 } 00:23:36.966 ] 00:23:36.966 }' 00:23:36.966 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.966 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=109865 00:23:36.966 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:36.966 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 109865 00:23:36.966 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 109865 ']' 00:23:36.966 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:36.966 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:36.966 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:36.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:36.966 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:36.966 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.966 [2024-12-05 12:07:11.018666] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:23:36.966 [2024-12-05 12:07:11.018714] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:36.966 [2024-12-05 12:07:11.095651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.966 [2024-12-05 12:07:11.135918] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:36.966 [2024-12-05 12:07:11.135951] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:36.966 [2024-12-05 12:07:11.135958] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:36.966 [2024-12-05 12:07:11.135964] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:36.966 [2024-12-05 12:07:11.135969] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:36.966 [2024-12-05 12:07:11.136551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.234 [2024-12-05 12:07:11.350884] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:37.234 [2024-12-05 12:07:11.382918] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:37.234 [2024-12-05 12:07:11.383133] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:37.800 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:37.800 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:37.800 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:23:37.800 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:37.800 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.800 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:37.800 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # bdevperf_pid=109901 00:23:37.800 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # waitforlisten 109901 /var/tmp/bdevperf.sock 00:23:37.800 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 109901 ']' 00:23:37.800 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:37.800 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:37.800 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:23:37.800 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:37.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:37.800 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:23:37.800 "subsystems": [ 00:23:37.800 { 00:23:37.800 "subsystem": "keyring", 00:23:37.800 "config": [ 00:23:37.800 { 00:23:37.800 "method": "keyring_file_add_key", 00:23:37.800 "params": { 00:23:37.800 "name": "key0", 00:23:37.800 "path": "/tmp/tmp.paq7Z4SMlu" 00:23:37.800 } 00:23:37.800 } 00:23:37.800 ] 00:23:37.800 }, 00:23:37.800 { 00:23:37.800 "subsystem": "iobuf", 00:23:37.800 "config": [ 00:23:37.800 { 00:23:37.800 "method": "iobuf_set_options", 00:23:37.800 "params": { 00:23:37.800 "small_pool_count": 8192, 00:23:37.800 "large_pool_count": 1024, 00:23:37.800 "small_bufsize": 8192, 00:23:37.800 "large_bufsize": 135168, 00:23:37.800 "enable_numa": false 00:23:37.800 } 00:23:37.800 } 00:23:37.800 ] 00:23:37.800 }, 00:23:37.800 { 00:23:37.800 "subsystem": "sock", 00:23:37.800 "config": [ 00:23:37.800 { 00:23:37.800 "method": "sock_set_default_impl", 00:23:37.800 "params": { 00:23:37.800 "impl_name": "posix" 00:23:37.800 } 00:23:37.800 }, 00:23:37.800 { 00:23:37.800 "method": "sock_impl_set_options", 00:23:37.800 "params": { 00:23:37.800 "impl_name": "ssl", 00:23:37.800 "recv_buf_size": 4096, 00:23:37.800 "send_buf_size": 4096, 00:23:37.800 "enable_recv_pipe": true, 00:23:37.800 "enable_quickack": false, 00:23:37.800 "enable_placement_id": 0, 00:23:37.801 "enable_zerocopy_send_server": true, 00:23:37.801 "enable_zerocopy_send_client": false, 00:23:37.801 "zerocopy_threshold": 0, 00:23:37.801 "tls_version": 0, 00:23:37.801 "enable_ktls": false 00:23:37.801 } 00:23:37.801 }, 00:23:37.801 { 00:23:37.801 "method": "sock_impl_set_options", 00:23:37.801 "params": { 
00:23:37.801 "impl_name": "posix", 00:23:37.801 "recv_buf_size": 2097152, 00:23:37.801 "send_buf_size": 2097152, 00:23:37.801 "enable_recv_pipe": true, 00:23:37.801 "enable_quickack": false, 00:23:37.801 "enable_placement_id": 0, 00:23:37.801 "enable_zerocopy_send_server": true, 00:23:37.801 "enable_zerocopy_send_client": false, 00:23:37.801 "zerocopy_threshold": 0, 00:23:37.801 "tls_version": 0, 00:23:37.801 "enable_ktls": false 00:23:37.801 } 00:23:37.801 } 00:23:37.801 ] 00:23:37.801 }, 00:23:37.801 { 00:23:37.801 "subsystem": "vmd", 00:23:37.801 "config": [] 00:23:37.801 }, 00:23:37.801 { 00:23:37.801 "subsystem": "accel", 00:23:37.801 "config": [ 00:23:37.801 { 00:23:37.801 "method": "accel_set_options", 00:23:37.801 "params": { 00:23:37.801 "small_cache_size": 128, 00:23:37.801 "large_cache_size": 16, 00:23:37.801 "task_count": 2048, 00:23:37.801 "sequence_count": 2048, 00:23:37.801 "buf_count": 2048 00:23:37.801 } 00:23:37.801 } 00:23:37.801 ] 00:23:37.801 }, 00:23:37.801 { 00:23:37.801 "subsystem": "bdev", 00:23:37.801 "config": [ 00:23:37.801 { 00:23:37.801 "method": "bdev_set_options", 00:23:37.801 "params": { 00:23:37.801 "bdev_io_pool_size": 65535, 00:23:37.801 "bdev_io_cache_size": 256, 00:23:37.801 "bdev_auto_examine": true, 00:23:37.801 "iobuf_small_cache_size": 128, 00:23:37.801 "iobuf_large_cache_size": 16 00:23:37.801 } 00:23:37.801 }, 00:23:37.801 { 00:23:37.801 "method": "bdev_raid_set_options", 00:23:37.801 "params": { 00:23:37.801 "process_window_size_kb": 1024, 00:23:37.801 "process_max_bandwidth_mb_sec": 0 00:23:37.801 } 00:23:37.801 }, 00:23:37.801 { 00:23:37.801 "method": "bdev_iscsi_set_options", 00:23:37.801 "params": { 00:23:37.801 "timeout_sec": 30 00:23:37.801 } 00:23:37.801 }, 00:23:37.801 { 00:23:37.801 "method": "bdev_nvme_set_options", 00:23:37.801 "params": { 00:23:37.801 "action_on_timeout": "none", 00:23:37.801 "timeout_us": 0, 00:23:37.801 "timeout_admin_us": 0, 00:23:37.801 "keep_alive_timeout_ms": 10000, 00:23:37.801 
"arbitration_burst": 0, 00:23:37.801 "low_priority_weight": 0, 00:23:37.801 "medium_priority_weight": 0, 00:23:37.801 "high_priority_weight": 0, 00:23:37.801 "nvme_adminq_poll_period_us": 10000, 00:23:37.801 "nvme_ioq_poll_period_us": 0, 00:23:37.801 "io_queue_requests": 512, 00:23:37.801 "delay_cmd_submit": true, 00:23:37.801 "transport_retry_count": 4, 00:23:37.801 "bdev_retry_count": 3, 00:23:37.801 "transport_ack_timeout": 0, 00:23:37.801 "ctrlr_loss_timeout_sec": 0, 00:23:37.801 "reconnect_delay_sec": 0, 00:23:37.801 "fast_io_fail_timeout_sec": 0, 00:23:37.801 "disable_auto_failback": false, 00:23:37.801 "generate_uuids": false, 00:23:37.801 "transport_tos": 0, 00:23:37.801 "nvme_error_stat": false, 00:23:37.801 "rdma_srq_size": 0, 00:23:37.801 "io_path_stat": false, 00:23:37.801 "allow_accel_sequence": false, 00:23:37.801 "rdma_max_cq_size": 0, 00:23:37.801 "rdma_cm_event_timeout_ms": 0, 00:23:37.801 "dhchap_digests": [ 00:23:37.801 "sha256", 00:23:37.801 "sha384", 00:23:37.801 "sha512" 00:23:37.801 ], 00:23:37.801 "dhchap_dhgroups": [ 00:23:37.801 "null", 00:23:37.801 "ffdhe2048", 00:23:37.801 "ffdhe3072", 00:23:37.801 "ffdhe4096", 00:23:37.801 "ffdhe6144", 00:23:37.801 "ffdhe8192" 00:23:37.801 ] 00:23:37.801 } 00:23:37.801 }, 00:23:37.801 { 00:23:37.801 "method": "bdev_nvme_attach_controller", 00:23:37.801 "params": { 00:23:37.801 "name": "nvme0", 00:23:37.801 "trtype": "TCP", 00:23:37.801 "adrfam": "IPv4", 00:23:37.801 "traddr": "10.0.0.2", 00:23:37.801 "trsvcid": "4420", 00:23:37.801 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.801 "prchk_reftag": false, 00:23:37.801 "prchk_guard": false, 00:23:37.801 "ctrlr_loss_timeout_sec": 0, 00:23:37.801 "reconnect_delay_sec": 0, 00:23:37.801 "fast_io_fail_timeout_sec": 0, 00:23:37.801 "psk": "key0", 00:23:37.801 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:37.801 "hdgst": false, 00:23:37.801 "ddgst": false, 00:23:37.801 "multipath": "multipath" 00:23:37.801 } 00:23:37.801 }, 00:23:37.801 { 00:23:37.801 
"method": "bdev_nvme_set_hotplug", 00:23:37.801 "params": { 00:23:37.801 "period_us": 100000, 00:23:37.801 "enable": false 00:23:37.801 } 00:23:37.801 }, 00:23:37.801 { 00:23:37.801 "method": "bdev_enable_histogram", 00:23:37.801 "params": { 00:23:37.801 "name": "nvme0n1", 00:23:37.801 "enable": true 00:23:37.801 } 00:23:37.801 }, 00:23:37.801 { 00:23:37.801 "method": "bdev_wait_for_examine" 00:23:37.801 } 00:23:37.801 ] 00:23:37.801 }, 00:23:37.801 { 00:23:37.801 "subsystem": "nbd", 00:23:37.801 "config": [] 00:23:37.801 } 00:23:37.801 ] 00:23:37.801 }' 00:23:37.801 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:37.801 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.801 [2024-12-05 12:07:11.933069] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:23:37.801 [2024-12-05 12:07:11.933118] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109901 ] 00:23:38.059 [2024-12-05 12:07:12.005921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.059 [2024-12-05 12:07:12.046132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:38.059 [2024-12-05 12:07:12.200208] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:38.623 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:38.623 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:38.623 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:38.623 12:07:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # jq -r '.[].name' 00:23:38.880 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.881 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:38.881 Running I/O for 1 seconds... 00:23:40.258 5325.00 IOPS, 20.80 MiB/s 00:23:40.258 Latency(us) 00:23:40.258 [2024-12-05T11:07:14.454Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.258 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:40.258 Verification LBA range: start 0x0 length 0x2000 00:23:40.258 nvme0n1 : 1.02 5351.27 20.90 0.00 0.00 23717.47 7770.70 24217.11 00:23:40.258 [2024-12-05T11:07:14.454Z] =================================================================================================================== 00:23:40.258 [2024-12-05T11:07:14.454Z] Total : 5351.27 20.90 0.00 0.00 23717.47 7770.70 24217.11 00:23:40.258 { 00:23:40.258 "results": [ 00:23:40.258 { 00:23:40.258 "job": "nvme0n1", 00:23:40.258 "core_mask": "0x2", 00:23:40.258 "workload": "verify", 00:23:40.258 "status": "finished", 00:23:40.258 "verify_range": { 00:23:40.258 "start": 0, 00:23:40.258 "length": 8192 00:23:40.258 }, 00:23:40.258 "queue_depth": 128, 00:23:40.258 "io_size": 4096, 00:23:40.258 "runtime": 1.01901, 00:23:40.258 "iops": 5351.272313323716, 00:23:40.258 "mibps": 20.903407473920765, 00:23:40.258 "io_failed": 0, 00:23:40.258 "io_timeout": 0, 00:23:40.258 "avg_latency_us": 23717.4728486722, 00:23:40.258 "min_latency_us": 7770.697142857143, 00:23:40.258 "max_latency_us": 24217.11238095238 00:23:40.258 } 00:23:40.258 ], 00:23:40.258 "core_count": 1 00:23:40.258 } 00:23:40.258 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # trap - SIGINT SIGTERM EXIT 00:23:40.258 12:07:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # cleanup 00:23:40.258 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:40.258 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:23:40.258 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:23:40.258 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:23:40.258 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:40.258 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:23:40.258 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:23:40.258 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:23:40.258 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:40.258 nvmf_trace.0 00:23:40.258 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:23:40.258 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 109901 00:23:40.258 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 109901 ']' 00:23:40.258 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 109901 00:23:40.258 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:40.258 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:40.258 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 109901 00:23:40.258 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:40.258 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:40.258 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109901' 00:23:40.258 killing process with pid 109901 00:23:40.258 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 109901 00:23:40.258 Received shutdown signal, test time was about 1.000000 seconds 00:23:40.258 00:23:40.258 Latency(us) 00:23:40.258 [2024-12-05T11:07:14.454Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.258 [2024-12-05T11:07:14.454Z] =================================================================================================================== 00:23:40.258 [2024-12-05T11:07:14.454Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:40.258 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 109901 00:23:40.258 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:40.258 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # nvmfcleanup 00:23:40.258 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@99 -- # sync 00:23:40.258 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:23:40.258 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@102 -- # set +e 00:23:40.258 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@103 -- # for i in {1..20} 00:23:40.258 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:23:40.258 rmmod nvme_tcp 00:23:40.258 rmmod nvme_fabrics 00:23:40.258 rmmod nvme_keyring 00:23:40.517 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@105 -- # modprobe -v -r 
nvme-fabrics 00:23:40.517 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@106 -- # set -e 00:23:40.517 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@107 -- # return 0 00:23:40.517 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # '[' -n 109865 ']' 00:23:40.517 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@337 -- # killprocess 109865 00:23:40.517 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 109865 ']' 00:23:40.517 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 109865 00:23:40.517 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:40.518 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:40.518 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109865 00:23:40.518 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:40.518 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:40.518 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109865' 00:23:40.518 killing process with pid 109865 00:23:40.518 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 109865 00:23:40.518 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 109865 00:23:40.518 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:23:40.518 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # nvmf_fini 00:23:40.518 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@264 -- # local dev 00:23:40.518 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@267 -- # 
remove_target_ns 00:23:40.518 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:23:40.518 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:23:40.518 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_target_ns 00:23:43.050 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@268 -- # delete_main_bridge 00:23:43.050 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:23:43.050 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@130 -- # return 0 00:23:43.050 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:23:43.050 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:23:43.050 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:23:43.050 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:23:43.050 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:23:43.050 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:23:43.050 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:23:43.050 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:23:43.050 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@279 -- # flush_ip 
cvl_0_1 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@41 -- # _dev=0 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@41 -- # dev_map=() 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@284 -- # iptr 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@542 -- # iptables-save 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@542 -- # iptables-restore 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.vT3c8gxpLj /tmp/tmp.WEuLEE9iXB /tmp/tmp.paq7Z4SMlu 00:23:43.051 00:23:43.051 real 1m19.980s 00:23:43.051 user 2m3.108s 00:23:43.051 sys 0m30.158s 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:43.051 ************************************ 00:23:43.051 END TEST nvmf_tls 00:23:43.051 ************************************ 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 
3 -le 1 ']' 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:43.051 ************************************ 00:23:43.051 START TEST nvmf_fips 00:23:43.051 ************************************ 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:43.051 * Looking for test storage... 00:23:43.051 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:23:43.051 12:07:16 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:43.051 12:07:16 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:43.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.051 --rc genhtml_branch_coverage=1 00:23:43.051 --rc genhtml_function_coverage=1 00:23:43.051 --rc genhtml_legend=1 00:23:43.051 --rc geninfo_all_blocks=1 00:23:43.051 --rc geninfo_unexecuted_blocks=1 00:23:43.051 00:23:43.051 ' 00:23:43.051 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:43.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.051 --rc genhtml_branch_coverage=1 00:23:43.051 --rc genhtml_function_coverage=1 00:23:43.051 --rc genhtml_legend=1 00:23:43.051 --rc geninfo_all_blocks=1 00:23:43.051 --rc geninfo_unexecuted_blocks=1 00:23:43.051 00:23:43.051 ' 00:23:43.051 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:43.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.051 --rc genhtml_branch_coverage=1 00:23:43.051 --rc genhtml_function_coverage=1 00:23:43.051 --rc genhtml_legend=1 00:23:43.051 --rc geninfo_all_blocks=1 00:23:43.051 --rc geninfo_unexecuted_blocks=1 00:23:43.051 00:23:43.051 ' 00:23:43.051 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:43.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.051 --rc genhtml_branch_coverage=1 00:23:43.051 --rc genhtml_function_coverage=1 00:23:43.051 --rc genhtml_legend=1 00:23:43.051 --rc geninfo_all_blocks=1 00:23:43.051 --rc geninfo_unexecuted_blocks=1 00:23:43.051 00:23:43.051 ' 00:23:43.051 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:43.051 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:43.051 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:43.051 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:43.051 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@50 -- # : 0 00:23:43.052 12:07:17 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:23:43.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@54 -- # have_pci_nics=0 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:43.052 12:07:17 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:43.052 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:23:43.053 Error setting digest 00:23:43.053 403275D3E67F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:23:43.053 403275D3E67F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # prepare_net_devs 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # local -g is_hw=no 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # remove_target_ns 00:23:43.053 12:07:17 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_target_ns 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # xtrace_disable 00:23:43.053 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@131 -- # pci_devs=() 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@131 -- # local -a pci_devs 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@132 -- # pci_net_devs=() 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@133 -- # pci_drivers=() 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@133 -- # local -A pci_drivers 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@135 -- # net_devs=() 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@135 -- # local -ga net_devs 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@136 -- # e810=() 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@136 -- # local -ga e810 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@137 -- # x722=() 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@137 -- # local -ga x722 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@138 -- # mlx=() 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@138 -- # local -ga mlx 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 
00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:49.630 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:49.630 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # [[ up == up ]] 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:49.630 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:49.631 Found net devices under 0000:86:00.0: cvl_0_0 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 
00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # [[ up == up ]] 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:49.631 Found net devices under 0000:86:00.1: cvl_0_1 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # is_hw=yes 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@257 -- # create_target_ns 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@27 -- # local -gA dev_map 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@28 -- # local -g _dev 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@44 -- # ips=() 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:23:49.631 12:07:22 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@11 -- # local val=167772161 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:23:49.631 10.0.0.1 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@11 -- # local val=167772162 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@208 -- 
# eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:23:49.631 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:23:49.631 10.0.0.2 00:23:49.631 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:23:49.631 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:23:49.631 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:23:49.631 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:23:49.631 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@38 -- # ping_ips 1 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/setup.sh@166 -- # [[ -n '' ]] 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@107 -- # local dev=initiator0 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:23:49.632 PING 10.0.0.1 (10.0.0.1) 56(84) bytes 
of data. 00:23:49.632 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.385 ms 00:23:49.632 00:23:49.632 --- 10.0.0.1 ping statistics --- 00:23:49.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:49.632 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # get_net_dev target0 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@107 -- # local dev=target0 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/setup.sh@172 -- # ip=10.0.0.2 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:23:49.632 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:49.632 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:23:49.632 00:23:49.632 --- 10.0.0.2 ping statistics --- 00:23:49.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:49.632 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # (( pair++ )) 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # return 0 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:23:49.632 12:07:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:23:49.632 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@107 -- # local dev=initiator0 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@107 -- # local dev=initiator1 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # return 1 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # dev= 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@169 -- # return 0 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # 
get_net_dev target0 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@107 -- # local dev=target0 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:49.633 12:07:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # get_net_dev target1 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@107 -- # local dev=target1 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # return 1 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # dev= 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@169 -- # return 0 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # nvmfpid=113939 00:23:49.633 12:07:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # waitforlisten 113939 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 113939 ']' 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:49.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:49.633 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:49.633 [2024-12-05 12:07:23.330178] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:23:49.633 [2024-12-05 12:07:23.330229] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:49.633 [2024-12-05 12:07:23.410250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.633 [2024-12-05 12:07:23.452288] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:49.633 [2024-12-05 12:07:23.452322] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
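The `waitforlisten 113939` call above blocks until the freshly started nvmf_tgt accepts connections on its RPC socket `/var/tmp/spdk.sock`. A hedged Python sketch of that polling idea (the in-process listener, socket path, and timeout values are illustrative, not SPDK's actual implementation):

```python
import os
import socket
import tempfile
import time

def wait_for_listen(sock_path, timeout=5.0, interval=0.05):
    """Poll until a UNIX-domain socket at sock_path accepts a connection."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(sock_path)
            return True
        except OSError:
            time.sleep(interval)
        finally:
            s.close()
    return False

# Demo: stand up a listener in-process, then wait on it.
path = os.path.join(tempfile.mkdtemp(), "spdk.sock")
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(path)
server.listen(8)
print(wait_for_listen(path))  # True once the listener is up
```

SPDK's real `waitforlisten` additionally checks that the PID is still alive between retries (the `max_retries=100` local visible in the log), so a crashed target fails fast instead of burning the whole timeout.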
00:23:49.634 [2024-12-05 12:07:23.452329] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:49.634 [2024-12-05 12:07:23.452335] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:49.634 [2024-12-05 12:07:23.452341] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:49.634 [2024-12-05 12:07:23.452905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:50.202 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:50.202 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:23:50.202 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:23:50.202 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:50.202 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:50.202 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:50.202 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:23:50.202 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:50.202 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:23:50.202 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.REe 00:23:50.202 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:50.202 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.REe 00:23:50.202 12:07:24 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.REe 00:23:50.202 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.REe 00:23:50.202 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:50.202 [2024-12-05 12:07:24.355267] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:50.202 [2024-12-05 12:07:24.371272] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:50.202 [2024-12-05 12:07:24.371477] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:50.461 malloc0 00:23:50.461 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:50.461 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=114188 00:23:50.461 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:50.461 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 114188 /var/tmp/bdevperf.sock 00:23:50.461 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 114188 ']' 00:23:50.461 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:50.461 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:50.461 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:50.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
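The key written to `/tmp/spdk-psk.REe` above uses the NVMe/TCP TLS PSK interchange format: `NVMeTLSkey-1:<hh>:<base64>:`, where (per the NVMe TCP transport spec, as I understand it) hash identifier `01` pairs a 32-byte configured PSK with a trailing 4-byte CRC-32 inside the base64 payload. A small parser doing structural checks only (I deliberately don't verify the CRC here, since its byte order is a spec detail not evident from this log):

```python
import base64

def parse_tls_psk(key):
    """Split an NVMe/TCP PSK interchange string into hash id, PSK, and CRC.

    Expected shape: 'NVMeTLSkey-1:<hh>:<base64 of PSK || CRC-32>:'
    """
    prefix, hash_id, payload, trailer = key.split(":")
    if prefix != "NVMeTLSkey-1" or trailer != "":
        raise ValueError("not a PSK interchange string")
    raw = base64.b64decode(payload)
    return hash_id, raw[:-4], raw[-4:]  # PSK bytes, CRC-32 bytes

# The exact key used by this test run (fips.sh@137).
key = "NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:"
hash_id, psk, crc = parse_tls_psk(key)
print(hash_id, len(psk), len(crc))  # 01 32 4
```

The `chmod 0600` in the log matters: SPDK's keyring rejects key files readable by group or others.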
00:23:50.461 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:50.461 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:50.461 [2024-12-05 12:07:24.499500] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:23:50.461 [2024-12-05 12:07:24.499549] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114188 ] 00:23:50.461 [2024-12-05 12:07:24.572165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.461 [2024-12-05 12:07:24.612962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:51.398 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:51.398 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:23:51.398 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.REe 00:23:51.398 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:51.657 [2024-12-05 12:07:25.673685] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:51.657 TLSTESTn1 00:23:51.657 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:51.916 Running I/O 
for 10 seconds... 00:23:53.791 5299.00 IOPS, 20.70 MiB/s [2024-12-05T11:07:28.924Z] 5433.50 IOPS, 21.22 MiB/s [2024-12-05T11:07:29.885Z] 5478.67 IOPS, 21.40 MiB/s [2024-12-05T11:07:30.956Z] 5467.50 IOPS, 21.36 MiB/s [2024-12-05T11:07:31.890Z] 5445.00 IOPS, 21.27 MiB/s [2024-12-05T11:07:33.264Z] 5449.83 IOPS, 21.29 MiB/s [2024-12-05T11:07:34.200Z] 5448.00 IOPS, 21.28 MiB/s [2024-12-05T11:07:35.136Z] 5456.12 IOPS, 21.31 MiB/s [2024-12-05T11:07:36.070Z] 5473.67 IOPS, 21.38 MiB/s [2024-12-05T11:07:36.070Z] 5487.40 IOPS, 21.44 MiB/s 00:24:01.874 Latency(us) 00:24:01.874 [2024-12-05T11:07:36.070Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.874 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:01.874 Verification LBA range: start 0x0 length 0x2000 00:24:01.874 TLSTESTn1 : 10.01 5492.79 21.46 0.00 0.00 23269.09 5586.16 21845.33 00:24:01.874 [2024-12-05T11:07:36.070Z] =================================================================================================================== 00:24:01.874 [2024-12-05T11:07:36.070Z] Total : 5492.79 21.46 0.00 0.00 23269.09 5586.16 21845.33 00:24:01.874 { 00:24:01.874 "results": [ 00:24:01.874 { 00:24:01.874 "job": "TLSTESTn1", 00:24:01.874 "core_mask": "0x4", 00:24:01.874 "workload": "verify", 00:24:01.874 "status": "finished", 00:24:01.874 "verify_range": { 00:24:01.874 "start": 0, 00:24:01.874 "length": 8192 00:24:01.874 }, 00:24:01.874 "queue_depth": 128, 00:24:01.874 "io_size": 4096, 00:24:01.874 "runtime": 10.013118, 00:24:01.874 "iops": 5492.794552106547, 00:24:01.874 "mibps": 21.456228719166198, 00:24:01.874 "io_failed": 0, 00:24:01.874 "io_timeout": 0, 00:24:01.874 "avg_latency_us": 23269.087613229436, 00:24:01.874 "min_latency_us": 5586.1638095238095, 00:24:01.874 "max_latency_us": 21845.333333333332 00:24:01.874 } 00:24:01.874 ], 00:24:01.874 "core_count": 1 00:24:01.874 } 00:24:01.874 12:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # 
cleanup 00:24:01.874 12:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:01.874 12:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:24:01.874 12:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:24:01.874 12:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:01.874 12:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:01.874 12:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:01.874 12:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:01.874 12:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:01.874 12:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:01.874 nvmf_trace.0 00:24:01.875 12:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:24:01.875 12:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 114188 00:24:01.875 12:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 114188 ']' 00:24:01.875 12:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 114188 00:24:01.875 12:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:01.875 12:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:01.875 12:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 114188 00:24:01.875 12:07:36 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:01.875 12:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:01.875 12:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 114188' 00:24:01.875 killing process with pid 114188 00:24:01.875 12:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 114188 00:24:01.875 Received shutdown signal, test time was about 10.000000 seconds 00:24:01.875 00:24:01.875 Latency(us) 00:24:01.875 [2024-12-05T11:07:36.071Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.875 [2024-12-05T11:07:36.071Z] =================================================================================================================== 00:24:01.875 [2024-12-05T11:07:36.071Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:01.875 12:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 114188 00:24:02.133 12:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:02.133 12:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # nvmfcleanup 00:24:02.133 12:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@99 -- # sync 00:24:02.133 12:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:24:02.133 12:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@102 -- # set +e 00:24:02.133 12:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@103 -- # for i in {1..20} 00:24:02.133 12:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:24:02.133 rmmod nvme_tcp 00:24:02.133 rmmod nvme_fabrics 00:24:02.133 rmmod nvme_keyring 00:24:02.133 12:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:24:02.133 
12:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@106 -- # set -e 00:24:02.133 12:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@107 -- # return 0 00:24:02.133 12:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # '[' -n 113939 ']' 00:24:02.133 12:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@337 -- # killprocess 113939 00:24:02.133 12:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 113939 ']' 00:24:02.133 12:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 113939 00:24:02.133 12:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:02.133 12:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:02.133 12:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 113939 00:24:02.392 12:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:02.392 12:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:02.392 12:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 113939' 00:24:02.392 killing process with pid 113939 00:24:02.392 12:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 113939 00:24:02.392 12:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 113939 00:24:02.392 12:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:24:02.392 12:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # nvmf_fini 00:24:02.392 12:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@264 -- # local dev 00:24:02.392 12:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@267 -- # remove_target_ns 
00:24:02.392 12:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:02.392 12:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:02.392 12:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:04.937 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@268 -- # delete_main_bridge 00:24:04.937 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:24:04.937 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@130 -- # return 0 00:24:04.937 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:24:04.937 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:24:04.937 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:24:04.937 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:24:04.937 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:24:04.937 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@279 -- # 
flush_ip cvl_0_1 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@41 -- # _dev=0 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@41 -- # dev_map=() 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@284 -- # iptr 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@542 -- # iptables-save 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@542 -- # iptables-restore 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.REe 00:24:04.938 00:24:04.938 real 0m21.746s 00:24:04.938 user 0m23.532s 00:24:04.938 sys 0m9.581s 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:04.938 ************************************ 00:24:04.938 END TEST nvmf_fips 00:24:04.938 ************************************ 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 
-- # '[' 3 -le 1 ']' 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:04.938 ************************************ 00:24:04.938 START TEST nvmf_control_msg_list 00:24:04.938 ************************************ 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:04.938 * Looking for test storage... 00:24:04.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@337 -- # read -ra ver2 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 
00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:04.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.938 --rc genhtml_branch_coverage=1 00:24:04.938 --rc genhtml_function_coverage=1 00:24:04.938 --rc genhtml_legend=1 00:24:04.938 --rc geninfo_all_blocks=1 00:24:04.938 --rc geninfo_unexecuted_blocks=1 00:24:04.938 00:24:04.938 ' 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:04.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.938 --rc genhtml_branch_coverage=1 00:24:04.938 --rc genhtml_function_coverage=1 00:24:04.938 --rc genhtml_legend=1 00:24:04.938 --rc geninfo_all_blocks=1 00:24:04.938 --rc geninfo_unexecuted_blocks=1 00:24:04.938 00:24:04.938 ' 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:04.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.938 --rc genhtml_branch_coverage=1 00:24:04.938 --rc genhtml_function_coverage=1 00:24:04.938 --rc genhtml_legend=1 00:24:04.938 --rc geninfo_all_blocks=1 00:24:04.938 --rc geninfo_unexecuted_blocks=1 00:24:04.938 00:24:04.938 ' 00:24:04.938 12:07:38 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:04.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.938 --rc genhtml_branch_coverage=1 00:24:04.938 --rc genhtml_function_coverage=1 00:24:04.938 --rc genhtml_legend=1 00:24:04.938 --rc geninfo_all_blocks=1 00:24:04.938 --rc geninfo_unexecuted_blocks=1 00:24:04.938 00:24:04.938 ' 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:04.938 
12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:04.938 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:04.939 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.939 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.939 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.939 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:04.939 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.939 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:24:04.939 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:24:04.939 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:04.939 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:24:04.939 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@50 -- # : 0 00:24:04.939 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:24:04.939 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:24:04.939 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:24:04.939 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:04.939 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:04.939 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:24:04.939 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:24:04.939 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:24:04.939 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:24:04.939 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@54 -- # have_pci_nics=0 00:24:04.939 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:04.939 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:24:04.939 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:04.939 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@296 -- # prepare_net_devs 00:24:04.939 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # local -g is_hw=no 00:24:04.939 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@260 -- # remove_target_ns 00:24:04.939 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:04.939 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:04.939 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:04.939 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:24:04.939 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:24:04.939 12:07:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # xtrace_disable 00:24:04.939 12:07:38 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:11.510 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:11.510 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@131 -- # pci_devs=() 00:24:11.510 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@131 -- # local -a pci_devs 00:24:11.510 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@132 -- # pci_net_devs=() 00:24:11.510 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:24:11.510 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@133 -- # pci_drivers=() 00:24:11.510 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@133 -- # local -A pci_drivers 00:24:11.510 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@135 -- # net_devs=() 00:24:11.510 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@135 -- # local -ga net_devs 00:24:11.510 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@136 -- # e810=() 00:24:11.510 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@136 -- # local -ga e810 00:24:11.510 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@137 -- # x722=() 00:24:11.510 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@137 -- # local -ga x722 00:24:11.510 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@138 -- # mlx=() 00:24:11.510 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@138 -- # local -ga mlx 00:24:11.510 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:11.510 12:07:44 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:11.510 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:11.510 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:11.510 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:11.510 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:11.510 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:11.510 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:11.510 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:11.510 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:11.510 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:11.510 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:11.510 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:24:11.510 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:24:11.510 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:24:11.510 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # [[ e810 
== e810 ]] 00:24:11.510 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:24:11.510 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:24:11.510 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:24:11.510 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:11.510 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:11.510 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:24:11.510 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:24:11.510 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:11.510 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:11.510 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:24:11.510 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:24:11.510 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:11.510 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:11.510 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:24:11.510 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:11.511 
12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # [[ up == up ]] 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:11.511 Found net devices under 0000:86:00.0: cvl_0_0 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:11.511 12:07:44 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # [[ up == up ]] 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:11.511 Found net devices under 0000:86:00.1: cvl_0_1 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # is_hw=yes 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@257 -- # create_target_ns 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:24:11.511 12:07:44 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@27 -- # local -gA dev_map 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@28 -- # local -g _dev 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:24:11.511 12:07:44 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@44 -- # ips=() 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@11 -- # local val=167772161 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:24:11.511 10.0.0.1 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:24:11.511 12:07:44 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@11 -- # local val=167772162 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:24:11.511 10.0.0.2 00:24:11.511 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@215 -- # [[ -n '' ]] 
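The `set_ip` calls above go through `val_to_ip`, which unpacks a 32-bit pool value (`0x0a000001` = 167772161 = 10.0.0.1) into dotted-quad form with `printf`; the pool is then bumped by 2 per interface pair so initiator0/target0 get consecutive addresses. A minimal standalone sketch of that conversion (a reimplementation for illustration, not the exact SPDK `nvmf/setup.sh` helper):

```shell
# Unpack a 32-bit integer into a dotted-quad IPv4 address, as the
# val_to_ip step in the trace does (167772161 -> 10.0.0.1).
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) \
        $((  val        & 0xff ))
}

val_to_ip 167772161   # initiator0 address in the trace
val_to_ip 167772162   # target0 address in the trace
```

With the pool incremented by 2 per pair, a second pair would receive 10.0.0.3 and 10.0.0.4.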
00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:24:11.512 12:07:44 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@38 -- # ping_ips 1 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@107 -- # local dev=initiator0 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:24:11.512 
12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:24:11.512 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:11.512 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.392 ms 00:24:11.512 00:24:11.512 --- 10.0.0.1 ping statistics --- 00:24:11.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.512 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # get_net_dev target0 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@107 -- # local dev=target0 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:24:11.512 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:11.512 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:24:11.512 00:24:11.512 --- 10.0.0.2 ping statistics --- 00:24:11.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.512 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # (( pair++ )) 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@270 -- # return 0 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:24:11.512 12:07:44 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@107 -- # local dev=initiator0 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # 
get_net_dev initiator1 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@107 -- # local dev=initiator1 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # return 1 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # dev= 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@169 -- # return 0 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # get_net_dev target0 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@107 -- # local dev=target0 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:24:11.512 12:07:44 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # 
get_net_dev target1 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@107 -- # local dev=target1 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:24:11.512 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:24:11.513 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # return 1 00:24:11.513 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # dev= 00:24:11.513 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@169 -- # return 0 00:24:11.513 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:24:11.513 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:11.513 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:24:11.513 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:24:11.513 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:11.513 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:24:11.513 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:24:11.513 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:24:11.513 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:24:11.513 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:11.513 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # 
set +x 00:24:11.513 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # nvmfpid=119607 00:24:11.513 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@329 -- # waitforlisten 119607 00:24:11.513 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:11.513 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 119607 ']' 00:24:11.513 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:11.513 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:11.513 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:11.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:11.513 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:11.513 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:11.513 [2024-12-05 12:07:44.980147] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
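The startup above relies on the array-as-command-prefix pattern: `NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")` is prepended to `NVMF_APP`, so `nvmf_tgt` runs inside the `nvmf_ns_spdk` namespace. Since `ip netns exec` needs root, this sketch demonstrates the same pattern with a harmless hypothetical `env` prefix and an `echo` stand-in for the real binary:

```shell
# Array-as-command-prefix: build the wrapper and the app invocation as
# arrays, concatenate them, and execute the combined command.
ns_cmd=(env)                   # stands in for: ip netns exec nvmf_ns_spdk
app_cmd=(echo nvmf_tgt -i 0)   # stands in for the real nvmf_tgt invocation
full_cmd=("${ns_cmd[@]}" "${app_cmd[@]}")
"${full_cmd[@]}"               # runs: env echo nvmf_tgt -i 0
```

Keeping the prefix as an array (rather than a string) preserves word splitting and quoting when the namespace name contains unusual characters.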
00:24:11.513 [2024-12-05 12:07:44.980194] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:11.513 [2024-12-05 12:07:45.058275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.513 [2024-12-05 12:07:45.098656] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:11.513 [2024-12-05 12:07:45.098694] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:11.513 [2024-12-05 12:07:45.098702] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:11.513 [2024-12-05 12:07:45.098707] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:11.513 [2024-12-05 12:07:45.098713] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:11.513 [2024-12-05 12:07:45.099268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.513 12:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:11.513 12:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:24:11.513 12:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:24:11.513 12:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:11.513 12:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:11.513 12:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:11.513 12:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:11.513 12:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:11.513 12:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:24:11.513 12:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.513 12:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:11.513 [2024-12-05 12:07:45.235819] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:11.513 12:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.513 12:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:24:11.513 12:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.513 12:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:11.513 12:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.513 12:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:11.513 12:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.513 12:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:11.513 Malloc0 00:24:11.513 12:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.513 12:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:11.513 12:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.513 12:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:11.513 12:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.513 12:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:11.513 12:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.513 12:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:11.513 [2024-12-05 12:07:45.276187] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:11.513 12:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.513 12:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=119820 00:24:11.513 12:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:11.513 12:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=119821 00:24:11.513 12:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:11.513 12:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=119822 00:24:11.513 12:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 119820 00:24:11.513 12:07:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:11.513 [2024-12-05 12:07:45.374944] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:24:11.513 [2024-12-05 12:07:45.375120] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:11.513 [2024-12-05 12:07:45.375280] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:12.447 Initializing NVMe Controllers 00:24:12.447 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:12.447 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:24:12.447 Initialization complete. Launching workers. 00:24:12.447 ======================================================== 00:24:12.447 Latency(us) 00:24:12.447 Device Information : IOPS MiB/s Average min max 00:24:12.447 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40934.68 40749.23 41923.39 00:24:12.447 ======================================================== 00:24:12.447 Total : 25.00 0.10 40934.68 40749.23 41923.39 00:24:12.447 00:24:12.447 Initializing NVMe Controllers 00:24:12.447 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:12.447 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:24:12.447 Initialization complete. Launching workers. 
00:24:12.447 ======================================================== 00:24:12.447 Latency(us) 00:24:12.447 Device Information : IOPS MiB/s Average min max 00:24:12.447 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40969.70 40612.75 41912.68 00:24:12.447 ======================================================== 00:24:12.447 Total : 25.00 0.10 40969.70 40612.75 41912.68 00:24:12.447 00:24:12.447 Initializing NVMe Controllers 00:24:12.447 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:12.447 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:24:12.447 Initialization complete. Launching workers. 00:24:12.447 ======================================================== 00:24:12.447 Latency(us) 00:24:12.447 Device Information : IOPS MiB/s Average min max 00:24:12.447 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40883.62 40593.65 40996.74 00:24:12.447 ======================================================== 00:24:12.447 Total : 25.00 0.10 40883.62 40593.65 40996.74 00:24:12.447 00:24:12.447 12:07:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 119821 00:24:12.447 12:07:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 119822 00:24:12.447 12:07:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:12.447 12:07:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:24:12.447 12:07:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@335 -- # nvmfcleanup 00:24:12.447 12:07:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@99 -- # sync 00:24:12.447 12:07:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:24:12.447 12:07:46 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@102 -- # set +e 00:24:12.447 12:07:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@103 -- # for i in {1..20} 00:24:12.447 12:07:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:24:12.447 rmmod nvme_tcp 00:24:12.447 rmmod nvme_fabrics 00:24:12.447 rmmod nvme_keyring 00:24:12.447 12:07:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:24:12.707 12:07:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@106 -- # set -e 00:24:12.707 12:07:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@107 -- # return 0 00:24:12.707 12:07:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # '[' -n 119607 ']' 00:24:12.707 12:07:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@337 -- # killprocess 119607 00:24:12.707 12:07:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 119607 ']' 00:24:12.707 12:07:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 119607 00:24:12.707 12:07:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:24:12.707 12:07:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:12.707 12:07:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 119607 00:24:12.707 12:07:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:12.707 12:07:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:12.707 12:07:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 119607' 00:24:12.707 killing process with pid 119607 00:24:12.707 12:07:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 119607 00:24:12.707 12:07:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 119607 00:24:12.707 12:07:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:24:12.707 12:07:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@342 -- # nvmf_fini 00:24:12.707 12:07:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@264 -- # local dev 00:24:12.707 12:07:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@267 -- # remove_target_ns 00:24:12.707 12:07:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:12.707 12:07:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:12.707 12:07:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:15.242 12:07:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@268 -- # delete_main_bridge 00:24:15.242 12:07:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:24:15.242 12:07:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@130 -- # return 0 00:24:15.242 12:07:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:24:15.242 12:07:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:24:15.242 12:07:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:24:15.242 12:07:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:24:15.242 12:07:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:24:15.242 12:07:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:24:15.242 12:07:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:24:15.242 12:07:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:24:15.242 12:07:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:24:15.242 12:07:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:24:15.242 12:07:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:24:15.242 12:07:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:24:15.242 12:07:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:24:15.242 12:07:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:24:15.242 12:07:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:24:15.242 12:07:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:24:15.242 12:07:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:24:15.242 12:07:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@41 -- # _dev=0 00:24:15.242 12:07:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@41 -- # dev_map=() 00:24:15.242 12:07:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@284 -- # iptr 00:24:15.242 12:07:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@542 -- # iptables-save 00:24:15.242 12:07:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:24:15.242 12:07:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@542 -- # iptables-restore 00:24:15.242 00:24:15.242 real 0m10.292s 00:24:15.242 user 0m7.004s 00:24:15.242 sys 0m5.397s 00:24:15.242 12:07:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:15.242 12:07:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:15.242 ************************************ 00:24:15.242 END TEST nvmf_control_msg_list 00:24:15.242 ************************************ 00:24:15.242 12:07:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:15.242 12:07:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:15.242 12:07:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:15.242 12:07:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:15.242 ************************************ 00:24:15.242 START TEST nvmf_wait_for_buf 00:24:15.242 ************************************ 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:15.242 * Looking for test storage... 
00:24:15.242 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:24:15.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.242 --rc genhtml_branch_coverage=1 00:24:15.242 --rc genhtml_function_coverage=1 00:24:15.242 --rc genhtml_legend=1 00:24:15.242 --rc geninfo_all_blocks=1 00:24:15.242 --rc geninfo_unexecuted_blocks=1 00:24:15.242 00:24:15.242 ' 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:15.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.242 --rc genhtml_branch_coverage=1 00:24:15.242 --rc genhtml_function_coverage=1 00:24:15.242 --rc genhtml_legend=1 00:24:15.242 --rc geninfo_all_blocks=1 00:24:15.242 --rc geninfo_unexecuted_blocks=1 00:24:15.242 00:24:15.242 ' 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:15.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.242 --rc genhtml_branch_coverage=1 00:24:15.242 --rc genhtml_function_coverage=1 00:24:15.242 --rc genhtml_legend=1 00:24:15.242 --rc geninfo_all_blocks=1 00:24:15.242 --rc geninfo_unexecuted_blocks=1 00:24:15.242 00:24:15.242 ' 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:15.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.242 --rc genhtml_branch_coverage=1 00:24:15.242 --rc genhtml_function_coverage=1 00:24:15.242 --rc genhtml_legend=1 00:24:15.242 --rc geninfo_all_blocks=1 00:24:15.242 --rc geninfo_unexecuted_blocks=1 00:24:15.242 00:24:15.242 ' 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf 
-- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:24:15.242 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:15.243 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:24:15.243 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf 
-- nvmf/common.sh@50 -- # : 0 00:24:15.243 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:24:15.243 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:24:15.243 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:24:15.243 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:15.243 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:15.243 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:24:15.243 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:24:15.243 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:24:15.243 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:24:15.243 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@54 -- # have_pci_nics=0 00:24:15.243 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:24:15.243 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:24:15.243 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:15.243 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@296 -- # prepare_net_devs 00:24:15.243 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # local -g is_hw=no 00:24:15.243 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@260 -- # remove_target_ns 00:24:15.243 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd 
_remove_target_ns 00:24:15.243 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:15.243 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:15.243 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:24:15.243 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:24:15.243 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # xtrace_disable 00:24:15.243 12:07:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@131 -- # pci_devs=() 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@131 -- # local -a pci_devs 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@132 -- # pci_net_devs=() 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@133 -- # pci_drivers=() 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@133 -- # local -A pci_drivers 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@135 -- # net_devs=() 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@135 -- # local -ga net_devs 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@136 -- # e810=() 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@136 -- # local -ga e810 00:24:21.816 
12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@137 -- # x722=() 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@137 -- # local -ga x722 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@138 -- # mlx=() 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@138 -- # local -ga mlx 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:21.816 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:21.816 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:21.816 12:07:54 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # [[ up == up ]] 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:21.816 Found net devices under 0000:86:00.0: cvl_0_0 00:24:21.816 12:07:54 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # [[ up == up ]] 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:21.816 Found net devices under 0000:86:00.1: cvl_0_1 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # is_hw=yes 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:24:21.816 12:07:54 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@257 -- # create_target_ns 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@27 -- # local -gA dev_map 00:24:21.816 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@28 -- # local -g _dev 
00:24:21.817 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:24:21.817 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:24:21.817 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:21.817 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:24:21.817 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@44 -- # ips=() 00:24:21.817 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:24:21.817 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:24:21.817 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:24:21.817 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:24:21.817 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:24:21.817 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:24:21.817 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:24:21.817 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:24:21.817 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:24:21.817 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:24:21.817 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:24:21.817 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:24:21.817 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:24:21.817 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:24:21.817 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:24:21.817 12:07:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@11 -- # local val=167772161 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:24:21.817 10.0.0.1 
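In the records above, setup.sh's `val_to_ip` helper turns the integer pool value 167772161 (0x0A000001) into the dotted-quad 10.0.0.1 via `printf '%u.%u.%u.%u'` before `ip addr add`. A minimal Python sketch of the same conversion (the function name is borrowed from the script for readability; this is an illustration, not the script itself):

```python
def val_to_ip(val: int) -> str:
    # Split a 32-bit integer into four octets, most significant first,
    # mirroring setup.sh's printf '%u.%u.%u.%u' expansion.
    return ".".join(str((val >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(val_to_ip(167772161))  # 0x0A000001 -> "10.0.0.1"
print(val_to_ip(167772162))  # 0x0A000002 -> "10.0.0.2"
```

This matches the two `set_ip` calls in the log: 167772161 for the initiator device cvl_0_0 and 167772162 for the target device cvl_0_1 inside the nvmf_ns_spdk namespace.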
00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@11 -- # local val=167772162 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:24:21.817 10.0.0.2 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 
00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 
00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@38 -- # ping_ips 1 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@107 -- # local dev=initiator0 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:24:21.817 12:07:55 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:24:21.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:21.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:24:21.817 00:24:21.817 --- 10.0.0.1 ping statistics --- 00:24:21.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.817 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # get_net_dev target0 00:24:21.817 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@107 -- # local dev=target0 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:24:21.818 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:21.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:24:21.818 00:24:21.818 --- 10.0.0.2 ping statistics --- 00:24:21.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.818 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # (( pair++ )) 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@270 -- # return 0 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:24:21.818 12:07:55 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@107 -- # local dev=initiator0 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:24:21.818 12:07:55 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@107 -- # local dev=initiator1 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # return 1 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # dev= 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@169 -- # return 0 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # get_net_dev target0 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@107 -- # local dev=target0 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 
00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # get_net_dev target1 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@107 -- # local dev=target1 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # return 1 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # dev= 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@169 -- # return 0 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
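The interface loop traced above (setup.sh@31-49) hands each initiator/target pair two consecutive addresses from a pool starting at 0x0a000001 (`ips=("$ip" $((++ip)))`), advancing the pool by two per pair and bounding the total at 255 host addresses. A hedged Python sketch of that allocation under those assumptions (names are illustrative, not from the script):

```python
def pair_ips(pool_base: int, pair: int) -> tuple:
    """Return (initiator_ip, target_ip) for a given pair index.

    Mirrors setup.sh: the pool advances by two addresses per pair;
    the initiator takes the even offset, the target the next address.
    """
    def dotted(v: int) -> str:
        # Same integer-to-dotted-quad expansion as val_to_ip.
        return ".".join(str((v >> s) & 0xFF) for s in (24, 16, 8, 0))

    base = pool_base + pair * 2
    return dotted(base), dotted(base + 1)

print(pair_ips(0x0A000001, 0))  # ('10.0.0.1', '10.0.0.2')
print(pair_ips(0x0A000001, 1))  # ('10.0.0.3', '10.0.0.4')
```

With a single pair (`setup_interfaces 1 phy`, as in this run) only pair 0 is allocated, which is why the log assigns exactly 10.0.0.1 and 10.0.0.2 and leaves NVMF_SECOND_INITIATOR_IP and NVMF_SECOND_TARGET_IP empty.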
00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # nvmfpid=123605 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@329 -- # waitforlisten 123605 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 123605 ']' 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:21.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:21.818 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:21.818 [2024-12-05 12:07:55.373020] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:24:21.819 [2024-12-05 12:07:55.373067] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:21.819 [2024-12-05 12:07:55.450761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.819 [2024-12-05 12:07:55.490660] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:21.819 [2024-12-05 12:07:55.490694] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:21.819 [2024-12-05 12:07:55.490701] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:21.819 [2024-12-05 12:07:55.490707] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:21.819 [2024-12-05 12:07:55.490713] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:21.819 [2024-12-05 12:07:55.491248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.819 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:21.819 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:24:21.819 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:24:21.819 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:21.819 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:21.819 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:21.819 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:21.819 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:21.819 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:24:21.819 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.819 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:21.819 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.819 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:24:21.819 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 
00:24:21.819 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:21.819 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.819 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:24:21.819 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.819 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:21.819 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.819 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:21.819 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.819 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:21.819 Malloc0 00:24:21.819 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.819 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:24:21.819 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.819 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:21.819 [2024-12-05 12:07:55.668535] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:21.819 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.819 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem 
nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:21.819 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.819 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:21.819 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.819 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:21.819 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.819 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:21.819 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.819 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:21.819 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.819 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:21.819 [2024-12-05 12:07:55.696733] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:21.819 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.819 12:07:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:21.819 [2024-12-05 12:07:55.782453] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery 
subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:23.195 Initializing NVMe Controllers 00:24:23.195 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:23.195 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:24:23.195 Initialization complete. Launching workers. 00:24:23.195 ======================================================== 00:24:23.195 Latency(us) 00:24:23.195 Device Information : IOPS MiB/s Average min max 00:24:23.195 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 35.00 4.38 117450.92 23975.74 191533.90 00:24:23.195 ======================================================== 00:24:23.195 Total : 35.00 4.38 117450.92 23975.74 191533.90 00:24:23.195 00:24:23.195 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:24:23.195 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:24:23.195 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.195 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:23.195 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.195 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=534 00:24:23.195 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 534 -eq 0 ]] 00:24:23.195 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:23.195 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # 
nvmftestfini 00:24:23.196 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@335 -- # nvmfcleanup 00:24:23.196 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@99 -- # sync 00:24:23.196 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:24:23.196 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@102 -- # set +e 00:24:23.196 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@103 -- # for i in {1..20} 00:24:23.196 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:24:23.196 rmmod nvme_tcp 00:24:23.196 rmmod nvme_fabrics 00:24:23.196 rmmod nvme_keyring 00:24:23.196 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:24:23.196 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@106 -- # set -e 00:24:23.196 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@107 -- # return 0 00:24:23.196 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # '[' -n 123605 ']' 00:24:23.196 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@337 -- # killprocess 123605 00:24:23.196 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 123605 ']' 00:24:23.196 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 123605 00:24:23.196 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:24:23.196 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:23.196 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 123605 00:24:23.455 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:23.455 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:23.455 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 123605' 00:24:23.455 killing process with pid 123605 00:24:23.455 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 123605 00:24:23.455 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 123605 00:24:23.455 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:24:23.455 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@342 -- # nvmf_fini 00:24:23.455 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@264 -- # local dev 00:24:23.455 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@267 -- # remove_target_ns 00:24:23.455 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:23.455 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:23.455 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:25.992 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@268 -- # delete_main_bridge 00:24:25.992 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:24:25.992 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@130 -- # return 0 00:24:25.992 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:24:25.992 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@271 -- # 
[[ -e /sys/class/net/cvl_0_0/address ]] 00:24:25.992 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:24:25.992 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:24:25.992 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:24:25.992 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:24:25.992 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:24:25.992 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:24:25.992 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:24:25.992 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:24:25.992 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:24:25.992 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:24:25.992 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:24:25.992 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:24:25.992 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:24:25.992 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:24:25.992 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:24:25.992 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@41 -- # _dev=0 00:24:25.992 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@41 -- # dev_map=() 00:24:25.992 
12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@284 -- # iptr 00:24:25.992 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@542 -- # iptables-save 00:24:25.992 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:24:25.992 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@542 -- # iptables-restore 00:24:25.992 00:24:25.992 real 0m10.594s 00:24:25.992 user 0m4.115s 00:24:25.992 sys 0m4.920s 00:24:25.992 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:25.992 12:07:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:25.992 ************************************ 00:24:25.992 END TEST nvmf_wait_for_buf 00:24:25.992 ************************************ 00:24:25.992 12:07:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:24:25.992 12:07:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:24:25.992 12:07:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:24:25.992 12:07:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:24:25.992 12:07:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@125 -- # xtrace_disable 00:24:25.992 12:07:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@131 -- # pci_devs=() 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@131 -- # local -a pci_devs 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@132 -- # pci_net_devs=() 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:24:31.262 
12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@133 -- # pci_drivers=() 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@133 -- # local -A pci_drivers 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@135 -- # net_devs=() 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@135 -- # local -ga net_devs 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@136 -- # e810=() 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@136 -- # local -ga e810 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@137 -- # x722=() 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@137 -- # local -ga x722 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@138 -- # mlx=() 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@138 -- # local -ga mlx 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:31.262 12:08:05 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:31.262 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:31.262 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:31.262 12:08:05 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@234 -- # [[ up == up ]] 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:31.262 Found net devices under 0000:86:00.0: cvl_0_0 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@227 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@234 -- # [[ up == up ]] 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:31.262 Found net devices under 0000:86:00.1: cvl_0_1 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:31.262 12:08:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:31.262 ************************************ 00:24:31.262 START TEST nvmf_perf_adq 00:24:31.262 ************************************ 00:24:31.263 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:31.263 * Looking for test storage... 00:24:31.263 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:31.263 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:31.263 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:24:31.263 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:31.564 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:31.564 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:31.564 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:31.564 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:31.564 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:24:31.564 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:24:31.564 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:24:31.564 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:24:31.564 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:24:31.564 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:24:31.564 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 
00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:24:31.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.565 --rc genhtml_branch_coverage=1 00:24:31.565 --rc genhtml_function_coverage=1 00:24:31.565 --rc genhtml_legend=1 00:24:31.565 --rc geninfo_all_blocks=1 00:24:31.565 --rc geninfo_unexecuted_blocks=1 00:24:31.565 00:24:31.565 ' 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:31.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.565 --rc genhtml_branch_coverage=1 00:24:31.565 --rc genhtml_function_coverage=1 00:24:31.565 --rc genhtml_legend=1 00:24:31.565 --rc geninfo_all_blocks=1 00:24:31.565 --rc geninfo_unexecuted_blocks=1 00:24:31.565 00:24:31.565 ' 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:31.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.565 --rc genhtml_branch_coverage=1 00:24:31.565 --rc genhtml_function_coverage=1 00:24:31.565 --rc genhtml_legend=1 00:24:31.565 --rc geninfo_all_blocks=1 00:24:31.565 --rc geninfo_unexecuted_blocks=1 00:24:31.565 00:24:31.565 ' 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:31.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.565 --rc genhtml_branch_coverage=1 00:24:31.565 --rc genhtml_function_coverage=1 00:24:31.565 --rc genhtml_legend=1 00:24:31.565 --rc geninfo_all_blocks=1 00:24:31.565 --rc geninfo_unexecuted_blocks=1 00:24:31.565 00:24:31.565 ' 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:31.565 
12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:31.565 12:08:05 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@50 -- # : 0 
00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:24:31.565 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@54 -- # have_pci_nics=0 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # xtrace_disable 00:24:31.565 12:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # pci_devs=() 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # local -a pci_devs 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # pci_net_devs=() 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # 
local -a pci_net_devs 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # pci_drivers=() 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # local -A pci_drivers 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # net_devs=() 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # local -ga net_devs 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # e810=() 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # local -ga e810 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # x722=() 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # local -ga x722 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # mlx=() 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # local -ga mlx 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
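The `common.sh: line 31: [: : integer expression expected` message recorded a few records above is a classic bash pitfall, not a harmless trace artifact: `[ ... -eq ... ]` requires an integer on both sides, and an unset variable expands to the empty string. A minimal reproduction (the variable name is illustrative, not the one from common.sh), with the usual `${var:-0}` default shown as a fix:

```shell
# Reproduces the "[: : integer expression expected" failure seen in the log:
# -eq needs integers on both sides, and an empty variable is not an integer.
flag=""

if [ "$flag" -eq 1 ] 2>/dev/null; then   # test errors out; status is non-zero
  echo "enabled"
else
  echo "disabled"                         # so we fall through here
fi

# Defaulting the expansion to 0 keeps the comparison well-formed.
if [ "${flag:-0}" -eq 1 ]; then
  echo "enabled"
else
  echo "disabled"
fi
```

Both branches print `disabled`; the difference is that the second form does so without tripping an error, which is why the script above continues past line 31 despite the message.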
00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:38.134 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.134 12:08:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:38.134 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 
00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:38.134 Found net devices under 0000:86:00.0: cvl_0_0 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:38.134 Found net devices under 0000:86:00.1: cvl_0_1 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:38.134 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:24:38.135 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:38.135 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:24:38.135 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:24:38.135 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:24:38.135 12:08:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:24:40.039 12:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:24:45.315 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:24:45.315 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:24:45.315 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:45.315 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # prepare_net_devs 00:24:45.315 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # local -g is_hw=no 00:24:45.315 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # remove_target_ns 00:24:45.315 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:45.315 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:45.315 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:45.315 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@262 -- # [[ phy != virt ]] 00:24:45.315 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:24:45.315 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # xtrace_disable 00:24:45.315 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:45.315 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:45.315 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # pci_devs=() 00:24:45.315 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # local -a pci_devs 00:24:45.315 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # pci_net_devs=() 00:24:45.315 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:24:45.315 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # pci_drivers=() 00:24:45.315 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # local -A pci_drivers 00:24:45.315 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # net_devs=() 00:24:45.315 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # local -ga net_devs 00:24:45.315 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # e810=() 00:24:45.315 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # local -ga e810 00:24:45.315 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # x722=() 00:24:45.315 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # local -ga x722 00:24:45.315 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # mlx=() 00:24:45.315 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # local -ga mlx 00:24:45.316 12:08:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@171 -- 
# [[ e810 == e810 ]] 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:45.316 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:45.316 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:24:45.316 12:08:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:45.316 Found net devices under 0000:86:00.0: cvl_0_0 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:45.316 12:08:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:45.316 Found net devices under 0000:86:00.1: cvl_0_1 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # is_hw=yes 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@257 -- # create_target_ns 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@27 -- # local -gA dev_map 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@28 -- # local -g _dev 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@44 -- # ips=() 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:24:45.316 12:08:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # 
[[ -n '' ]] 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@11 -- # local val=167772161 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:24:45.316 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:24:45.316 10.0.0.1 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@11 -- # local val=167772162 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:24:45.317 12:08:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:24:45.317 10.0.0.2 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # eval 'ip netns exec 
nvmf_ns_spdk ip link set cvl_0_1 up' 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@38 -- # ping_ips 1 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 
00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=initiator0 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:24:45.317 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:24:45.577 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:45.577 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.401 ms 00:24:45.577 00:24:45.577 --- 10.0.0.1 ping statistics --- 00:24:45.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.577 rtt min/avg/max/mdev = 0.401/0.401/0.401/0.000 ms 00:24:45.577 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:24:45.577 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:24:45.577 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:45.577 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:45.577 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:45.577 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:45.577 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev target0 00:24:45.577 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=target0 00:24:45.577 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:24:45.577 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:24:45.577 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:24:45.577 12:08:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:24:45.577 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:24:45.577 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:24:45.577 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:24:45.577 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:24:45.577 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:24:45.577 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:24:45.577 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:24:45.577 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:24:45.577 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:24:45.577 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:24:45.577 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:45.577 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:24:45.577 00:24:45.577 --- 10.0.0.2 ping statistics --- 00:24:45.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.577 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:24:45.577 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # (( pair++ )) 00:24:45.577 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:24:45.577 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:45.577 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # return 0 00:24:45.577 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:24:45.577 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:24:45.577 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:24:45.577 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:24:45.577 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:24:45.577 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:24:45.577 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:24:45.577 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:24:45.577 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:24:45.577 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:24:45.577 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=initiator0 00:24:45.577 12:08:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:24:45.577 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=initiator1 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 
00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # return 1 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev= 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@169 -- # return 0 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev target0 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=target0 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # eval 'ip netns exec 
nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev target1 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=target1 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # return 1 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev= 
00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@169 -- # return 0 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # nvmfpid=131957 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # waitforlisten 131957 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 131957 ']' 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:45.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:45.578 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:45.578 [2024-12-05 12:08:19.688036] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:24:45.578 [2024-12-05 12:08:19.688078] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:45.578 [2024-12-05 12:08:19.763835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:45.838 [2024-12-05 12:08:19.807439] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:45.838 [2024-12-05 12:08:19.807472] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:45.838 [2024-12-05 12:08:19.807479] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:45.838 [2024-12-05 12:08:19.807485] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:45.838 [2024-12-05 12:08:19.807491] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:45.838 [2024-12-05 12:08:19.808931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:45.838 [2024-12-05 12:08:19.808969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:45.838 [2024-12-05 12:08:19.809077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:45.838 [2024-12-05 12:08:19.809078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:45.838 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:45.838 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:24:45.838 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:24:45.838 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:45.838 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:45.838 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:45.838 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:24:45.838 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:24:45.838 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:24:45.838 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.838 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:45.838 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.838 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:24:45.838 12:08:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:24:45.838 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.838 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:45.838 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.838 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:24:45.838 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.838 12:08:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:45.838 12:08:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.838 12:08:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:24:45.838 12:08:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.838 12:08:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:45.838 [2024-12-05 12:08:20.011571] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:45.838 12:08:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.838 12:08:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:45.838 12:08:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.838 12:08:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:46.249 Malloc1 00:24:46.249 12:08:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.249 12:08:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:46.249 12:08:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.249 12:08:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:46.249 12:08:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.249 12:08:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:46.249 12:08:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.249 12:08:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:46.249 12:08:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.249 12:08:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:46.249 12:08:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.249 12:08:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:46.249 [2024-12-05 12:08:20.081170] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:46.249 12:08:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.249 12:08:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=131992 00:24:46.249 12:08:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:24:46.249 12:08:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:48.231 12:08:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:24:48.231 12:08:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.231 12:08:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:48.231 12:08:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.231 12:08:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:24:48.231 "tick_rate": 2100000000, 00:24:48.231 "poll_groups": [ 00:24:48.231 { 00:24:48.231 "name": "nvmf_tgt_poll_group_000", 00:24:48.231 "admin_qpairs": 1, 00:24:48.231 "io_qpairs": 1, 00:24:48.231 "current_admin_qpairs": 1, 00:24:48.231 "current_io_qpairs": 1, 00:24:48.231 "pending_bdev_io": 0, 00:24:48.231 "completed_nvme_io": 20324, 00:24:48.231 "transports": [ 00:24:48.231 { 00:24:48.231 "trtype": "TCP" 00:24:48.231 } 00:24:48.231 ] 00:24:48.231 }, 00:24:48.231 { 00:24:48.231 "name": "nvmf_tgt_poll_group_001", 00:24:48.231 "admin_qpairs": 0, 00:24:48.231 "io_qpairs": 1, 00:24:48.231 "current_admin_qpairs": 0, 00:24:48.231 "current_io_qpairs": 1, 00:24:48.231 "pending_bdev_io": 0, 00:24:48.231 "completed_nvme_io": 20425, 00:24:48.231 "transports": [ 00:24:48.231 { 00:24:48.231 "trtype": "TCP" 00:24:48.231 } 00:24:48.231 ] 00:24:48.231 }, 00:24:48.231 { 00:24:48.231 "name": "nvmf_tgt_poll_group_002", 00:24:48.231 "admin_qpairs": 0, 00:24:48.231 "io_qpairs": 1, 00:24:48.231 "current_admin_qpairs": 0, 00:24:48.231 "current_io_qpairs": 1, 00:24:48.231 "pending_bdev_io": 0, 00:24:48.231 "completed_nvme_io": 20084, 00:24:48.231 
"transports": [ 00:24:48.231 { 00:24:48.231 "trtype": "TCP" 00:24:48.231 } 00:24:48.231 ] 00:24:48.231 }, 00:24:48.231 { 00:24:48.231 "name": "nvmf_tgt_poll_group_003", 00:24:48.231 "admin_qpairs": 0, 00:24:48.231 "io_qpairs": 1, 00:24:48.231 "current_admin_qpairs": 0, 00:24:48.231 "current_io_qpairs": 1, 00:24:48.231 "pending_bdev_io": 0, 00:24:48.231 "completed_nvme_io": 20357, 00:24:48.231 "transports": [ 00:24:48.231 { 00:24:48.231 "trtype": "TCP" 00:24:48.231 } 00:24:48.231 ] 00:24:48.231 } 00:24:48.231 ] 00:24:48.231 }' 00:24:48.231 12:08:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:24:48.231 12:08:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:24:48.231 12:08:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:24:48.231 12:08:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:24:48.231 12:08:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 131992 00:24:56.353 Initializing NVMe Controllers 00:24:56.353 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:56.353 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:56.353 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:56.353 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:56.353 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:56.353 Initialization complete. Launching workers. 
00:24:56.353 ======================================================== 00:24:56.353 Latency(us) 00:24:56.353 Device Information : IOPS MiB/s Average min max 00:24:56.353 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10838.10 42.34 5907.12 1757.81 9932.01 00:24:56.353 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10942.20 42.74 5850.64 1804.08 12863.18 00:24:56.353 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10796.10 42.17 5929.93 2207.66 10933.20 00:24:56.353 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10809.40 42.22 5921.00 1946.63 9721.37 00:24:56.353 ======================================================== 00:24:56.353 Total : 43385.80 169.48 5902.01 1757.81 12863.18 00:24:56.353 00:24:56.353 12:08:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:24:56.353 12:08:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # nvmfcleanup 00:24:56.353 12:08:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@99 -- # sync 00:24:56.353 12:08:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:24:56.353 12:08:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@102 -- # set +e 00:24:56.353 12:08:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@103 -- # for i in {1..20} 00:24:56.353 12:08:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:24:56.353 rmmod nvme_tcp 00:24:56.353 rmmod nvme_fabrics 00:24:56.353 rmmod nvme_keyring 00:24:56.353 12:08:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:24:56.353 12:08:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@106 -- # set -e 00:24:56.353 12:08:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@107 -- # return 0 00:24:56.353 12:08:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # '[' -n 131957 ']' 00:24:56.353 12:08:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@337 -- # killprocess 131957 00:24:56.353 12:08:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 131957 ']' 00:24:56.353 12:08:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 131957 00:24:56.353 12:08:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:24:56.353 12:08:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:56.353 12:08:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 131957 00:24:56.353 12:08:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:56.353 12:08:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:56.353 12:08:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 131957' 00:24:56.353 killing process with pid 131957 00:24:56.353 12:08:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 131957 00:24:56.353 12:08:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 131957 00:24:56.613 12:08:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:24:56.613 12:08:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # nvmf_fini 00:24:56.613 12:08:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@264 -- # local dev 00:24:56.613 12:08:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@267 -- # remove_target_ns 00:24:56.613 12:08:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 
00:24:56.613 12:08:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:56.613 12:08:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:58.522 12:08:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@268 -- # delete_main_bridge 00:24:58.522 12:08:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:24:58.522 12:08:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@130 -- # return 0 00:24:58.522 12:08:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:24:58.522 12:08:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:24:58.522 12:08:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:24:58.522 12:08:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:24:58.522 12:08:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:24:58.522 12:08:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:24:58.522 12:08:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:24:58.522 12:08:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:24:58.522 12:08:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:24:58.522 12:08:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:24:58.522 12:08:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:24:58.522 12:08:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:24:58.522 12:08:32 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:24:58.522 12:08:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:24:58.522 12:08:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:24:58.522 12:08:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:24:58.522 12:08:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:24:58.522 12:08:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@41 -- # _dev=0 00:24:58.522 12:08:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@41 -- # dev_map=() 00:24:58.522 12:08:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@284 -- # iptr 00:24:58.522 12:08:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@542 -- # iptables-save 00:24:58.522 12:08:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:24:58.522 12:08:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@542 -- # iptables-restore 00:24:58.522 12:08:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:24:58.522 12:08:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:24:58.522 12:08:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:24:59.902 12:08:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:25:01.810 12:08:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:25:07.087 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:25:07.087 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:25:07.087 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@294 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:25:07.087 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # prepare_net_devs 00:25:07.087 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # local -g is_hw=no 00:25:07.087 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # remove_target_ns 00:25:07.087 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:07.087 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:07.087 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:07.087 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:25:07.087 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:25:07.087 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # xtrace_disable 00:25:07.087 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:07.087 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:07.087 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # pci_devs=() 00:25:07.087 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # local -a pci_devs 00:25:07.087 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # pci_net_devs=() 00:25:07.087 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:25:07.087 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # pci_drivers=() 00:25:07.087 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # local -A pci_drivers 00:25:07.087 12:08:40 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # net_devs=() 00:25:07.087 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # local -ga net_devs 00:25:07.087 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # e810=() 00:25:07.087 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # local -ga e810 00:25:07.087 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # x722=() 00:25:07.087 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # local -ga x722 00:25:07.087 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # mlx=() 00:25:07.087 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # local -ga mlx 00:25:07.087 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:07.087 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:07.087 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:07.087 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:07.088 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:07.088 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.088 12:08:40 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:07.088 Found net devices under 0000:86:00.0: cvl_0_0 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:07.088 Found net devices under 0000:86:00.1: cvl_0_1 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # is_hw=yes 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:25:07.088 
12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@257 -- # create_target_ns 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@27 -- # local -gA dev_map 00:25:07.088 12:08:40 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@28 -- # local -g _dev 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@44 -- # ips=() 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@11 -- # local val=167772161 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:25:07.088 10.0.0.1 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:25:07.088 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:07.089 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:07.089 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:25:07.089 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@11 -- # local val=167772162 00:25:07.089 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:25:07.089 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:25:07.089 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:25:07.089 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:25:07.089 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:25:07.089 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:25:07.089 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:25:07.089 10.0.0.2 00:25:07.089 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:25:07.089 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:25:07.089 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:25:07.089 12:08:40 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:25:07.089 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:25:07.089 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:25:07.089 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:25:07.089 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:07.089 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:07.089 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:25:07.089 12:08:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@38 -- # ping_ips 1 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=initiator0 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:25:07.089 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:07.089 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.485 ms
00:25:07.089
00:25:07.089 --- 10.0.0.1 ping statistics ---
00:25:07.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:07.089 rtt min/avg/max/mdev = 0.485/0.485/0.485/0.000 ms
00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD
00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD
00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD
00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev target0
00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=target0
00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n target0 ]]
00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]]
00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@110 -- # echo cvl_0_1
00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev=cvl_0_1
00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias'
00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias
00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip=10.0.0.2
00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]]
00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@175 -- # echo 10.0.0.2
00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2
00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1
00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@90 -- # [[ -n '' ]]
00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2'
00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2
00:25:07.089 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:07.089 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms
00:25:07.089
00:25:07.089 --- 10.0.0.2 ping statistics ---
00:25:07.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:07.089 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms
00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # (( pair++ ))
00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # (( pair < pairs ))
00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # return 0
00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # '[' '' == iso ']'
00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # nvmf_legacy_env
00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1
00:25:07.089 12:08:41
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=initiator0 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
00:25:07.089 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=initiator1 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # return 1 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev= 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@169 -- # return 0 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev target0 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=target0 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev target1 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=target1 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # return 1 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev= 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@169 -- # return 0 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@22 -- # ip netns exec nvmf_ns_spdk ethtool --offload cvl_0_1 hw-tc-offload on 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec nvmf_ns_spdk ethtool --set-priv-flags cvl_0_1 channel-pkt-inspect-optimize off 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:25:07.090 net.core.busy_poll = 1 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:25:07.090 net.core.busy_read = 1 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:25:07.090 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec nvmf_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_1 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:25:07.349 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec nvmf_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_1 ingress 00:25:07.349 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec nvmf_ns_spdk /usr/sbin/tc filter add dev cvl_0_1 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:25:07.349 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_1 00:25:07.349 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:25:07.349 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:07.349 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:07.349 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:25:07.349 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # nvmfpid=135795 00:25:07.349 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # waitforlisten 135795 00:25:07.349 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:07.349 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 135795 ']' 00:25:07.349 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:07.349 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:07.349 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:07.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:07.349 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:07.349 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:07.349 [2024-12-05 12:08:41.515812] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:25:07.349 [2024-12-05 12:08:41.515870] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:07.608 [2024-12-05 12:08:41.594240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:07.608 [2024-12-05 12:08:41.637011] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:07.608 [2024-12-05 12:08:41.637045] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:07.608 [2024-12-05 12:08:41.637052] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:07.608 [2024-12-05 12:08:41.637058] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:07.608 [2024-12-05 12:08:41.637064] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:07.608 [2024-12-05 12:08:41.638463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:07.608 [2024-12-05 12:08:41.638566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:07.608 [2024-12-05 12:08:41.638672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:07.608 [2024-12-05 12:08:41.638673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:07.608 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:07.608 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:25:07.608 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:07.608 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:07.608 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:07.608 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:07.608 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:25:07.608 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:25:07.608 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:25:07.608 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.608 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:07.608 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.608 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:25:07.608 12:08:41 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:25:07.608 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.608 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:07.608 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.608 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:25:07.608 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.608 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:07.867 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.868 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:25:07.868 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.868 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:07.868 [2024-12-05 12:08:41.840807] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:07.868 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.868 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:07.868 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.868 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:07.868 Malloc1 00:25:07.868 12:08:41 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.868 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:07.868 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.868 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:07.868 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.868 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:07.868 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.868 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:07.868 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.868 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:07.868 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.868 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:07.868 [2024-12-05 12:08:41.912466] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:07.868 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.868 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=135829 00:25:07.868 12:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:25:07.868 12:08:41 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:09.770 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:25:09.770 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.770 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:09.770 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.770 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:25:09.770 "tick_rate": 2100000000, 00:25:09.770 "poll_groups": [ 00:25:09.770 { 00:25:09.770 "name": "nvmf_tgt_poll_group_000", 00:25:09.770 "admin_qpairs": 1, 00:25:09.770 "io_qpairs": 2, 00:25:09.770 "current_admin_qpairs": 1, 00:25:09.770 "current_io_qpairs": 2, 00:25:09.770 "pending_bdev_io": 0, 00:25:09.770 "completed_nvme_io": 29021, 00:25:09.770 "transports": [ 00:25:09.770 { 00:25:09.770 "trtype": "TCP" 00:25:09.770 } 00:25:09.770 ] 00:25:09.770 }, 00:25:09.770 { 00:25:09.770 "name": "nvmf_tgt_poll_group_001", 00:25:09.770 "admin_qpairs": 0, 00:25:09.770 "io_qpairs": 2, 00:25:09.770 "current_admin_qpairs": 0, 00:25:09.770 "current_io_qpairs": 2, 00:25:09.770 "pending_bdev_io": 0, 00:25:09.770 "completed_nvme_io": 28885, 00:25:09.770 "transports": [ 00:25:09.770 { 00:25:09.770 "trtype": "TCP" 00:25:09.770 } 00:25:09.770 ] 00:25:09.770 }, 00:25:09.770 { 00:25:09.770 "name": "nvmf_tgt_poll_group_002", 00:25:09.770 "admin_qpairs": 0, 00:25:09.770 "io_qpairs": 0, 00:25:09.770 "current_admin_qpairs": 0, 00:25:09.771 "current_io_qpairs": 0, 00:25:09.771 "pending_bdev_io": 0, 00:25:09.771 "completed_nvme_io": 0, 00:25:09.771 "transports": 
[ 00:25:09.771 { 00:25:09.771 "trtype": "TCP" 00:25:09.771 } 00:25:09.771 ] 00:25:09.771 }, 00:25:09.771 { 00:25:09.771 "name": "nvmf_tgt_poll_group_003", 00:25:09.771 "admin_qpairs": 0, 00:25:09.771 "io_qpairs": 0, 00:25:09.771 "current_admin_qpairs": 0, 00:25:09.771 "current_io_qpairs": 0, 00:25:09.771 "pending_bdev_io": 0, 00:25:09.771 "completed_nvme_io": 0, 00:25:09.771 "transports": [ 00:25:09.771 { 00:25:09.771 "trtype": "TCP" 00:25:09.771 } 00:25:09.771 ] 00:25:09.771 } 00:25:09.771 ] 00:25:09.771 }' 00:25:09.771 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:25:09.771 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:25:10.029 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:25:10.029 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:25:10.029 12:08:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 135829 00:25:18.145 Initializing NVMe Controllers 00:25:18.145 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:18.145 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:25:18.145 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:25:18.146 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:25:18.146 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:25:18.146 Initialization complete. Launching workers. 
00:25:18.146 ======================================================== 00:25:18.146 Latency(us) 00:25:18.146 Device Information : IOPS MiB/s Average min max 00:25:18.146 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7724.50 30.17 8286.63 1192.23 54186.71 00:25:18.146 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6938.10 27.10 9226.20 1473.45 53399.18 00:25:18.146 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7919.90 30.94 8082.44 1498.40 52315.17 00:25:18.146 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7670.90 29.96 8368.90 1453.62 52456.84 00:25:18.146 ======================================================== 00:25:18.146 Total : 30253.40 118.18 8469.51 1192.23 54186.71 00:25:18.146 00:25:18.146 12:08:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:25:18.146 12:08:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # nvmfcleanup 00:25:18.146 12:08:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@99 -- # sync 00:25:18.146 12:08:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:25:18.146 12:08:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@102 -- # set +e 00:25:18.146 12:08:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@103 -- # for i in {1..20} 00:25:18.146 12:08:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:25:18.146 rmmod nvme_tcp 00:25:18.146 rmmod nvme_fabrics 00:25:18.146 rmmod nvme_keyring 00:25:18.146 12:08:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:25:18.146 12:08:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@106 -- # set -e 00:25:18.146 12:08:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@107 -- # return 0 00:25:18.146 12:08:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # '[' -n 135795 ']' 00:25:18.146 12:08:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@337 -- # killprocess 135795 00:25:18.146 12:08:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 135795 ']' 00:25:18.146 12:08:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 135795 00:25:18.146 12:08:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:25:18.146 12:08:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:18.146 12:08:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 135795 00:25:18.146 12:08:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:18.146 12:08:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:18.146 12:08:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 135795' 00:25:18.146 killing process with pid 135795 00:25:18.146 12:08:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 135795 00:25:18.146 12:08:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 135795 00:25:18.406 12:08:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:25:18.406 12:08:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # nvmf_fini 00:25:18.406 12:08:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@264 -- # local dev 00:25:18.406 12:08:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@267 -- # remove_target_ns 00:25:18.406 12:08:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 
00:25:18.406 12:08:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:18.406 12:08:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@268 -- # delete_main_bridge 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@130 -- # return 0 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:25:21.698 12:08:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@41 -- # _dev=0 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@41 -- # dev_map=() 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@284 -- # iptr 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@542 -- # iptables-save 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@542 -- # iptables-restore 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:25:21.698 00:25:21.698 real 0m50.179s 00:25:21.698 user 2m43.851s 00:25:21.698 sys 0m10.642s 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:21.698 ************************************ 00:25:21.698 END TEST nvmf_perf_adq 00:25:21.698 ************************************ 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:21.698 ************************************ 00:25:21.698 START TEST nvmf_shutdown 00:25:21.698 ************************************ 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:25:21.698 * Looking for test storage... 00:25:21.698 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:25:21.698 12:08:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] 
> ver2[v] )) 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:21.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.698 --rc genhtml_branch_coverage=1 00:25:21.698 --rc genhtml_function_coverage=1 00:25:21.698 --rc genhtml_legend=1 00:25:21.698 --rc geninfo_all_blocks=1 00:25:21.698 --rc geninfo_unexecuted_blocks=1 00:25:21.698 00:25:21.698 ' 00:25:21.698 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:21.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.698 --rc genhtml_branch_coverage=1 00:25:21.698 --rc genhtml_function_coverage=1 00:25:21.698 --rc genhtml_legend=1 00:25:21.698 --rc geninfo_all_blocks=1 00:25:21.699 --rc geninfo_unexecuted_blocks=1 00:25:21.699 00:25:21.699 ' 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:21.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.699 --rc genhtml_branch_coverage=1 00:25:21.699 --rc genhtml_function_coverage=1 00:25:21.699 --rc genhtml_legend=1 00:25:21.699 --rc geninfo_all_blocks=1 00:25:21.699 --rc geninfo_unexecuted_blocks=1 00:25:21.699 00:25:21.699 ' 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:21.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.699 --rc genhtml_branch_coverage=1 00:25:21.699 --rc genhtml_function_coverage=1 00:25:21.699 --rc genhtml_legend=1 
00:25:21.699 --rc geninfo_all_blocks=1 00:25:21.699 --rc geninfo_unexecuted_blocks=1 00:25:21.699 00:25:21.699 ' 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:25:21.699 12:08:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@50 -- # : 0 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:25:21.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression 
expected 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@54 -- # have_pci_nics=0 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:21.699 ************************************ 00:25:21.699 START TEST nvmf_shutdown_tc1 00:25:21.699 ************************************ 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # prepare_net_devs 00:25:21.699 12:08:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # local -g is_hw=no 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # remove_target_ns 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # xtrace_disable 00:25:21.699 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:28.273 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:28.273 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@131 -- # pci_devs=() 00:25:28.273 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@131 -- # local -a pci_devs 00:25:28.273 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@132 -- # pci_net_devs=() 00:25:28.273 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:25:28.273 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@133 -- # pci_drivers=() 00:25:28.273 12:09:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@133 -- # local -A pci_drivers 00:25:28.273 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@135 -- # net_devs=() 00:25:28.273 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@135 -- # local -ga net_devs 00:25:28.273 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@136 -- # e810=() 00:25:28.273 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@136 -- # local -ga e810 00:25:28.273 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@137 -- # x722=() 00:25:28.273 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@137 -- # local -ga x722 00:25:28.273 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@138 -- # mlx=() 00:25:28.273 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@138 -- # local -ga mlx 00:25:28.273 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:28.273 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:28.273 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:28.273 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:28.273 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:28.274 12:09:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:28.274 Found 0000:86:00.0 (0x8086 - 0x159b) 
00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:28.274 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:28.274 Found net devices under 0000:86:00.0: cvl_0_0 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:28.274 12:09:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:28.274 Found net devices under 0000:86:00.1: cvl_0_1 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # is_hw=yes 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@257 -- # create_target_ns 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@144 -- # 
NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@27 -- # local -gA dev_map 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@28 -- # local -g _dev 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@33 -- # 
(( _dev = _dev, max = _dev )) 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@44 -- # ips=() 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@68 -- # [[ phy == 
veth ]] 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:25:28.274 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@11 -- # local val=167772161 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/setup.sh@210 -- # echo 10.0.0.1 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:25:28.275 10.0.0.1 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@11 -- # local val=167772162 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:25:28.275 12:09:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:25:28.275 10.0.0.2 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:25:28.275 12:09:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@38 -- # ping_ips 1 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:25:28.275 12:09:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@107 -- # local dev=initiator0 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:28.275 12:09:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:25:28.275 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:28.275 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.394 ms 00:25:28.275 00:25:28.275 --- 10.0.0.1 ping statistics --- 00:25:28.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.275 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@107 -- # 
local dev=target0 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:25:28.275 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:25:28.275 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:28.275 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:25:28.275 00:25:28.275 --- 10.0.0.2 ping statistics --- 00:25:28.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.275 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@98 -- # (( pair++ )) 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # return 0 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:25:28.276 12:09:01 
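Both round-trips above go through a `ping_ip` helper that optionally prefixes the command with a namespace wrapper passed by name (`NVMF_TARGET_NS_CMD`, which the trace shows expanding to `ip netns exec nvmf_ns_spdk`). A sketch of that dispatch, reduced to printing the command it would `eval` so the string-building is visible (the dry-run `echo` is mine, not in setup.sh):

```shell
#!/usr/bin/env bash
# ping_ip <ip> [name-of-ns-command-array]: build the ping command line,
# prefixing the namespace wrapper when one is named.
ping_ip() {
  local ip=$1 ns_name=${2-} count=1 prefix=
  if [ -n "$ns_name" ]; then
    local -n ns=$ns_name        # bash nameref, as in the "local -n ns=..." trace
    prefix="${ns[*]} "
  fi
  # setup.sh evals this string; here we print it to show the dispatch
  echo "${prefix}ping -c $count $ip"
}

NVMF_TARGET_NS_CMD=(ip netns exec nvmf_ns_spdk)
ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD   # initiator IP pinged from inside the namespace
ping_ip 10.0.0.2                      # target IP pinged from the host side
```

This mirrors the trace: the initiator address is reached from within `nvmf_ns_spdk`, while the target address is reached from the host, proving the veth-less phy pair routes both ways.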
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@107 -- # local dev=initiator0 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:25:28.276 12:09:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@107 -- # local dev=initiator1 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # return 1 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # dev= 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@169 -- # return 0 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:28.276 12:09:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@107 -- # local dev=target0 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:25:28.276 
12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # get_net_dev target1 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@107 -- # local dev=target1 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # return 1 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # dev= 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@169 -- # return 0 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:25:28.276 12:09:01 
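The address lookups in the legacy-env section above (`get_ip_address` via `get_net_dev`) never query `ip addr`; they read back the `ifalias` file that `set_ip` tee'd earlier, optionally inside the target namespace. A sketch of that readback under stated assumptions: the `SYSFS_NET` override is my addition so the function can be exercised against a scratch tree, and the real helper first resolves logical names like `target0` to `cvl_0_1` through a device map:

```shell
#!/usr/bin/env bash
# get_ip_address <dev> [name-of-ns-command-array]: read the address that
# set_ip stored in /sys/class/net/<dev>/ifalias.
get_ip_address() {
  local dev=$1 ns_name=${2-} prefix= ip
  local sysfs=${SYSFS_NET:-/sys/class/net}   # override point for testing (assumption)
  if [ -n "$ns_name" ]; then
    local -n ns=$ns_name
    prefix="${ns[*]} "
  fi
  ip=$(eval "${prefix}cat $sysfs/$dev/ifalias") || return 1
  [ -n "$ip" ] && echo "$ip"
}
```

With `SYSFS_NET` pointed at a scratch directory containing `cvl_0_0/ifalias`, this returns the stored `10.0.0.1` exactly as the trace does; an empty or missing alias makes the lookup fail, which is how the trace detects there is no `initiator1`/`target1`.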
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # nvmfpid=141421 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # waitforlisten 141421 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 141421 ']' 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:25:28.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:28.276 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:28.276 [2024-12-05 12:09:01.987782] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:25:28.276 [2024-12-05 12:09:01.987826] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:28.276 [2024-12-05 12:09:02.066195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:28.276 [2024-12-05 12:09:02.107047] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:28.276 [2024-12-05 12:09:02.107087] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:28.276 [2024-12-05 12:09:02.107093] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:28.276 [2024-12-05 12:09:02.107099] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:28.276 [2024-12-05 12:09:02.107104] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:28.276 [2024-12-05 12:09:02.108750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:28.277 [2024-12-05 12:09:02.108860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:28.277 [2024-12-05 12:09:02.108965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:28.277 [2024-12-05 12:09:02.108966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:28.277 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:28.277 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:25:28.277 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:28.277 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:28.277 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:28.277 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:28.277 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:28.277 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.277 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:28.277 [2024-12-05 12:09:02.259005] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:28.277 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.277 12:09:02 
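`waitforlisten` above blocks until the freshly launched `nvmf_tgt` (pid 141421) is serving `/var/tmp/spdk.sock`. A sketch of the polling pattern, with assumptions flagged: the `max_retries=100` bound is from the trace, but checking liveness with `kill -0` and readiness with a socket-file test is my simplification of the real helper's RPC probe:

```shell
#!/usr/bin/env bash
# waitforlisten <pid> [rpc_addr]: poll until the process is listening on
# its UNIX-domain RPC socket, failing fast if the process dies first.
waitforlisten() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
  for ((i = 0; i < 100; i++)); do
    kill -0 "$pid" 2>/dev/null || return 1   # target process is gone
    [ -S "$rpc_addr" ] && return 0           # socket exists: ready
    sleep 0.1
  done
  return 1                                    # retries exhausted
}
```

The fail-fast branch matters here: if `nvmf_tgt` crashes during DPDK EAL init, the test aborts immediately instead of burning the full retry budget.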
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:28.277 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:28.277 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:28.277 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:28.277 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:28.277 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:28.277 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:28.277 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:28.277 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:28.277 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:28.277 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:28.277 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:28.277 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:28.277 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:28.277 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:25:28.277 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:28.277 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:28.277 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:28.277 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:28.277 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:28.277 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:28.277 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:28.277 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:28.277 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:28.277 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:28.277 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:25:28.277 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.277 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:28.277 Malloc1 00:25:28.277 [2024-12-05 12:09:02.371986] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:28.277 Malloc2 00:25:28.277 Malloc3 00:25:28.536 Malloc4 00:25:28.536 Malloc5 00:25:28.536 Malloc6 00:25:28.536 Malloc7 00:25:28.536 Malloc8 00:25:28.536 Malloc9 
00:25:28.796 Malloc10 00:25:28.796 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.796 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:28.796 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:28.796 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:28.796 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=141699 00:25:28.796 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 141699 /var/tmp/bdevperf.sock 00:25:28.796 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 141699 ']' 00:25:28.796 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:28.796 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:25:28.796 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:28.796 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:28.796 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:28.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:25:28.796 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # config=() 00:25:28.796 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:28.796 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # local subsystem config 00:25:28.796 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:28.796 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:28.796 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:28.796 { 00:25:28.796 "params": { 00:25:28.796 "name": "Nvme$subsystem", 00:25:28.796 "trtype": "$TEST_TRANSPORT", 00:25:28.796 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:28.796 "adrfam": "ipv4", 00:25:28.796 "trsvcid": "$NVMF_PORT", 00:25:28.796 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:28.796 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:28.796 "hdgst": ${hdgst:-false}, 00:25:28.796 "ddgst": ${ddgst:-false} 00:25:28.796 }, 00:25:28.796 "method": "bdev_nvme_attach_controller" 00:25:28.796 } 00:25:28.796 EOF 00:25:28.796 )") 00:25:28.796 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:25:28.796 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:28.796 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:28.796 { 00:25:28.796 "params": { 00:25:28.796 "name": "Nvme$subsystem", 00:25:28.796 "trtype": "$TEST_TRANSPORT", 00:25:28.796 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:28.796 "adrfam": "ipv4", 00:25:28.796 "trsvcid": "$NVMF_PORT", 00:25:28.796 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:25:28.796 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:28.796 "hdgst": ${hdgst:-false}, 00:25:28.796 "ddgst": ${ddgst:-false} 00:25:28.796 }, 00:25:28.796 "method": "bdev_nvme_attach_controller" 00:25:28.796 } 00:25:28.796 EOF 00:25:28.796 )") 00:25:28.796 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:25:28.796 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:28.796 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:28.796 { 00:25:28.796 "params": { 00:25:28.796 "name": "Nvme$subsystem", 00:25:28.796 "trtype": "$TEST_TRANSPORT", 00:25:28.796 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:28.796 "adrfam": "ipv4", 00:25:28.796 "trsvcid": "$NVMF_PORT", 00:25:28.796 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:28.796 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:28.796 "hdgst": ${hdgst:-false}, 00:25:28.796 "ddgst": ${ddgst:-false} 00:25:28.796 }, 00:25:28.796 "method": "bdev_nvme_attach_controller" 00:25:28.796 } 00:25:28.796 EOF 00:25:28.796 )") 00:25:28.796 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:25:28.796 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:28.796 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:28.796 { 00:25:28.796 "params": { 00:25:28.796 "name": "Nvme$subsystem", 00:25:28.796 "trtype": "$TEST_TRANSPORT", 00:25:28.796 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:28.796 "adrfam": "ipv4", 00:25:28.796 "trsvcid": "$NVMF_PORT", 00:25:28.796 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:28.796 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:28.796 "hdgst": 
${hdgst:-false}, 00:25:28.796 "ddgst": ${ddgst:-false} 00:25:28.796 }, 00:25:28.796 "method": "bdev_nvme_attach_controller" 00:25:28.796 } 00:25:28.796 EOF 00:25:28.796 )") 00:25:28.796 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:25:28.796 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:28.796 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:28.796 { 00:25:28.796 "params": { 00:25:28.796 "name": "Nvme$subsystem", 00:25:28.796 "trtype": "$TEST_TRANSPORT", 00:25:28.796 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:28.796 "adrfam": "ipv4", 00:25:28.796 "trsvcid": "$NVMF_PORT", 00:25:28.796 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:28.796 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:28.796 "hdgst": ${hdgst:-false}, 00:25:28.796 "ddgst": ${ddgst:-false} 00:25:28.796 }, 00:25:28.796 "method": "bdev_nvme_attach_controller" 00:25:28.796 } 00:25:28.796 EOF 00:25:28.796 )") 00:25:28.796 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:25:28.796 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:28.796 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:28.796 { 00:25:28.796 "params": { 00:25:28.796 "name": "Nvme$subsystem", 00:25:28.796 "trtype": "$TEST_TRANSPORT", 00:25:28.796 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:28.796 "adrfam": "ipv4", 00:25:28.796 "trsvcid": "$NVMF_PORT", 00:25:28.796 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:28.796 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:28.796 "hdgst": ${hdgst:-false}, 00:25:28.796 "ddgst": ${ddgst:-false} 00:25:28.796 }, 00:25:28.796 "method": "bdev_nvme_attach_controller" 
00:25:28.796 } 00:25:28.796 EOF 00:25:28.796 )") 00:25:28.796 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:25:28.796 [2024-12-05 12:09:02.841317] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:25:28.796 [2024-12-05 12:09:02.841366] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:25:28.796 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:28.796 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:28.796 { 00:25:28.796 "params": { 00:25:28.796 "name": "Nvme$subsystem", 00:25:28.796 "trtype": "$TEST_TRANSPORT", 00:25:28.796 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:28.796 "adrfam": "ipv4", 00:25:28.796 "trsvcid": "$NVMF_PORT", 00:25:28.796 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:28.796 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:28.796 "hdgst": ${hdgst:-false}, 00:25:28.796 "ddgst": ${ddgst:-false} 00:25:28.796 }, 00:25:28.796 "method": "bdev_nvme_attach_controller" 00:25:28.796 } 00:25:28.796 EOF 00:25:28.796 )") 00:25:28.796 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:25:28.796 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:28.796 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:28.796 { 00:25:28.796 "params": { 00:25:28.796 "name": "Nvme$subsystem", 00:25:28.796 "trtype": "$TEST_TRANSPORT", 00:25:28.796 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:28.796 "adrfam": "ipv4", 00:25:28.796 "trsvcid": "$NVMF_PORT", 
00:25:28.796 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:28.796 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:28.796 "hdgst": ${hdgst:-false}, 00:25:28.796 "ddgst": ${ddgst:-false} 00:25:28.796 }, 00:25:28.796 "method": "bdev_nvme_attach_controller" 00:25:28.796 } 00:25:28.796 EOF 00:25:28.796 )") 00:25:28.796 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:25:28.796 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:28.797 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:28.797 { 00:25:28.797 "params": { 00:25:28.797 "name": "Nvme$subsystem", 00:25:28.797 "trtype": "$TEST_TRANSPORT", 00:25:28.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:28.797 "adrfam": "ipv4", 00:25:28.797 "trsvcid": "$NVMF_PORT", 00:25:28.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:28.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:28.797 "hdgst": ${hdgst:-false}, 00:25:28.797 "ddgst": ${ddgst:-false} 00:25:28.797 }, 00:25:28.797 "method": "bdev_nvme_attach_controller" 00:25:28.797 } 00:25:28.797 EOF 00:25:28.797 )") 00:25:28.797 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:25:28.797 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:28.797 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:28.797 { 00:25:28.797 "params": { 00:25:28.797 "name": "Nvme$subsystem", 00:25:28.797 "trtype": "$TEST_TRANSPORT", 00:25:28.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:28.797 "adrfam": "ipv4", 00:25:28.797 "trsvcid": "$NVMF_PORT", 00:25:28.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:28.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:25:28.797 "hdgst": ${hdgst:-false}, 00:25:28.797 "ddgst": ${ddgst:-false} 00:25:28.797 }, 00:25:28.797 "method": "bdev_nvme_attach_controller" 00:25:28.797 } 00:25:28.797 EOF 00:25:28.797 )") 00:25:28.797 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:25:28.797 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # jq . 00:25:28.797 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@397 -- # IFS=, 00:25:28.797 12:09:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:25:28.797 "params": { 00:25:28.797 "name": "Nvme1", 00:25:28.797 "trtype": "tcp", 00:25:28.797 "traddr": "10.0.0.2", 00:25:28.797 "adrfam": "ipv4", 00:25:28.797 "trsvcid": "4420", 00:25:28.797 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:28.797 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:28.797 "hdgst": false, 00:25:28.797 "ddgst": false 00:25:28.797 }, 00:25:28.797 "method": "bdev_nvme_attach_controller" 00:25:28.797 },{ 00:25:28.797 "params": { 00:25:28.797 "name": "Nvme2", 00:25:28.797 "trtype": "tcp", 00:25:28.797 "traddr": "10.0.0.2", 00:25:28.797 "adrfam": "ipv4", 00:25:28.797 "trsvcid": "4420", 00:25:28.797 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:28.797 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:28.797 "hdgst": false, 00:25:28.797 "ddgst": false 00:25:28.797 }, 00:25:28.797 "method": "bdev_nvme_attach_controller" 00:25:28.797 },{ 00:25:28.797 "params": { 00:25:28.797 "name": "Nvme3", 00:25:28.797 "trtype": "tcp", 00:25:28.797 "traddr": "10.0.0.2", 00:25:28.797 "adrfam": "ipv4", 00:25:28.797 "trsvcid": "4420", 00:25:28.797 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:28.797 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:28.797 "hdgst": false, 00:25:28.797 "ddgst": false 00:25:28.797 }, 00:25:28.797 "method": "bdev_nvme_attach_controller" 00:25:28.797 },{ 00:25:28.797 "params": { 00:25:28.797 
"name": "Nvme4", 00:25:28.797 "trtype": "tcp", 00:25:28.797 "traddr": "10.0.0.2", 00:25:28.797 "adrfam": "ipv4", 00:25:28.797 "trsvcid": "4420", 00:25:28.797 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:28.797 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:28.797 "hdgst": false, 00:25:28.797 "ddgst": false 00:25:28.797 }, 00:25:28.797 "method": "bdev_nvme_attach_controller" 00:25:28.797 },{ 00:25:28.797 "params": { 00:25:28.797 "name": "Nvme5", 00:25:28.797 "trtype": "tcp", 00:25:28.797 "traddr": "10.0.0.2", 00:25:28.797 "adrfam": "ipv4", 00:25:28.797 "trsvcid": "4420", 00:25:28.797 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:28.797 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:28.797 "hdgst": false, 00:25:28.797 "ddgst": false 00:25:28.797 }, 00:25:28.797 "method": "bdev_nvme_attach_controller" 00:25:28.797 },{ 00:25:28.797 "params": { 00:25:28.797 "name": "Nvme6", 00:25:28.797 "trtype": "tcp", 00:25:28.797 "traddr": "10.0.0.2", 00:25:28.797 "adrfam": "ipv4", 00:25:28.797 "trsvcid": "4420", 00:25:28.797 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:28.797 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:28.797 "hdgst": false, 00:25:28.797 "ddgst": false 00:25:28.797 }, 00:25:28.797 "method": "bdev_nvme_attach_controller" 00:25:28.797 },{ 00:25:28.797 "params": { 00:25:28.797 "name": "Nvme7", 00:25:28.797 "trtype": "tcp", 00:25:28.797 "traddr": "10.0.0.2", 00:25:28.797 "adrfam": "ipv4", 00:25:28.797 "trsvcid": "4420", 00:25:28.797 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:28.797 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:28.797 "hdgst": false, 00:25:28.797 "ddgst": false 00:25:28.797 }, 00:25:28.797 "method": "bdev_nvme_attach_controller" 00:25:28.797 },{ 00:25:28.797 "params": { 00:25:28.797 "name": "Nvme8", 00:25:28.797 "trtype": "tcp", 00:25:28.797 "traddr": "10.0.0.2", 00:25:28.797 "adrfam": "ipv4", 00:25:28.797 "trsvcid": "4420", 00:25:28.797 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:28.797 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:28.797 
"hdgst": false, 00:25:28.797 "ddgst": false 00:25:28.797 }, 00:25:28.797 "method": "bdev_nvme_attach_controller" 00:25:28.797 },{ 00:25:28.797 "params": { 00:25:28.797 "name": "Nvme9", 00:25:28.797 "trtype": "tcp", 00:25:28.797 "traddr": "10.0.0.2", 00:25:28.797 "adrfam": "ipv4", 00:25:28.797 "trsvcid": "4420", 00:25:28.797 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:28.797 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:28.797 "hdgst": false, 00:25:28.797 "ddgst": false 00:25:28.797 }, 00:25:28.797 "method": "bdev_nvme_attach_controller" 00:25:28.797 },{ 00:25:28.797 "params": { 00:25:28.797 "name": "Nvme10", 00:25:28.797 "trtype": "tcp", 00:25:28.797 "traddr": "10.0.0.2", 00:25:28.797 "adrfam": "ipv4", 00:25:28.797 "trsvcid": "4420", 00:25:28.797 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:28.797 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:28.797 "hdgst": false, 00:25:28.797 "ddgst": false 00:25:28.797 }, 00:25:28.797 "method": "bdev_nvme_attach_controller" 00:25:28.797 }' 00:25:28.797 [2024-12-05 12:09:02.916228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.797 [2024-12-05 12:09:02.957407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.699 12:09:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:30.699 12:09:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:25:30.699 12:09:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:30.699 12:09:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.699 12:09:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:30.699 12:09:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.699 12:09:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 141699 00:25:30.699 12:09:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:25:30.699 12:09:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:25:31.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 141699 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:25:31.652 12:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 141421 00:25:31.652 12:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:25:31.652 12:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:31.652 12:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # config=() 00:25:31.652 12:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # local subsystem config 00:25:31.652 12:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:31.652 12:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:31.652 { 00:25:31.652 "params": { 00:25:31.652 "name": "Nvme$subsystem", 00:25:31.652 "trtype": "$TEST_TRANSPORT", 00:25:31.652 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.652 "adrfam": "ipv4", 00:25:31.652 "trsvcid": "$NVMF_PORT", 00:25:31.652 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.652 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.652 "hdgst": ${hdgst:-false}, 00:25:31.652 "ddgst": ${ddgst:-false} 00:25:31.652 }, 00:25:31.652 "method": "bdev_nvme_attach_controller" 00:25:31.652 } 00:25:31.652 EOF 00:25:31.652 )") 00:25:31.652 12:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:25:31.653 12:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:31.653 12:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:31.653 { 00:25:31.653 "params": { 00:25:31.653 "name": "Nvme$subsystem", 00:25:31.653 "trtype": "$TEST_TRANSPORT", 00:25:31.653 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.653 "adrfam": "ipv4", 00:25:31.653 "trsvcid": "$NVMF_PORT", 00:25:31.653 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.653 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.653 "hdgst": ${hdgst:-false}, 00:25:31.653 "ddgst": ${ddgst:-false} 00:25:31.653 }, 00:25:31.653 "method": "bdev_nvme_attach_controller" 00:25:31.653 } 00:25:31.653 EOF 00:25:31.653 )") 00:25:31.653 12:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:25:31.653 12:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:31.653 12:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:31.653 { 00:25:31.653 "params": { 00:25:31.653 "name": "Nvme$subsystem", 00:25:31.653 "trtype": "$TEST_TRANSPORT", 00:25:31.653 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.653 "adrfam": "ipv4", 00:25:31.653 "trsvcid": "$NVMF_PORT", 00:25:31.653 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.653 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.653 "hdgst": 
${hdgst:-false}, 00:25:31.653 "ddgst": ${ddgst:-false} 00:25:31.653 }, 00:25:31.653 "method": "bdev_nvme_attach_controller" 00:25:31.653 } 00:25:31.653 EOF 00:25:31.653 )") 00:25:31.653 12:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:25:31.653 12:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:31.653 12:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:31.653 { 00:25:31.653 "params": { 00:25:31.653 "name": "Nvme$subsystem", 00:25:31.653 "trtype": "$TEST_TRANSPORT", 00:25:31.653 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.653 "adrfam": "ipv4", 00:25:31.653 "trsvcid": "$NVMF_PORT", 00:25:31.653 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.653 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.653 "hdgst": ${hdgst:-false}, 00:25:31.653 "ddgst": ${ddgst:-false} 00:25:31.653 }, 00:25:31.653 "method": "bdev_nvme_attach_controller" 00:25:31.653 } 00:25:31.653 EOF 00:25:31.653 )") 00:25:31.653 12:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:25:31.653 12:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:31.653 12:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:31.653 { 00:25:31.653 "params": { 00:25:31.653 "name": "Nvme$subsystem", 00:25:31.653 "trtype": "$TEST_TRANSPORT", 00:25:31.653 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.653 "adrfam": "ipv4", 00:25:31.653 "trsvcid": "$NVMF_PORT", 00:25:31.653 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.653 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.653 "hdgst": ${hdgst:-false}, 00:25:31.653 "ddgst": ${ddgst:-false} 00:25:31.653 }, 00:25:31.653 "method": "bdev_nvme_attach_controller" 
00:25:31.653 } 00:25:31.653 EOF 00:25:31.653 )") 00:25:31.653 12:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:25:31.653 12:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:31.653 12:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:31.653 { 00:25:31.653 "params": { 00:25:31.653 "name": "Nvme$subsystem", 00:25:31.653 "trtype": "$TEST_TRANSPORT", 00:25:31.653 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.653 "adrfam": "ipv4", 00:25:31.653 "trsvcid": "$NVMF_PORT", 00:25:31.653 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.653 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.653 "hdgst": ${hdgst:-false}, 00:25:31.653 "ddgst": ${ddgst:-false} 00:25:31.653 }, 00:25:31.653 "method": "bdev_nvme_attach_controller" 00:25:31.653 } 00:25:31.653 EOF 00:25:31.653 )") 00:25:31.653 12:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:25:31.653 12:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:31.653 12:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:31.653 { 00:25:31.653 "params": { 00:25:31.653 "name": "Nvme$subsystem", 00:25:31.653 "trtype": "$TEST_TRANSPORT", 00:25:31.653 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.653 "adrfam": "ipv4", 00:25:31.653 "trsvcid": "$NVMF_PORT", 00:25:31.653 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.653 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.653 "hdgst": ${hdgst:-false}, 00:25:31.653 "ddgst": ${ddgst:-false} 00:25:31.653 }, 00:25:31.653 "method": "bdev_nvme_attach_controller" 00:25:31.653 } 00:25:31.653 EOF 00:25:31.653 )") 00:25:31.653 12:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@394 -- # cat 00:25:31.653 [2024-12-05 12:09:05.775867] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:25:31.653 [2024-12-05 12:09:05.775918] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142191 ] 00:25:31.653 12:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:31.653 12:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:31.653 { 00:25:31.653 "params": { 00:25:31.653 "name": "Nvme$subsystem", 00:25:31.653 "trtype": "$TEST_TRANSPORT", 00:25:31.653 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.653 "adrfam": "ipv4", 00:25:31.653 "trsvcid": "$NVMF_PORT", 00:25:31.653 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.653 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.653 "hdgst": ${hdgst:-false}, 00:25:31.653 "ddgst": ${ddgst:-false} 00:25:31.653 }, 00:25:31.653 "method": "bdev_nvme_attach_controller" 00:25:31.653 } 00:25:31.653 EOF 00:25:31.653 )") 00:25:31.653 12:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:25:31.653 12:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:31.653 12:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:31.653 { 00:25:31.653 "params": { 00:25:31.653 "name": "Nvme$subsystem", 00:25:31.653 "trtype": "$TEST_TRANSPORT", 00:25:31.653 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.653 "adrfam": "ipv4", 00:25:31.653 "trsvcid": "$NVMF_PORT", 00:25:31.653 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.653 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:25:31.653 "hdgst": ${hdgst:-false}, 00:25:31.653 "ddgst": ${ddgst:-false} 00:25:31.653 }, 00:25:31.653 "method": "bdev_nvme_attach_controller" 00:25:31.653 } 00:25:31.653 EOF 00:25:31.653 )") 00:25:31.653 12:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:25:31.653 12:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:31.653 12:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:31.653 { 00:25:31.653 "params": { 00:25:31.653 "name": "Nvme$subsystem", 00:25:31.653 "trtype": "$TEST_TRANSPORT", 00:25:31.653 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.653 "adrfam": "ipv4", 00:25:31.653 "trsvcid": "$NVMF_PORT", 00:25:31.653 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.653 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.654 "hdgst": ${hdgst:-false}, 00:25:31.654 "ddgst": ${ddgst:-false} 00:25:31.654 }, 00:25:31.654 "method": "bdev_nvme_attach_controller" 00:25:31.654 } 00:25:31.654 EOF 00:25:31.654 )") 00:25:31.654 12:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:25:31.654 12:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # jq . 
00:25:31.654 12:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@397 -- # IFS=, 00:25:31.654 12:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:25:31.654 "params": { 00:25:31.654 "name": "Nvme1", 00:25:31.654 "trtype": "tcp", 00:25:31.654 "traddr": "10.0.0.2", 00:25:31.654 "adrfam": "ipv4", 00:25:31.654 "trsvcid": "4420", 00:25:31.654 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:31.654 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:31.654 "hdgst": false, 00:25:31.654 "ddgst": false 00:25:31.654 }, 00:25:31.654 "method": "bdev_nvme_attach_controller" 00:25:31.654 },{ 00:25:31.654 "params": { 00:25:31.654 "name": "Nvme2", 00:25:31.654 "trtype": "tcp", 00:25:31.654 "traddr": "10.0.0.2", 00:25:31.654 "adrfam": "ipv4", 00:25:31.654 "trsvcid": "4420", 00:25:31.654 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:31.654 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:31.654 "hdgst": false, 00:25:31.654 "ddgst": false 00:25:31.654 }, 00:25:31.654 "method": "bdev_nvme_attach_controller" 00:25:31.654 },{ 00:25:31.654 "params": { 00:25:31.654 "name": "Nvme3", 00:25:31.654 "trtype": "tcp", 00:25:31.654 "traddr": "10.0.0.2", 00:25:31.654 "adrfam": "ipv4", 00:25:31.654 "trsvcid": "4420", 00:25:31.654 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:31.654 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:31.654 "hdgst": false, 00:25:31.654 "ddgst": false 00:25:31.654 }, 00:25:31.654 "method": "bdev_nvme_attach_controller" 00:25:31.654 },{ 00:25:31.654 "params": { 00:25:31.654 "name": "Nvme4", 00:25:31.654 "trtype": "tcp", 00:25:31.654 "traddr": "10.0.0.2", 00:25:31.654 "adrfam": "ipv4", 00:25:31.654 "trsvcid": "4420", 00:25:31.654 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:31.654 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:31.654 "hdgst": false, 00:25:31.654 "ddgst": false 00:25:31.654 }, 00:25:31.654 "method": "bdev_nvme_attach_controller" 00:25:31.654 },{ 00:25:31.654 "params": { 
00:25:31.654 "name": "Nvme5", 00:25:31.654 "trtype": "tcp", 00:25:31.654 "traddr": "10.0.0.2", 00:25:31.654 "adrfam": "ipv4", 00:25:31.654 "trsvcid": "4420", 00:25:31.654 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:31.654 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:31.654 "hdgst": false, 00:25:31.654 "ddgst": false 00:25:31.654 }, 00:25:31.654 "method": "bdev_nvme_attach_controller" 00:25:31.654 },{ 00:25:31.654 "params": { 00:25:31.654 "name": "Nvme6", 00:25:31.654 "trtype": "tcp", 00:25:31.654 "traddr": "10.0.0.2", 00:25:31.654 "adrfam": "ipv4", 00:25:31.654 "trsvcid": "4420", 00:25:31.654 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:31.654 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:31.654 "hdgst": false, 00:25:31.654 "ddgst": false 00:25:31.654 }, 00:25:31.654 "method": "bdev_nvme_attach_controller" 00:25:31.654 },{ 00:25:31.654 "params": { 00:25:31.654 "name": "Nvme7", 00:25:31.654 "trtype": "tcp", 00:25:31.654 "traddr": "10.0.0.2", 00:25:31.654 "adrfam": "ipv4", 00:25:31.654 "trsvcid": "4420", 00:25:31.654 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:31.654 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:31.654 "hdgst": false, 00:25:31.654 "ddgst": false 00:25:31.654 }, 00:25:31.654 "method": "bdev_nvme_attach_controller" 00:25:31.654 },{ 00:25:31.654 "params": { 00:25:31.654 "name": "Nvme8", 00:25:31.654 "trtype": "tcp", 00:25:31.654 "traddr": "10.0.0.2", 00:25:31.654 "adrfam": "ipv4", 00:25:31.654 "trsvcid": "4420", 00:25:31.654 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:31.654 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:31.654 "hdgst": false, 00:25:31.654 "ddgst": false 00:25:31.654 }, 00:25:31.654 "method": "bdev_nvme_attach_controller" 00:25:31.654 },{ 00:25:31.654 "params": { 00:25:31.654 "name": "Nvme9", 00:25:31.654 "trtype": "tcp", 00:25:31.654 "traddr": "10.0.0.2", 00:25:31.654 "adrfam": "ipv4", 00:25:31.654 "trsvcid": "4420", 00:25:31.654 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:31.654 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:25:31.654 "hdgst": false, 00:25:31.654 "ddgst": false 00:25:31.654 }, 00:25:31.654 "method": "bdev_nvme_attach_controller" 00:25:31.654 },{ 00:25:31.654 "params": { 00:25:31.654 "name": "Nvme10", 00:25:31.654 "trtype": "tcp", 00:25:31.654 "traddr": "10.0.0.2", 00:25:31.654 "adrfam": "ipv4", 00:25:31.654 "trsvcid": "4420", 00:25:31.654 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:31.654 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:31.654 "hdgst": false, 00:25:31.654 "ddgst": false 00:25:31.654 }, 00:25:31.654 "method": "bdev_nvme_attach_controller" 00:25:31.654 }' 00:25:31.911 [2024-12-05 12:09:05.855598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.911 [2024-12-05 12:09:05.896702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.283 Running I/O for 1 seconds... 00:25:34.671 2209.00 IOPS, 138.06 MiB/s 00:25:34.671 Latency(us) 00:25:34.671 [2024-12-05T11:09:08.867Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:34.671 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:34.671 Verification LBA range: start 0x0 length 0x400 00:25:34.671 Nvme1n1 : 1.03 247.54 15.47 0.00 0.00 256091.92 16852.11 224694.86 00:25:34.671 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:34.671 Verification LBA range: start 0x0 length 0x400 00:25:34.671 Nvme2n1 : 1.04 245.61 15.35 0.00 0.00 254236.04 18599.74 216705.71 00:25:34.671 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:34.671 Verification LBA range: start 0x0 length 0x400 00:25:34.671 Nvme3n1 : 1.11 288.12 18.01 0.00 0.00 213951.63 15042.07 208716.56 00:25:34.671 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:34.671 Verification LBA range: start 0x0 length 0x400 00:25:34.671 Nvme4n1 : 1.11 291.13 18.20 0.00 0.00 207597.03 7396.21 213709.78 00:25:34.671 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:25:34.671 Verification LBA range: start 0x0 length 0x400 00:25:34.671 Nvme5n1 : 1.12 285.52 17.84 0.00 0.00 209670.83 14917.24 214708.42 00:25:34.671 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:34.671 Verification LBA range: start 0x0 length 0x400 00:25:34.671 Nvme6n1 : 1.13 283.37 17.71 0.00 0.00 208415.11 15166.90 230686.72 00:25:34.671 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:34.671 Verification LBA range: start 0x0 length 0x400 00:25:34.671 Nvme7n1 : 1.12 289.73 18.11 0.00 0.00 200213.08 2356.18 204721.98 00:25:34.671 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:34.671 Verification LBA range: start 0x0 length 0x400 00:25:34.671 Nvme8n1 : 1.13 282.24 17.64 0.00 0.00 203153.31 13107.20 223696.21 00:25:34.671 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:34.671 Verification LBA range: start 0x0 length 0x400 00:25:34.671 Nvme9n1 : 1.14 285.08 17.82 0.00 0.00 198041.31 1981.68 220700.28 00:25:34.671 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:34.671 Verification LBA range: start 0x0 length 0x400 00:25:34.671 Nvme10n1 : 1.14 284.58 17.79 0.00 0.00 195471.37 438.86 236678.58 00:25:34.671 [2024-12-05T11:09:08.867Z] =================================================================================================================== 00:25:34.671 [2024-12-05T11:09:08.867Z] Total : 2782.91 173.93 0.00 0.00 212922.11 438.86 236678.58 00:25:34.671 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:25:34.671 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:25:34.671 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:25:34.671 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:34.671 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:25:34.671 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # nvmfcleanup 00:25:34.671 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@99 -- # sync 00:25:34.671 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:25:34.671 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # set +e 00:25:34.671 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # for i in {1..20} 00:25:34.671 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:25:34.671 rmmod nvme_tcp 00:25:34.671 rmmod nvme_fabrics 00:25:34.671 rmmod nvme_keyring 00:25:34.671 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:25:34.671 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # set -e 00:25:34.671 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # return 0 00:25:34.671 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # '[' -n 141421 ']' 00:25:34.671 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@337 -- # killprocess 141421 00:25:34.671 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 141421 ']' 00:25:34.671 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 141421 00:25:34.671 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:25:34.671 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:34.671 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 141421 00:25:34.930 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:34.930 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:34.930 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 141421' 00:25:34.930 killing process with pid 141421 00:25:34.930 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 141421 00:25:34.930 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 141421 00:25:35.189 12:09:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:25:35.189 12:09:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # nvmf_fini 00:25:35.189 12:09:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@264 -- # local dev 00:25:35.189 12:09:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@267 -- # remove_target_ns 00:25:35.189 12:09:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:35.189 12:09:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> 
/dev/null' 00:25:35.189 12:09:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:37.725 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@268 -- # delete_main_bridge 00:25:37.725 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:37.725 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@130 -- # return 0 00:25:37.725 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:25:37.725 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:25:37.725 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:25:37.725 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:25:37.725 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:25:37.725 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:25:37.726 
12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@41 -- # _dev=0 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@41 -- # dev_map=() 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@284 -- # iptr 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@542 -- # iptables-save 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@542 -- # iptables-restore 00:25:37.726 00:25:37.726 real 0m15.504s 00:25:37.726 user 0m34.561s 00:25:37.726 sys 0m5.910s 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:37.726 ************************************ 00:25:37.726 END TEST nvmf_shutdown_tc1 00:25:37.726 ************************************ 00:25:37.726 12:09:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:37.726 ************************************ 00:25:37.726 START TEST nvmf_shutdown_tc2 00:25:37.726 ************************************ 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # prepare_net_devs 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # local -g is_hw=no 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # remove_target_ns 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:37.726 12:09:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # xtrace_disable 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@131 -- # pci_devs=() 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@131 -- # local -a pci_devs 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@132 -- # pci_net_devs=() 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@133 -- # pci_drivers=() 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@133 -- # local -A pci_drivers 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@135 -- # net_devs=() 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@135 -- # local -ga net_devs 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@136 -- # e810=() 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@136 
-- # local -ga e810 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@137 -- # x722=() 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@137 -- # local -ga x722 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@138 -- # mlx=() 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@138 -- # local -ga mlx 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@157 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:25:37.726 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:37.727 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.727 
12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:37.727 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:37.727 Found net devices under 0000:86:00.0: cvl_0_0 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:37.727 Found net devices under 0000:86:00.1: cvl_0_1 
00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # is_hw=yes 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@257 -- # create_target_ns 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 
00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@27 -- # local -gA dev_map 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@28 -- # local -g _dev 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@44 -- # ips=() 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:25:37.727 12:09:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:25:37.727 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:25:37.728 12:09:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@11 -- # local val=167772161 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:25:37.728 10.0.0.1 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:25:37.728 12:09:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@11 -- # local val=167772162 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:25:37.728 10.0.0.2 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:25:37.728 
12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:25:37.728 12:09:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@38 -- # ping_ips 1 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@107 -- # local dev=initiator0 00:25:37.728 12:09:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:25:37.728 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 
10.0.0.1 00:25:37.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:37.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms 00:25:37.729 00:25:37.729 --- 10.0.0.1 ping statistics --- 00:25:37.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.729 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@107 -- # local dev=target0 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:25:37.729 12:09:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:25:37.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:37.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:25:37.729 00:25:37.729 --- 10.0.0.2 ping statistics --- 00:25:37.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.729 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@98 -- # (( pair++ )) 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # return 0 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:25:37.729 12:09:11 
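The `ping_ips` loop traced above calls a `ping_ip` helper that optionally runs the ping inside a network namespace: when a second argument names a command-prefix array (here `NVMF_TARGET_NS_CMD`, i.e. `ip netns exec nvmf_ns_spdk`), it is resolved by nameref and prepended via `eval`. A minimal sketch of that pattern, with a stubbed `ping` and an invented `NS_CMD` array so it runs without network access (this is an illustration of the traced behavior, not the actual nvmf/setup.sh source):

```shell
#!/usr/bin/env bash
# Sketch of the ping_ip pattern from the trace. NS_CMD and the ping stub
# are placeholders; the real run uses NVMF_TARGET_NS_CMD and real ping.

ping_ip() {
    local ip=$1 in_ns=${2:-} count=${3:-1}
    if [[ -n $in_ns ]]; then
        # Resolve the namespace command-prefix array by name (bash 4.3+ nameref),
        # matching the traced `local -n ns=NVMF_TARGET_NS_CMD`.
        local -n ns=$in_ns
    fi
    # ${ns[*]} expands to nothing when no namespace prefix was given.
    eval "${ns[*]} ping -c $count $ip"
}

# Stub ping so the sketch is runnable offline.
ping() { echo "ping $*"; }
NS_CMD=(echo netns:)

ping_ip 10.0.0.1 NS_CMD    # prefix applied: "netns: ping -c 1 10.0.0.1"
ping_ip 10.0.0.2           # no namespace: stub ping runs directly
```

The same structure explains why the trace shows both `eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1'` for the target-side address and a bare `eval ' ping -c 1 10.0.0.2'` when `in_ns` is empty.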
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@107 -- # local dev=initiator0 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:25:37.729 12:09:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@107 -- # local dev=initiator1 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # return 1 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # dev= 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@169 -- # return 0 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:37.729 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:37.729 12:09:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:37.730 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:25:37.730 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@107 -- # local dev=target0 00:25:37.730 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:25:37.730 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:25:37.730 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:25:37.730 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:25:37.730 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:25:37.730 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:25:37.730 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:25:37.730 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:25:37.730 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:25:37.730 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:37.730 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:25:37.730 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:25:37.730 
12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:37.730 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:37.730 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:37.730 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:37.730 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # get_net_dev target1 00:25:37.730 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@107 -- # local dev=target1 00:25:37.730 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:25:37.730 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:25:37.730 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # return 1 00:25:37.730 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # dev= 00:25:37.730 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@169 -- # return 0 00:25:37.730 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:25:37.730 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:37.730 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:25:37.730 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:25:37.730 12:09:11 
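Throughout the trace, `get_ip_address` recovers an interface's IP from `/sys/class/net/<dev>/ifalias` (where the setup script stored it), and returns nothing when the device does not exist, which is how `NVMF_SECOND_INITIATOR_IP` and `NVMF_SECOND_TARGET_IP` end up empty above. A minimal runnable sketch of that lookup; the `SYSFS_NET` override and the fake sysfs tree are invented here purely so the example is testable without real devices:

```shell
#!/usr/bin/env bash
# Sketch of the ifalias lookup seen in the trace. SYSFS_NET is an
# illustration-only override; the real helper reads /sys/class/net.

get_ip_address() {
    local dev=$1 sysfs=${SYSFS_NET:-/sys/class/net} ip
    # Missing device or missing ifalias file simply yields no output,
    # mirroring the trace's `dev=` / empty-IP fall-through.
    ip=$(cat "$sysfs/$dev/ifalias" 2>/dev/null)
    [[ -n $ip ]] && echo "$ip"
}

# Exercise the helper against a fake sysfs tree.
SYSFS_NET=$(mktemp -d)
mkdir -p "$SYSFS_NET/cvl_0_0"
echo 10.0.0.1 > "$SYSFS_NET/cvl_0_0/ifalias"
get_ip_address cvl_0_0    # prints 10.0.0.1
```

For namespaced devices the trace wraps the same `cat` in `ip netns exec nvmf_ns_spdk`, which is why `target0` resolves to 10.0.0.2 only through the namespace.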
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:37.730 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:25:37.730 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:25:37.730 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:25:37.730 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:37.730 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:37.730 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:37.730 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # nvmfpid=143625 00:25:37.730 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # waitforlisten 143625 00:25:37.730 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:37.730 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 143625 ']' 00:25:37.730 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:37.730 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:37.730 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:37.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:37.730 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:37.730 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:37.730 [2024-12-05 12:09:11.900843] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:25:37.730 [2024-12-05 12:09:11.900892] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:37.989 [2024-12-05 12:09:11.981423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:37.989 [2024-12-05 12:09:12.023559] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:37.989 [2024-12-05 12:09:12.023597] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:37.989 [2024-12-05 12:09:12.023604] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:37.989 [2024-12-05 12:09:12.023610] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:37.989 [2024-12-05 12:09:12.023615] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:37.989 [2024-12-05 12:09:12.025089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:37.989 [2024-12-05 12:09:12.025195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:37.989 [2024-12-05 12:09:12.025301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:37.989 [2024-12-05 12:09:12.025301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:38.555 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:38.555 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:25:38.555 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:38.555 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:38.555 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:38.814 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:38.814 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:38.814 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.814 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:38.814 [2024-12-05 12:09:12.777208] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:38.814 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.814 12:09:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:38.814 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:38.814 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:38.814 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:38.814 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:38.814 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:38.814 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:38.814 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:38.814 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:38.814 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:38.815 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:38.815 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:38.815 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:38.815 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:38.815 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:25:38.815 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:38.815 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:38.815 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:38.815 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:38.815 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:38.815 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:38.815 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:38.815 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:38.815 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:38.815 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:38.815 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:25:38.815 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.815 12:09:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:38.815 Malloc1 00:25:38.815 [2024-12-05 12:09:12.890928] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:38.815 Malloc2 00:25:38.815 Malloc3 00:25:38.815 Malloc4 00:25:39.074 Malloc5 00:25:39.074 Malloc6 00:25:39.074 Malloc7 00:25:39.074 Malloc8 00:25:39.074 Malloc9 
00:25:39.074 Malloc10 00:25:39.334 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.334 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:39.334 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:39.334 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:39.334 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=143905 00:25:39.334 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 143905 /var/tmp/bdevperf.sock 00:25:39.334 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 143905 ']' 00:25:39.334 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:39.334 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:39.334 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:39.334 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:39.334 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:25:39.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:39.334 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # config=() 00:25:39.334 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:39.334 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # local subsystem config 00:25:39.334 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:39.334 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:39.334 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:39.334 { 00:25:39.334 "params": { 00:25:39.334 "name": "Nvme$subsystem", 00:25:39.334 "trtype": "$TEST_TRANSPORT", 00:25:39.334 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.334 "adrfam": "ipv4", 00:25:39.334 "trsvcid": "$NVMF_PORT", 00:25:39.334 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.334 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.334 "hdgst": ${hdgst:-false}, 00:25:39.334 "ddgst": ${ddgst:-false} 00:25:39.334 }, 00:25:39.334 "method": "bdev_nvme_attach_controller" 00:25:39.334 } 00:25:39.334 EOF 00:25:39.334 )") 00:25:39.334 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:25:39.334 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:39.334 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:39.334 { 00:25:39.334 "params": { 00:25:39.334 "name": "Nvme$subsystem", 00:25:39.334 "trtype": "$TEST_TRANSPORT", 00:25:39.334 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.334 
"adrfam": "ipv4", 00:25:39.334 "trsvcid": "$NVMF_PORT", 00:25:39.334 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.334 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.334 "hdgst": ${hdgst:-false}, 00:25:39.334 "ddgst": ${ddgst:-false} 00:25:39.334 }, 00:25:39.334 "method": "bdev_nvme_attach_controller" 00:25:39.334 } 00:25:39.334 EOF 00:25:39.334 )") 00:25:39.334 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:25:39.334 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:39.334 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:39.334 { 00:25:39.334 "params": { 00:25:39.334 "name": "Nvme$subsystem", 00:25:39.334 "trtype": "$TEST_TRANSPORT", 00:25:39.334 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.334 "adrfam": "ipv4", 00:25:39.334 "trsvcid": "$NVMF_PORT", 00:25:39.334 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.334 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.334 "hdgst": ${hdgst:-false}, 00:25:39.334 "ddgst": ${ddgst:-false} 00:25:39.334 }, 00:25:39.334 "method": "bdev_nvme_attach_controller" 00:25:39.334 } 00:25:39.334 EOF 00:25:39.334 )") 00:25:39.334 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:25:39.334 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:39.334 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:39.334 { 00:25:39.334 "params": { 00:25:39.334 "name": "Nvme$subsystem", 00:25:39.334 "trtype": "$TEST_TRANSPORT", 00:25:39.334 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.334 "adrfam": "ipv4", 00:25:39.334 "trsvcid": "$NVMF_PORT", 00:25:39.334 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:25:39.335 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.335 "hdgst": ${hdgst:-false}, 00:25:39.335 "ddgst": ${ddgst:-false} 00:25:39.335 }, 00:25:39.335 "method": "bdev_nvme_attach_controller" 00:25:39.335 } 00:25:39.335 EOF 00:25:39.335 )") 00:25:39.335 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:25:39.335 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:39.335 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:39.335 { 00:25:39.335 "params": { 00:25:39.335 "name": "Nvme$subsystem", 00:25:39.335 "trtype": "$TEST_TRANSPORT", 00:25:39.335 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.335 "adrfam": "ipv4", 00:25:39.335 "trsvcid": "$NVMF_PORT", 00:25:39.335 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.335 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.335 "hdgst": ${hdgst:-false}, 00:25:39.335 "ddgst": ${ddgst:-false} 00:25:39.335 }, 00:25:39.335 "method": "bdev_nvme_attach_controller" 00:25:39.335 } 00:25:39.335 EOF 00:25:39.335 )") 00:25:39.335 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:25:39.335 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:39.335 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:39.335 { 00:25:39.335 "params": { 00:25:39.335 "name": "Nvme$subsystem", 00:25:39.335 "trtype": "$TEST_TRANSPORT", 00:25:39.335 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.335 "adrfam": "ipv4", 00:25:39.335 "trsvcid": "$NVMF_PORT", 00:25:39.335 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.335 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.335 "hdgst": ${hdgst:-false}, 00:25:39.335 "ddgst": 
${ddgst:-false} 00:25:39.335 }, 00:25:39.335 "method": "bdev_nvme_attach_controller" 00:25:39.335 } 00:25:39.335 EOF 00:25:39.335 )") 00:25:39.335 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:25:39.335 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:39.335 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:39.335 { 00:25:39.335 "params": { 00:25:39.335 "name": "Nvme$subsystem", 00:25:39.335 "trtype": "$TEST_TRANSPORT", 00:25:39.335 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.335 "adrfam": "ipv4", 00:25:39.335 "trsvcid": "$NVMF_PORT", 00:25:39.335 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.335 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.335 "hdgst": ${hdgst:-false}, 00:25:39.335 "ddgst": ${ddgst:-false} 00:25:39.335 }, 00:25:39.335 "method": "bdev_nvme_attach_controller" 00:25:39.335 } 00:25:39.335 EOF 00:25:39.335 )") 00:25:39.335 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:25:39.335 [2024-12-05 12:09:13.377175] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:25:39.335 [2024-12-05 12:09:13.377226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143905 ] 00:25:39.335 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:39.335 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:39.335 { 00:25:39.335 "params": { 00:25:39.335 "name": "Nvme$subsystem", 00:25:39.335 "trtype": "$TEST_TRANSPORT", 00:25:39.335 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.335 "adrfam": "ipv4", 00:25:39.335 "trsvcid": "$NVMF_PORT", 00:25:39.335 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.335 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.335 "hdgst": ${hdgst:-false}, 00:25:39.335 "ddgst": ${ddgst:-false} 00:25:39.335 }, 00:25:39.335 "method": "bdev_nvme_attach_controller" 00:25:39.335 } 00:25:39.335 EOF 00:25:39.335 )") 00:25:39.335 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:25:39.335 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:39.335 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:39.335 { 00:25:39.335 "params": { 00:25:39.335 "name": "Nvme$subsystem", 00:25:39.335 "trtype": "$TEST_TRANSPORT", 00:25:39.335 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.335 "adrfam": "ipv4", 00:25:39.335 "trsvcid": "$NVMF_PORT", 00:25:39.335 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.335 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.335 "hdgst": ${hdgst:-false}, 00:25:39.335 "ddgst": ${ddgst:-false} 00:25:39.335 }, 00:25:39.335 "method": 
"bdev_nvme_attach_controller" 00:25:39.335 } 00:25:39.335 EOF 00:25:39.335 )") 00:25:39.335 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:25:39.335 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:39.335 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:39.335 { 00:25:39.335 "params": { 00:25:39.335 "name": "Nvme$subsystem", 00:25:39.335 "trtype": "$TEST_TRANSPORT", 00:25:39.335 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:39.335 "adrfam": "ipv4", 00:25:39.335 "trsvcid": "$NVMF_PORT", 00:25:39.335 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:39.335 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:39.335 "hdgst": ${hdgst:-false}, 00:25:39.335 "ddgst": ${ddgst:-false} 00:25:39.335 }, 00:25:39.335 "method": "bdev_nvme_attach_controller" 00:25:39.335 } 00:25:39.335 EOF 00:25:39.335 )") 00:25:39.335 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:25:39.335 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@396 -- # jq . 
00:25:39.335 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@397 -- # IFS=, 00:25:39.335 12:09:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:25:39.335 "params": { 00:25:39.335 "name": "Nvme1", 00:25:39.335 "trtype": "tcp", 00:25:39.335 "traddr": "10.0.0.2", 00:25:39.335 "adrfam": "ipv4", 00:25:39.335 "trsvcid": "4420", 00:25:39.335 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:39.335 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:39.335 "hdgst": false, 00:25:39.335 "ddgst": false 00:25:39.335 }, 00:25:39.335 "method": "bdev_nvme_attach_controller" 00:25:39.335 },{ 00:25:39.335 "params": { 00:25:39.335 "name": "Nvme2", 00:25:39.335 "trtype": "tcp", 00:25:39.335 "traddr": "10.0.0.2", 00:25:39.335 "adrfam": "ipv4", 00:25:39.335 "trsvcid": "4420", 00:25:39.335 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:39.335 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:39.335 "hdgst": false, 00:25:39.335 "ddgst": false 00:25:39.335 }, 00:25:39.335 "method": "bdev_nvme_attach_controller" 00:25:39.335 },{ 00:25:39.335 "params": { 00:25:39.335 "name": "Nvme3", 00:25:39.335 "trtype": "tcp", 00:25:39.335 "traddr": "10.0.0.2", 00:25:39.335 "adrfam": "ipv4", 00:25:39.335 "trsvcid": "4420", 00:25:39.335 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:39.335 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:39.335 "hdgst": false, 00:25:39.335 "ddgst": false 00:25:39.335 }, 00:25:39.335 "method": "bdev_nvme_attach_controller" 00:25:39.335 },{ 00:25:39.335 "params": { 00:25:39.335 "name": "Nvme4", 00:25:39.335 "trtype": "tcp", 00:25:39.335 "traddr": "10.0.0.2", 00:25:39.335 "adrfam": "ipv4", 00:25:39.335 "trsvcid": "4420", 00:25:39.335 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:39.335 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:39.335 "hdgst": false, 00:25:39.335 "ddgst": false 00:25:39.335 }, 00:25:39.335 "method": "bdev_nvme_attach_controller" 00:25:39.335 },{ 00:25:39.335 "params": { 
00:25:39.335 "name": "Nvme5", 00:25:39.335 "trtype": "tcp", 00:25:39.335 "traddr": "10.0.0.2", 00:25:39.335 "adrfam": "ipv4", 00:25:39.335 "trsvcid": "4420", 00:25:39.335 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:39.335 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:39.335 "hdgst": false, 00:25:39.335 "ddgst": false 00:25:39.335 }, 00:25:39.335 "method": "bdev_nvme_attach_controller" 00:25:39.335 },{ 00:25:39.335 "params": { 00:25:39.335 "name": "Nvme6", 00:25:39.335 "trtype": "tcp", 00:25:39.335 "traddr": "10.0.0.2", 00:25:39.335 "adrfam": "ipv4", 00:25:39.335 "trsvcid": "4420", 00:25:39.335 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:39.335 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:39.335 "hdgst": false, 00:25:39.335 "ddgst": false 00:25:39.335 }, 00:25:39.335 "method": "bdev_nvme_attach_controller" 00:25:39.335 },{ 00:25:39.335 "params": { 00:25:39.335 "name": "Nvme7", 00:25:39.335 "trtype": "tcp", 00:25:39.335 "traddr": "10.0.0.2", 00:25:39.335 "adrfam": "ipv4", 00:25:39.335 "trsvcid": "4420", 00:25:39.335 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:39.335 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:39.336 "hdgst": false, 00:25:39.336 "ddgst": false 00:25:39.336 }, 00:25:39.336 "method": "bdev_nvme_attach_controller" 00:25:39.336 },{ 00:25:39.336 "params": { 00:25:39.336 "name": "Nvme8", 00:25:39.336 "trtype": "tcp", 00:25:39.336 "traddr": "10.0.0.2", 00:25:39.336 "adrfam": "ipv4", 00:25:39.336 "trsvcid": "4420", 00:25:39.336 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:39.336 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:39.336 "hdgst": false, 00:25:39.336 "ddgst": false 00:25:39.336 }, 00:25:39.336 "method": "bdev_nvme_attach_controller" 00:25:39.336 },{ 00:25:39.336 "params": { 00:25:39.336 "name": "Nvme9", 00:25:39.336 "trtype": "tcp", 00:25:39.336 "traddr": "10.0.0.2", 00:25:39.336 "adrfam": "ipv4", 00:25:39.336 "trsvcid": "4420", 00:25:39.336 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:39.336 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:25:39.336 "hdgst": false, 00:25:39.336 "ddgst": false 00:25:39.336 }, 00:25:39.336 "method": "bdev_nvme_attach_controller" 00:25:39.336 },{ 00:25:39.336 "params": { 00:25:39.336 "name": "Nvme10", 00:25:39.336 "trtype": "tcp", 00:25:39.336 "traddr": "10.0.0.2", 00:25:39.336 "adrfam": "ipv4", 00:25:39.336 "trsvcid": "4420", 00:25:39.336 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:39.336 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:39.336 "hdgst": false, 00:25:39.336 "ddgst": false 00:25:39.336 }, 00:25:39.336 "method": "bdev_nvme_attach_controller" 00:25:39.336 }' 00:25:39.336 [2024-12-05 12:09:13.457116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.336 [2024-12-05 12:09:13.499091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:41.238 Running I/O for 10 seconds... 00:25:41.238 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:41.238 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:25:41.238 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:41.238 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.238 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:41.238 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.238 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:41.238 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:41.238 12:09:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:25:41.238 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:25:41.238 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:25:41.238 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:25:41.238 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:41.238 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:41.238 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:41.238 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.238 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:41.238 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.238 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:25:41.238 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:25:41.238 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:25:41.496 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:25:41.496 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:41.496 12:09:15 
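The `waitforio` helper traced here polls `bdev_get_iostat` for Nvme1n1 until at least 100 reads have completed, with a 10-try budget and a 0.25 s sleep between polls (the log shows 67 ops on the first pass and 131 after one sleep). A self-contained sketch of that loop, with the RPC call replaced by a hypothetical stub counter so it runs anywhere:

```shell
# Sketch of target/shutdown.sh's waitforio: poll a bdev's read-op counter
# until it reaches 100 ops or the 10-try budget runs out.
io_count=3
read_io_stub() {
    # Stands in for: rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
    #                | jq -r '.bdevs[0].num_read_ops'
    io_count=$((io_count * 2))   # pretend I/O accumulates between polls
}

waitforio() {
    local ret=1 i
    for ((i = 10; i != 0; i--)); do
        read_io_stub
        if [ "$io_count" -ge 100 ]; then
            ret=0                # enough I/O observed: the bdev is live
            break
        fi
        sleep 0.05               # the real helper sleeps 0.25 s
    done
    return $ret
}

waitforio
```

Returning 0 here is what lets the test proceed to killing the bdevperf process mid-I/O, which is the point of the shutdown test.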
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:41.496 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:41.496 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.496 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:41.496 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.496 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:25:41.496 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:25:41.496 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:25:41.496 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:25:41.496 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:25:41.496 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 143905 00:25:41.496 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 143905 ']' 00:25:41.496 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 143905 00:25:41.496 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:25:41.496 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:41.496 12:09:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 143905 00:25:41.754 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:41.754 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:41.754 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 143905' 00:25:41.754 killing process with pid 143905 00:25:41.754 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 143905 00:25:41.754 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 143905 00:25:41.754 Received shutdown signal, test time was about 0.768550 seconds 00:25:41.754 00:25:41.754 Latency(us) 00:25:41.754 [2024-12-05T11:09:15.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:41.754 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:41.754 Verification LBA range: start 0x0 length 0x400 00:25:41.754 Nvme1n1 : 0.74 259.48 16.22 0.00 0.00 243207.48 18599.74 216705.71 00:25:41.754 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:41.754 Verification LBA range: start 0x0 length 0x400 00:25:41.754 Nvme2n1 : 0.76 334.81 20.93 0.00 0.00 183381.21 10173.68 213709.78 00:25:41.754 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:41.754 Verification LBA range: start 0x0 length 0x400 00:25:41.754 Nvme3n1 : 0.77 333.38 20.84 0.00 0.00 181180.95 23218.47 205720.62 00:25:41.754 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:41.754 Verification LBA range: start 0x0 length 0x400 00:25:41.754 Nvme4n1 : 0.75 287.68 17.98 0.00 0.00 200451.76 
15915.89 210713.84 00:25:41.754 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:41.754 Verification LBA range: start 0x0 length 0x400 00:25:41.754 Nvme5n1 : 0.73 262.01 16.38 0.00 0.00 219974.38 17476.27 203723.34 00:25:41.754 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:41.754 Verification LBA range: start 0x0 length 0x400 00:25:41.754 Nvme6n1 : 0.75 254.46 15.90 0.00 0.00 222208.33 15978.30 219701.64 00:25:41.754 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:41.754 Verification LBA range: start 0x0 length 0x400 00:25:41.754 Nvme7n1 : 0.75 257.44 16.09 0.00 0.00 213863.21 14230.67 216705.71 00:25:41.754 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:41.754 Verification LBA range: start 0x0 length 0x400 00:25:41.754 Nvme8n1 : 0.74 258.37 16.15 0.00 0.00 208124.91 15978.30 198730.12 00:25:41.754 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:41.754 Verification LBA range: start 0x0 length 0x400 00:25:41.754 Nvme9n1 : 0.76 253.12 15.82 0.00 0.00 208138.57 17101.78 221698.93 00:25:41.754 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:41.754 Verification LBA range: start 0x0 length 0x400 00:25:41.754 Nvme10n1 : 0.76 252.11 15.76 0.00 0.00 204023.39 18350.08 241671.80 00:25:41.754 [2024-12-05T11:09:15.950Z] =================================================================================================================== 00:25:41.754 [2024-12-05T11:09:15.950Z] Total : 2752.87 172.05 0.00 0.00 206748.80 10173.68 241671.80 00:25:42.013 12:09:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:25:42.947 12:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 143625 00:25:42.947 12:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # 
stoptarget 00:25:42.947 12:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:25:42.947 12:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:42.947 12:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:42.947 12:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:25:42.947 12:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # nvmfcleanup 00:25:42.947 12:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@99 -- # sync 00:25:42.947 12:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:25:42.947 12:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # set +e 00:25:42.947 12:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # for i in {1..20} 00:25:42.947 12:09:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:25:42.947 rmmod nvme_tcp 00:25:42.947 rmmod nvme_fabrics 00:25:42.947 rmmod nvme_keyring 00:25:42.947 12:09:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:25:42.947 12:09:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # set -e 00:25:42.947 12:09:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # return 0 00:25:42.947 12:09:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # '[' -n 143625 ']' 
00:25:42.947 12:09:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@337 -- # killprocess 143625 00:25:42.947 12:09:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 143625 ']' 00:25:42.947 12:09:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 143625 00:25:42.947 12:09:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:25:42.947 12:09:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:42.948 12:09:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 143625 00:25:42.948 12:09:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:42.948 12:09:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:42.948 12:09:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 143625' 00:25:42.948 killing process with pid 143625 00:25:42.948 12:09:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 143625 00:25:42.948 12:09:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 143625 00:25:43.515 12:09:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:25:43.515 12:09:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # nvmf_fini 00:25:43.515 12:09:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@264 -- # local dev 00:25:43.515 12:09:17 
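Both teardowns above go through `autotest_common.sh`'s `killprocess` (pids 143905 and 143625 in this run): it verifies the pid is set and alive with `kill -0`, reads the process's comm name with `ps` so it never kills a `sudo` wrapper, then signals and reaps it. A condensed sketch of that guard, using a disposable `sleep` as a stand-in for the bdevperf/nvmf target process:

```shell
# Sketch of autotest_common.sh's killprocess: refuse empty pids and
# processes named "sudo", otherwise signal the pid and reap it.
killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1           # still alive?
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" != "sudo" ] || return 1        # never kill the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                  # reap; ignore the signal status
}

sleep 60 &      # disposable stand-in for the target process
victim=$!
killprocess "$victim"
```

The `wait` matters: without it the killed target would linger as a zombie and a later `kill -0` liveness probe in the suite could misreport it as still running.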
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@267 -- # remove_target_ns 00:25:43.515 12:09:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:43.515 12:09:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:43.515 12:09:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:45.419 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@268 -- # delete_main_bridge 00:25:45.419 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:45.419 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@130 -- # return 0 00:25:45.419 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:25:45.419 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:25:45.419 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:25:45.419 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:25:45.419 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:25:45.419 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:25:45.419 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:25:45.419 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:25:45.419 
12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:25:45.419 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:25:45.419 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:25:45.419 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:25:45.419 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:25:45.419 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:25:45.419 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:25:45.419 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:25:45.419 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:25:45.419 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@41 -- # _dev=0 00:25:45.419 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@41 -- # dev_map=() 00:25:45.419 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@284 -- # iptr 00:25:45.419 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@542 -- # iptables-save 00:25:45.419 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:25:45.419 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@542 -- # iptables-restore 00:25:45.419 00:25:45.419 real 0m8.157s 00:25:45.419 user 0m24.492s 00:25:45.419 sys 0m1.405s 00:25:45.419 12:09:19 
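The `nvmf_fini` trace ends by flushing addresses on each mapped device and reloading the firewall minus SPDK rules (`iptables-save | grep -v SPDK_NVMF | iptables-restore`). The `flush_ip` helper's indirection — building a command string and `eval`-ing it with an optional `ip netns exec` prefix, exactly as the `eval ' ip addr flush dev cvl_0_0'` lines show — can be sketched without root or real interfaces by shadowing `ip` with a stub function (the namespace name below is hypothetical):

```shell
# Sketch of nvmf/setup.sh's flush_ip: run `ip addr flush dev <dev>`,
# optionally inside a network namespace. `ip` is shadowed by a function
# here so the sketch only prints the command it would execute.
ip() { echo "ip $*"; }    # stub; the real helper invokes /sbin/ip

flush_ip() {
    local dev=$1 in_ns=${2:-}
    if [ -n "$in_ns" ]; then
        in_ns="ip netns exec $in_ns"
    fi
    eval "$in_ns ip addr flush dev $dev"
}

flush_ip cvl_0_0
flush_ip cvl_0_1 spdk_ns
```

The `eval` is what lets one helper serve both the host-side devices (empty `in_ns`, as in this log) and target-namespace devices without duplicating the command.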
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:45.419 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:45.419 ************************************ 00:25:45.419 END TEST nvmf_shutdown_tc2 00:25:45.419 ************************************ 00:25:45.419 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:25:45.419 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:45.419 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:45.419 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:45.679 ************************************ 00:25:45.679 START TEST nvmf_shutdown_tc3 00:25:45.679 ************************************ 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # prepare_net_devs 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # local -g is_hw=no 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@260 -- # remove_target_ns 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # xtrace_disable 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@131 -- # pci_devs=() 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@131 -- # local -a pci_devs 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@132 -- # pci_net_devs=() 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@133 -- # pci_drivers=() 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@133 -- # local -A pci_drivers 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@135 -- # net_devs=() 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@135 -- # local -ga net_devs 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@136 -- # e810=() 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@136 -- # local -ga e810 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@137 -- # x722=() 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@137 -- # local -ga x722 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@138 -- # mlx=() 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@138 -- # local -ga mlx 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:45.680 12:09:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:45.680 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:25:45.680 12:09:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:45.680 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@226 -- # for pci in 
"${pci_devs[@]}" 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:45.680 Found net devices under 0000:86:00.0: cvl_0_0 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@238 -- # (( 1 == 
0 )) 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:45.680 Found net devices under 0000:86:00.1: cvl_0_1 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.680 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # is_hw=yes 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@257 -- # create_target_ns 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:25:45.681 12:09:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@27 -- # local -gA dev_map 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@28 -- # local -g _dev 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:45.681 12:09:19 
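The `create_target_ns` sequence traced above reduces to a handful of ip(8) commands: create the namespace, build the `NVMF_TARGET_NS_CMD` prefix array, and bring loopback up inside it. A minimal dry-run sketch (the `echo` stub stands in for the real `ip` binary, since the actual commands need root; the namespace name matches the log):

```shell
# Dry-run sketch of create_target_ns from nvmf/setup.sh.
# Assumption: a real run sets IP_CMD=ip and executes as root.
IP_CMD=${IP_CMD:-echo ip}                       # swap in "ip" for a real run
ns=nvmf_ns_spdk
$IP_CMD netns add "$ns"                         # create the target namespace
NVMF_TARGET_NS_CMD=($IP_CMD netns exec "$ns")   # prefix for in-namespace commands
"${NVMF_TARGET_NS_CMD[@]}" ip link set lo up    # bring up loopback inside it
```

Keeping the namespace prefix in an array is what lets later helpers run the same command either in the host (`in_ns=''`) or inside the target namespace, as the `eval`/`local -n ns=...` pattern in the trace shows.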
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@44 -- # ips=() 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:25:45.681 12:09:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@11 -- # local val=167772161 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@210 -- # tee 
/sys/class/net/cvl_0_0/ifalias 00:25:45.681 10.0.0.1 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@11 -- # local val=167772162 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:25:45.681 10.0.0.2 
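The `val_to_ip` calls above turn the integer pool value (167772161 = 0x0A000001) into a dotted quad, and `ips=("$ip" $((++ip)))` hands consecutive addresses to the initiator and target sides of each pair. A self-contained reconstruction of that conversion:

```shell
# Reconstruction of val_to_ip from nvmf/setup.sh: split a 32-bit integer
# into its four octets and print a dotted quad.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) $((  val        & 0xff ))
}

ip=167772161                  # 0x0A000001, start of the ip_pool
ips=("$ip" $((++ip)))         # initiator gets the first IP, target the next
val_to_ip "${ips[0]}"         # -> 10.0.0.1 (cvl_0_0, initiator side)
val_to_ip "${ips[1]}"         # -> 10.0.0.2 (cvl_0_1, target side)
```

This also explains the loop guard `(( (_dev + no) * 2 <= 255 ))` earlier in the trace: each interface pair consumes two addresses from the pool.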
00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:45.681 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:25:45.682 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@82 -- 
# ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@38 -- # ping_ips 1 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 
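The `ipts` call at setup.sh@82 expands (common.sh@541) into an iptables rule tagged with an `SPDK_NVMF:<args>` comment, so teardown can later find and delete exactly the rules the test suite inserted. A sketch of that wrapper, with an `echo` stub in place of the real iptables binary (which needs root):

```shell
# Sketch of the ipts wrapper reconstructed from the trace.
# Assumption: a real run sets IPTABLES=iptables and executes as root.
IPTABLES=${IPTABLES:-echo iptables}
ipts() {
    # Tag every rule with its own argument string for later cleanup.
    $IPTABLES "$@" -m comment --comment "SPDK_NVMF:$*"
}

# The rule from the log: accept NVMe/TCP traffic (port 4420) on the initiator NIC.
ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
```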
00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@107 -- # local dev=initiator0 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:45.942 12:09:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:25:45.942 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:45.942 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.420 ms 00:25:45.942 00:25:45.942 --- 10.0.0.1 ping statistics --- 00:25:45.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.942 rtt min/avg/max/mdev = 0.420/0.420/0.420/0.000 ms 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@107 -- # local dev=target0 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 
00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:25:45.942 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:45.942 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:25:45.942 00:25:45.942 --- 10.0.0.2 ping statistics --- 00:25:45.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.942 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:25:45.942 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@98 -- # (( pair++ )) 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # return 0 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:25:45.943 12:09:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@107 -- # local dev=initiator0 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:25:45.943 12:09:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@107 -- # local dev=initiator1 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # return 1 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # dev= 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@169 -- # return 0 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:45.943 12:09:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@107 -- # local dev=target0 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:25:45.943 12:09:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:25:45.943 12:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:25:45.943 12:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:25:45.943 12:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:25:45.943 12:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:45.943 12:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:25:45.943 12:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:25:45.943 
12:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:45.943 12:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:45.943 12:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:45.943 12:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:45.943 12:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # get_net_dev target1 00:25:45.943 12:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@107 -- # local dev=target1 00:25:45.943 12:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:25:45.943 12:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:25:45.943 12:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # return 1 00:25:45.943 12:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # dev= 00:25:45.943 12:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@169 -- # return 0 00:25:45.943 12:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:25:45.943 12:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:45.943 12:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:25:45.943 12:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:25:45.943 12:09:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:45.943 12:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:25:45.943 12:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:25:45.943 12:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:25:45.943 12:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:45.943 12:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:45.943 12:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:45.943 12:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # nvmfpid=145214 00:25:45.943 12:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # waitforlisten 145214 00:25:45.943 12:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:45.943 12:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 145214 ']' 00:25:45.943 12:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:45.943 12:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:45.944 12:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:45.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:45.944 12:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:45.944 12:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:45.944 [2024-12-05 12:09:20.114390] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:25:45.944 [2024-12-05 12:09:20.114434] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:46.206 [2024-12-05 12:09:20.194245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:46.206 [2024-12-05 12:09:20.237518] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:46.206 [2024-12-05 12:09:20.237556] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:46.206 [2024-12-05 12:09:20.237563] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:46.206 [2024-12-05 12:09:20.237569] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:46.206 [2024-12-05 12:09:20.237575] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:46.206 [2024-12-05 12:09:20.239057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:46.206 [2024-12-05 12:09:20.239162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:46.206 [2024-12-05 12:09:20.239269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:46.206 [2024-12-05 12:09:20.239269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:46.772 12:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:46.772 12:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:25:46.772 12:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:46.772 12:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:46.772 12:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:47.030 12:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:47.030 12:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:47.030 12:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.030 12:09:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:47.030 [2024-12-05 12:09:21.003023] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:47.030 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.030 12:09:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:47.030 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:47.030 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:47.030 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:47.030 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:47.030 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:47.030 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:47.030 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:47.030 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:47.030 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:47.030 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:47.030 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:47.030 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:47.030 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:47.030 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:25:47.030 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:47.030 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:47.030 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:47.030 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:47.030 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:47.030 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:47.030 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:47.030 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:47.030 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:47.030 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:47.030 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:25:47.030 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.030 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:47.030 Malloc1 00:25:47.030 [2024-12-05 12:09:21.119779] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:47.030 Malloc2 00:25:47.030 Malloc3 00:25:47.030 Malloc4 00:25:47.289 Malloc5 00:25:47.289 Malloc6 00:25:47.289 Malloc7 00:25:47.289 Malloc8 00:25:47.289 Malloc9 
00:25:47.548 Malloc10 00:25:47.548 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.548 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:47.548 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:47.548 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:47.548 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=145488 00:25:47.548 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 145488 /var/tmp/bdevperf.sock 00:25:47.548 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 145488 ']' 00:25:47.548 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:47.548 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:47.548 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:47.548 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:47.548 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:25:47.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:47.548 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # config=() 00:25:47.548 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:47.548 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # local subsystem config 00:25:47.548 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:47.548 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:47.548 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:47.548 { 00:25:47.548 "params": { 00:25:47.548 "name": "Nvme$subsystem", 00:25:47.548 "trtype": "$TEST_TRANSPORT", 00:25:47.548 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:47.548 "adrfam": "ipv4", 00:25:47.548 "trsvcid": "$NVMF_PORT", 00:25:47.548 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:47.548 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:47.548 "hdgst": ${hdgst:-false}, 00:25:47.548 "ddgst": ${ddgst:-false} 00:25:47.548 }, 00:25:47.548 "method": "bdev_nvme_attach_controller" 00:25:47.548 } 00:25:47.548 EOF 00:25:47.548 )") 00:25:47.548 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:25:47.548 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:47.548 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:47.548 { 00:25:47.548 "params": { 00:25:47.548 "name": "Nvme$subsystem", 00:25:47.548 "trtype": "$TEST_TRANSPORT", 00:25:47.548 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:47.548 
"adrfam": "ipv4", 00:25:47.548 "trsvcid": "$NVMF_PORT", 00:25:47.548 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:47.548 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:47.548 "hdgst": ${hdgst:-false}, 00:25:47.548 "ddgst": ${ddgst:-false} 00:25:47.548 }, 00:25:47.548 "method": "bdev_nvme_attach_controller" 00:25:47.548 } 00:25:47.548 EOF 00:25:47.548 )") 00:25:47.548 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:25:47.548 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:47.548 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:47.548 { 00:25:47.548 "params": { 00:25:47.548 "name": "Nvme$subsystem", 00:25:47.548 "trtype": "$TEST_TRANSPORT", 00:25:47.548 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:47.548 "adrfam": "ipv4", 00:25:47.548 "trsvcid": "$NVMF_PORT", 00:25:47.548 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:47.548 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:47.548 "hdgst": ${hdgst:-false}, 00:25:47.548 "ddgst": ${ddgst:-false} 00:25:47.548 }, 00:25:47.548 "method": "bdev_nvme_attach_controller" 00:25:47.548 } 00:25:47.548 EOF 00:25:47.548 )") 00:25:47.548 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:25:47.548 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:47.548 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:47.548 { 00:25:47.548 "params": { 00:25:47.548 "name": "Nvme$subsystem", 00:25:47.548 "trtype": "$TEST_TRANSPORT", 00:25:47.548 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:47.548 "adrfam": "ipv4", 00:25:47.548 "trsvcid": "$NVMF_PORT", 00:25:47.548 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:25:47.548 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:47.548 "hdgst": ${hdgst:-false}, 00:25:47.548 "ddgst": ${ddgst:-false} 00:25:47.548 }, 00:25:47.548 "method": "bdev_nvme_attach_controller" 00:25:47.548 } 00:25:47.548 EOF 00:25:47.548 )") 00:25:47.548 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:25:47.548 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:47.548 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:47.548 { 00:25:47.548 "params": { 00:25:47.548 "name": "Nvme$subsystem", 00:25:47.548 "trtype": "$TEST_TRANSPORT", 00:25:47.548 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:47.548 "adrfam": "ipv4", 00:25:47.548 "trsvcid": "$NVMF_PORT", 00:25:47.548 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:47.548 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:47.548 "hdgst": ${hdgst:-false}, 00:25:47.548 "ddgst": ${ddgst:-false} 00:25:47.548 }, 00:25:47.548 "method": "bdev_nvme_attach_controller" 00:25:47.548 } 00:25:47.548 EOF 00:25:47.548 )") 00:25:47.548 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:25:47.548 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:47.548 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:47.548 { 00:25:47.548 "params": { 00:25:47.548 "name": "Nvme$subsystem", 00:25:47.548 "trtype": "$TEST_TRANSPORT", 00:25:47.548 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:47.548 "adrfam": "ipv4", 00:25:47.548 "trsvcid": "$NVMF_PORT", 00:25:47.548 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:47.548 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:47.548 "hdgst": ${hdgst:-false}, 00:25:47.548 "ddgst": 
${ddgst:-false} 00:25:47.548 }, 00:25:47.548 "method": "bdev_nvme_attach_controller" 00:25:47.548 } 00:25:47.548 EOF 00:25:47.548 )") 00:25:47.548 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:25:47.548 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:47.549 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:47.549 { 00:25:47.549 "params": { 00:25:47.549 "name": "Nvme$subsystem", 00:25:47.549 "trtype": "$TEST_TRANSPORT", 00:25:47.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:47.549 "adrfam": "ipv4", 00:25:47.549 "trsvcid": "$NVMF_PORT", 00:25:47.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:47.549 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:47.549 "hdgst": ${hdgst:-false}, 00:25:47.549 "ddgst": ${ddgst:-false} 00:25:47.549 }, 00:25:47.549 "method": "bdev_nvme_attach_controller" 00:25:47.549 } 00:25:47.549 EOF 00:25:47.549 )") 00:25:47.549 [2024-12-05 12:09:21.600277] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:25:47.549 [2024-12-05 12:09:21.600324] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145488 ] 00:25:47.549 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:25:47.549 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:47.549 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:47.549 { 00:25:47.549 "params": { 00:25:47.549 "name": "Nvme$subsystem", 00:25:47.549 "trtype": "$TEST_TRANSPORT", 00:25:47.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:47.549 "adrfam": "ipv4", 00:25:47.549 "trsvcid": "$NVMF_PORT", 00:25:47.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:47.549 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:47.549 "hdgst": ${hdgst:-false}, 00:25:47.549 "ddgst": ${ddgst:-false} 00:25:47.549 }, 00:25:47.549 "method": "bdev_nvme_attach_controller" 00:25:47.549 } 00:25:47.549 EOF 00:25:47.549 )") 00:25:47.549 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:25:47.549 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:47.549 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:47.549 { 00:25:47.549 "params": { 00:25:47.549 "name": "Nvme$subsystem", 00:25:47.549 "trtype": "$TEST_TRANSPORT", 00:25:47.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:47.549 "adrfam": "ipv4", 00:25:47.549 "trsvcid": "$NVMF_PORT", 00:25:47.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:47.549 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:47.549 "hdgst": 
${hdgst:-false}, 00:25:47.549 "ddgst": ${ddgst:-false} 00:25:47.549 }, 00:25:47.549 "method": "bdev_nvme_attach_controller" 00:25:47.549 } 00:25:47.549 EOF 00:25:47.549 )") 00:25:47.549 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:25:47.549 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:47.549 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:47.549 { 00:25:47.549 "params": { 00:25:47.549 "name": "Nvme$subsystem", 00:25:47.549 "trtype": "$TEST_TRANSPORT", 00:25:47.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:47.549 "adrfam": "ipv4", 00:25:47.549 "trsvcid": "$NVMF_PORT", 00:25:47.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:47.549 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:47.549 "hdgst": ${hdgst:-false}, 00:25:47.549 "ddgst": ${ddgst:-false} 00:25:47.549 }, 00:25:47.549 "method": "bdev_nvme_attach_controller" 00:25:47.549 } 00:25:47.549 EOF 00:25:47.549 )") 00:25:47.549 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:25:47.549 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@396 -- # jq . 
00:25:47.549 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@397 -- # IFS=, 00:25:47.549 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:25:47.549 "params": { 00:25:47.549 "name": "Nvme1", 00:25:47.549 "trtype": "tcp", 00:25:47.549 "traddr": "10.0.0.2", 00:25:47.549 "adrfam": "ipv4", 00:25:47.549 "trsvcid": "4420", 00:25:47.549 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:47.549 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:47.549 "hdgst": false, 00:25:47.549 "ddgst": false 00:25:47.549 }, 00:25:47.549 "method": "bdev_nvme_attach_controller" 00:25:47.549 },{ 00:25:47.549 "params": { 00:25:47.549 "name": "Nvme2", 00:25:47.549 "trtype": "tcp", 00:25:47.549 "traddr": "10.0.0.2", 00:25:47.549 "adrfam": "ipv4", 00:25:47.549 "trsvcid": "4420", 00:25:47.549 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:47.549 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:47.549 "hdgst": false, 00:25:47.549 "ddgst": false 00:25:47.549 }, 00:25:47.549 "method": "bdev_nvme_attach_controller" 00:25:47.549 },{ 00:25:47.549 "params": { 00:25:47.549 "name": "Nvme3", 00:25:47.549 "trtype": "tcp", 00:25:47.549 "traddr": "10.0.0.2", 00:25:47.549 "adrfam": "ipv4", 00:25:47.549 "trsvcid": "4420", 00:25:47.549 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:47.549 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:47.549 "hdgst": false, 00:25:47.549 "ddgst": false 00:25:47.549 }, 00:25:47.549 "method": "bdev_nvme_attach_controller" 00:25:47.549 },{ 00:25:47.549 "params": { 00:25:47.549 "name": "Nvme4", 00:25:47.549 "trtype": "tcp", 00:25:47.549 "traddr": "10.0.0.2", 00:25:47.549 "adrfam": "ipv4", 00:25:47.549 "trsvcid": "4420", 00:25:47.549 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:47.549 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:47.549 "hdgst": false, 00:25:47.549 "ddgst": false 00:25:47.549 }, 00:25:47.549 "method": "bdev_nvme_attach_controller" 00:25:47.549 },{ 00:25:47.549 "params": { 
00:25:47.549 "name": "Nvme5", 00:25:47.549 "trtype": "tcp", 00:25:47.549 "traddr": "10.0.0.2", 00:25:47.549 "adrfam": "ipv4", 00:25:47.549 "trsvcid": "4420", 00:25:47.549 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:47.549 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:47.549 "hdgst": false, 00:25:47.549 "ddgst": false 00:25:47.549 }, 00:25:47.549 "method": "bdev_nvme_attach_controller" 00:25:47.549 },{ 00:25:47.549 "params": { 00:25:47.549 "name": "Nvme6", 00:25:47.549 "trtype": "tcp", 00:25:47.549 "traddr": "10.0.0.2", 00:25:47.549 "adrfam": "ipv4", 00:25:47.549 "trsvcid": "4420", 00:25:47.549 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:47.549 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:47.549 "hdgst": false, 00:25:47.549 "ddgst": false 00:25:47.549 }, 00:25:47.549 "method": "bdev_nvme_attach_controller" 00:25:47.549 },{ 00:25:47.549 "params": { 00:25:47.549 "name": "Nvme7", 00:25:47.549 "trtype": "tcp", 00:25:47.549 "traddr": "10.0.0.2", 00:25:47.549 "adrfam": "ipv4", 00:25:47.549 "trsvcid": "4420", 00:25:47.549 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:47.549 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:47.549 "hdgst": false, 00:25:47.549 "ddgst": false 00:25:47.549 }, 00:25:47.549 "method": "bdev_nvme_attach_controller" 00:25:47.549 },{ 00:25:47.549 "params": { 00:25:47.549 "name": "Nvme8", 00:25:47.549 "trtype": "tcp", 00:25:47.549 "traddr": "10.0.0.2", 00:25:47.549 "adrfam": "ipv4", 00:25:47.549 "trsvcid": "4420", 00:25:47.549 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:47.549 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:47.549 "hdgst": false, 00:25:47.549 "ddgst": false 00:25:47.549 }, 00:25:47.549 "method": "bdev_nvme_attach_controller" 00:25:47.549 },{ 00:25:47.549 "params": { 00:25:47.549 "name": "Nvme9", 00:25:47.549 "trtype": "tcp", 00:25:47.549 "traddr": "10.0.0.2", 00:25:47.549 "adrfam": "ipv4", 00:25:47.549 "trsvcid": "4420", 00:25:47.549 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:47.549 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:25:47.549 "hdgst": false, 00:25:47.549 "ddgst": false 00:25:47.549 }, 00:25:47.549 "method": "bdev_nvme_attach_controller" 00:25:47.549 },{ 00:25:47.549 "params": { 00:25:47.549 "name": "Nvme10", 00:25:47.549 "trtype": "tcp", 00:25:47.549 "traddr": "10.0.0.2", 00:25:47.549 "adrfam": "ipv4", 00:25:47.549 "trsvcid": "4420", 00:25:47.549 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:47.549 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:47.549 "hdgst": false, 00:25:47.549 "ddgst": false 00:25:47.549 }, 00:25:47.549 "method": "bdev_nvme_attach_controller" 00:25:47.549 }' 00:25:47.549 [2024-12-05 12:09:21.678049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.549 [2024-12-05 12:09:21.718628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:49.450 Running I/O for 10 seconds... 00:25:49.450 12:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:49.450 12:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:25:49.450 12:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:49.450 12:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.450 12:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:49.708 12:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.708 12:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:49.708 12:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
/var/tmp/bdevperf.sock Nvme1n1 00:25:49.708 12:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:49.708 12:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:25:49.708 12:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:25:49.708 12:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:25:49.708 12:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:25:49.708 12:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:49.708 12:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:49.708 12:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:49.708 12:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.708 12:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:49.708 12:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.708 12:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:25:49.708 12:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:25:49.708 12:09:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:25:49.966 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 
00:25:49.966 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:49.966 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:49.966 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:49.966 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.966 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:49.966 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.966 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:25:49.966 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:25:49.966 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:25:50.224 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:25:50.224 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:50.224 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:50.224 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:50.224 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.224 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x
00:25:50.224 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:50.224 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131
00:25:50.224 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']'
00:25:50.224 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0
00:25:50.224 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break
00:25:50.224 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0
00:25:50.224 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 145214
00:25:50.224 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 145214 ']'
00:25:50.224 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 145214
00:25:50.224 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname
00:25:50.224 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:50.224 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 145214
00:25:50.497 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:25:50.497 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:25:50.497 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 145214'
00:25:50.497 killing process with pid 145214
00:25:50.497 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 145214
00:25:50.497 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 145214
00:25:50.497 [2024-12-05 12:09:24.439410] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd8d0 is same with the state(6) to be set
00:25:50.498 [previous message repeated for tqpair=0x6fd8d0, 12:09:24.439498 through 12:09:24.439879]
00:25:50.498 [2024-12-05 12:09:24.440806] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7004c0 is same with the state(6) to be set
00:25:50.498 [previous message repeated for tqpair=0x7004c0, 12:09:24.440844 through 12:09:24.441224]
00:25:50.498 [2024-12-05 12:09:24.443133] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fe290 is same with the state(6) to be set
00:25:50.499 [previous message repeated for tqpair=0x6fe290, 12:09:24.443157 through 12:09:24.443643]
00:25:50.499 [2024-12-05 12:09:24.444631] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fe780 is same with the state(6) to be set
00:25:50.500 [previous message repeated for tqpair=0x6fe780, 12:09:24.444657 through 12:09:24.444975]
00:25:50.500 [2024-12-05 12:09:24.444981] tcp.c:1790:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x6fe780 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.444987] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fe780 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.444993] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fe780 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.444999] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fe780 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.445005] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fe780 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.445011] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fe780 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.445017] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fe780 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.445025] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fe780 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.445031] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fe780 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.445037] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fe780 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446138] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446152] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446158] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 
is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446164] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446170] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446176] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446183] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446189] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446195] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446201] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446207] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446213] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446219] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446225] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446231] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 
00:25:50.500 [2024-12-05 12:09:24.446237] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446243] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446249] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446255] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446260] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446266] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446273] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446278] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446285] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446291] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446300] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446306] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446312] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446318] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446325] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446331] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446337] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446343] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446350] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446356] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446362] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446373] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446380] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446386] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446392] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446398] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446405] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446411] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446417] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446424] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446429] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446436] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446442] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446449] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446455] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446461] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.500 [2024-12-05 12:09:24.446467] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 
is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.446475] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.446480] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.446486] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.446492] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.446499] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.446504] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.446511] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.446516] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.446524] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.446530] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.446536] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff160 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447499] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 
00:25:50.501 [2024-12-05 12:09:24.447513] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447519] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447526] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447532] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447538] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447544] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447551] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447557] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447562] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447569] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447575] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447581] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447587] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447593] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447598] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447607] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447613] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447619] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447625] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447632] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447639] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447645] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447651] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447657] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447663] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447669] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447675] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447681] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447687] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447693] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447699] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447705] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447711] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447724] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447730] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447737] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447743] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 
is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447749] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447754] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447760] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447766] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447772] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447778] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447786] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447792] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447797] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447803] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447810] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447816] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 
00:25:50.501 [2024-12-05 12:09:24.447822] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447828] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447835] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447841] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447847] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447852] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447858] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447864] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447870] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447876] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447882] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.501 [2024-12-05 12:09:24.447888] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.502 [2024-12-05 12:09:24.447894] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff630 is same with the state(6) to be set 00:25:50.502 [2024-12-05 12:09:24.448726] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502 [2024-12-05 12:09:24.448742] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502 [2024-12-05 12:09:24.448748] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502 [2024-12-05 12:09:24.448755] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502 [2024-12-05 12:09:24.448761] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502 [2024-12-05 12:09:24.448767] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502 [2024-12-05 12:09:24.448773] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502 [2024-12-05 12:09:24.448779] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502 [2024-12-05 12:09:24.448788] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502 [2024-12-05 12:09:24.448795] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502 [2024-12-05 12:09:24.448801] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502 [2024-12-05 12:09:24.448807] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502 [2024-12-05 12:09:24.448812] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502 [2024-12-05 12:09:24.448819] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502 [2024-12-05 12:09:24.448825] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502 [2024-12-05 12:09:24.448831] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502 [2024-12-05 12:09:24.448837] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502 [2024-12-05 12:09:24.448843] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502 [2024-12-05 12:09:24.448849] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502 [2024-12-05 12:09:24.448856] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502 [2024-12-05 12:09:24.448862] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502 [2024-12-05 12:09:24.448868] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502 [2024-12-05 12:09:24.448874] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502 [2024-12-05 12:09:24.448880] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 
is same with the state(6) to be set 00:25:50.502
[2024-12-05 12:09:24.448886] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502
[2024-12-05 12:09:24.448892] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502
[2024-12-05 12:09:24.448898] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502
[2024-12-05 12:09:24.448904] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502
[2024-12-05 12:09:24.448910] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502
[2024-12-05 12:09:24.448916] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502
[2024-12-05 12:09:24.448909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.502
[2024-12-05 12:09:24.448924] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502
[2024-12-05 12:09:24.448934] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502
[2024-12-05 12:09:24.448940] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502
[2024-12-05 12:09:24.448940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.502
[2024-12-05 12:09:24.448947] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502
[2024-12-05 12:09:24.448954] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502
[2024-12-05 12:09:24.448955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.502
[2024-12-05 12:09:24.448961] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502
[2024-12-05 12:09:24.448964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.502
[2024-12-05 12:09:24.448968] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502
[2024-12-05 12:09:24.448972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.502
[2024-12-05 12:09:24.448975] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502
[2024-12-05 12:09:24.448980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.502
[2024-12-05 12:09:24.448982] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502
[2024-12-05 12:09:24.448989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.502
[2024-12-05 12:09:24.448989] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502
[2024-12-05 12:09:24.448999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.502
[2024-12-05 12:09:24.448999] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502
[2024-12-05 12:09:24.449008] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502
[2024-12-05 12:09:24.449009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c37ce0 is same with the state(6) to be set 00:25:50.502
[2024-12-05 12:09:24.449018] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502
[2024-12-05 12:09:24.449025] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502
[2024-12-05 12:09:24.449032] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502
[2024-12-05 12:09:24.449038] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502
[2024-12-05 12:09:24.449041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.502
[2024-12-05 12:09:24.449043] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502
[2024-12-05 12:09:24.449051] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502
[2024-12-05 12:09:24.449052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.502
[2024-12-05 12:09:24.449058] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502
[2024-12-05 12:09:24.449061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.502
[2024-12-05 12:09:24.449064] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502
[2024-12-05 12:09:24.449069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.502
[2024-12-05 12:09:24.449073] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502
[2024-12-05 12:09:24.449078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.502
[2024-12-05 12:09:24.449080] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502
[2024-12-05 12:09:24.449085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.502
[2024-12-05 12:09:24.449087] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502
[2024-12-05 12:09:24.449094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.502
[2024-12-05 12:09:24.449094] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502
[2024-12-05 12:09:24.449104] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502
[2024-12-05 12:09:24.449104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.502
[2024-12-05 12:09:24.449113] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502
[2024-12-05 12:09:24.449115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1806490 is same with the state(6) to be set 00:25:50.502
[2024-12-05 12:09:24.449120] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502
[2024-12-05 12:09:24.449127] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502
[2024-12-05 12:09:24.449133] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502
[2024-12-05 12:09:24.449138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.502
[2024-12-05 12:09:24.449139] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502
[2024-12-05 12:09:24.449149] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.502
[2024-12-05 12:09:24.449149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.502
[2024-12-05 12:09:24.449157] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.503
[2024-12-05 12:09:24.449159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.503
[2024-12-05 12:09:24.449164] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x6ffb00 is same with the state(6) to be set 00:25:50.503
[2024-12-05 12:09:24.449168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.503
[2024-12-05 12:09:24.449176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.503
[2024-12-05 12:09:24.449182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.503
[2024-12-05 12:09:24.449189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.503
[2024-12-05 12:09:24.449198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.503
[2024-12-05 12:09:24.449205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1800200 is same with the state(6) to be set 00:25:50.503
[2024-12-05 12:09:24.449230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.503
[2024-12-05 12:09:24.449238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.503
[2024-12-05 12:09:24.449245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.503
[2024-12-05 12:09:24.449252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.503
[2024-12-05 12:09:24.449259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.503
[2024-12-05 12:09:24.449266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.503
[2024-12-05 12:09:24.449273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.503
[2024-12-05 12:09:24.449280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.503
[2024-12-05 12:09:24.449286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1720610 is same with the state(6) to be set 00:25:50.503
[2024-12-05 12:09:24.449311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.503
[2024-12-05 12:09:24.449319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.503
[2024-12-05 12:09:24.449326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.503
[2024-12-05 12:09:24.449333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.503
[2024-12-05 12:09:24.449340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.503
[2024-12-05 12:09:24.449346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.503
[2024-12-05 12:09:24.449354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.503
[2024-12-05 12:09:24.449361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.503
[2024-12-05 12:09:24.449372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2fec0 is same with the state(6) to be set 00:25:50.503
[2024-12-05 12:09:24.449396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.503
[2024-12-05 12:09:24.449404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.503
[2024-12-05 12:09:24.449411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.503
[2024-12-05 12:09:24.449418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.503
[2024-12-05 12:09:24.449425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.503
[2024-12-05 12:09:24.449432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.503
[2024-12-05 12:09:24.449441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.503
[2024-12-05 12:09:24.449447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.503
[2024-12-05 12:09:24.449454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180c1c0 is same with the state(6) to be set 00:25:50.503
[2024-12-05 12:09:24.449475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.503
[2024-12-05 12:09:24.449483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.503
[2024-12-05 12:09:24.449490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.503
[2024-12-05 12:09:24.449497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.503
[2024-12-05 12:09:24.449504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.503
[2024-12-05 12:09:24.449510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.503
[2024-12-05 12:09:24.449517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.503
[2024-12-05 12:09:24.449523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.503
[2024-12-05 12:09:24.449530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c375f0 is same with the state(6) to be set 00:25:50.503
[2024-12-05 12:09:24.449557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.503
[2024-12-05 12:09:24.449564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.503
[2024-12-05 12:09:24.449573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.503
[2024-12-05 12:09:24.449579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.503
[2024-12-05 12:09:24.449587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.503
[2024-12-05 12:09:24.449593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.503
[2024-12-05 12:09:24.449600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.503
[2024-12-05 12:09:24.449607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.503
[2024-12-05 12:09:24.449613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69e00 is same with the state(6) to be set 00:25:50.503
[2024-12-05 12:09:24.449636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.503
[2024-12-05 12:09:24.449644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.503
[2024-12-05 12:09:24.449651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.503
[2024-12-05 12:09:24.449658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.503
[2024-12-05 12:09:24.449667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.503
[2024-12-05 12:09:24.449673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.503
[2024-12-05 12:09:24.449680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.503
[2024-12-05 12:09:24.449687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.503
[2024-12-05 12:09:24.449693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c796e0 is same with the state(6) to be set 00:25:50.503
[2024-12-05 12:09:24.449751] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fffd0 is same with the state(6) to be set 00:25:50.503
[2024-12-05 12:09:24.449771] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fffd0 is same with the state(6) to be set 00:25:50.503
[2024-12-05 12:09:24.450045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.503
[2024-12-05 12:09:24.450066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.503
[2024-12-05 12:09:24.450081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.503
[2024-12-05 12:09:24.450088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.503
[2024-12-05 12:09:24.450097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.503
[2024-12-05 12:09:24.450104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.503
[2024-12-05 12:09:24.450113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.503
[2024-12-05 12:09:24.450120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.503
[2024-12-05 12:09:24.450128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.503
[2024-12-05 12:09:24.450134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.503
[2024-12-05 12:09:24.450142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.503
[2024-12-05 12:09:24.450149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.503
[2024-12-05 12:09:24.450160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.503
[2024-12-05 12:09:24.450166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.503
[2024-12-05 12:09:24.450174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.503
[2024-12-05 12:09:24.450181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.504
[2024-12-05 12:09:24.450189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.504
[2024-12-05 12:09:24.450196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.504
[2024-12-05 12:09:24.450206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.504
[2024-12-05 12:09:24.450213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.504
[2024-12-05 12:09:24.450222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.504
[2024-12-05 12:09:24.450229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.504
[2024-12-05 12:09:24.450236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.504
[2024-12-05 12:09:24.450243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.504
[2024-12-05 12:09:24.450251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.504
[2024-12-05 12:09:24.450258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.504
[2024-12-05 12:09:24.450266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.504
[2024-12-05 12:09:24.450272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.504
[2024-12-05 12:09:24.450280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.504
[2024-12-05 12:09:24.450286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.504
[2024-12-05 12:09:24.450294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.504
[2024-12-05 12:09:24.450301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.504
[2024-12-05 12:09:24.450309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.504
[2024-12-05 12:09:24.450315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.504
[2024-12-05 12:09:24.450323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.504
[2024-12-05 12:09:24.450329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.504
[2024-12-05 12:09:24.450337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.504
[2024-12-05 12:09:24.450344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.504
[2024-12-05 12:09:24.450352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.504
[2024-12-05 12:09:24.450358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.504
[2024-12-05 12:09:24.450366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.504
[2024-12-05 12:09:24.450378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.504
[2024-12-05 12:09:24.450387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.504
[2024-12-05 12:09:24.450395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.504
[2024-12-05 12:09:24.450404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.504
[2024-12-05 12:09:24.450411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.504
[2024-12-05 12:09:24.450419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.504
[2024-12-05 12:09:24.450425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.504
[2024-12-05 12:09:24.450433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.504
[2024-12-05 12:09:24.450440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.504
[2024-12-05 12:09:24.450448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.504
[2024-12-05 12:09:24.450455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.504
[2024-12-05 12:09:24.450463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.504
[2024-12-05 12:09:24.450469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.504
[2024-12-05 12:09:24.450477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.504
[2024-12-05 12:09:24.450483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.504
[2024-12-05 12:09:24.450491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.504
[2024-12-05 12:09:24.450498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.504
[2024-12-05 12:09:24.450506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.504
[2024-12-05 12:09:24.450512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.504
[2024-12-05 12:09:24.450520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.504
[2024-12-05 12:09:24.450526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.504
[2024-12-05 12:09:24.450534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.504 [2024-12-05
12:09:24.450541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.504
[2024-12-05 12:09:24.450548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.504
[2024-12-05 12:09:24.450555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.504
[2024-12-05 12:09:24.450562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.504
[2024-12-05 12:09:24.450569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.504
[2024-12-05 12:09:24.450578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.504
[2024-12-05 12:09:24.450585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.504
[2024-12-05 12:09:24.450593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.504
[2024-12-05 12:09:24.450599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.504
[2024-12-05 12:09:24.450607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.504
[2024-12-05 12:09:24.450613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.504
[2024-12-05 12:09:24.450621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.504
[2024-12-05 12:09:24.450627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.504
[2024-12-05 12:09:24.450637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.504
[2024-12-05 12:09:24.450644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.504
[2024-12-05 12:09:24.450651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.504
[2024-12-05 12:09:24.450658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.504
[2024-12-05 12:09:24.450666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.504
[2024-12-05 12:09:24.450672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.504
[2024-12-05 12:09:24.450680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.504
[2024-12-05 12:09:24.450687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.504
[2024-12-05 12:09:24.450696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.504
[2024-12-05 12:09:24.450702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.504
[2024-12-05 12:09:24.450710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.504
[2024-12-05 12:09:24.450717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.504
[2024-12-05 12:09:24.450725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.504
[2024-12-05 12:09:24.450731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.504
[2024-12-05 12:09:24.450739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.504
[2024-12-05 12:09:24.450745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.504
[2024-12-05 12:09:24.450753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.504
[2024-12-05 12:09:24.450761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.505
[2024-12-05 12:09:24.450770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.505
[2024-12-05 12:09:24.450776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.505
[2024-12-05 12:09:24.450784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.505
[2024-12-05 12:09:24.450790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.505
[2024-12-05 12:09:24.450798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.505
[2024-12-05 12:09:24.450805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.505
[2024-12-05 12:09:24.450813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.505
[2024-12-05 12:09:24.450819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.505
[2024-12-05 12:09:24.450827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.505
[2024-12-05 12:09:24.450834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.505
[2024-12-05 12:09:24.450842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.505
[2024-12-05 12:09:24.450848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.505
[2024-12-05 12:09:24.450856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.505
[2024-12-05 12:09:24.450863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.505
[2024-12-05 12:09:24.450873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.505
[2024-12-05 12:09:24.450879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.505
[2024-12-05 12:09:24.450887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.505
[2024-12-05 12:09:24.450894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.505
[2024-12-05 12:09:24.450902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.505
[2024-12-05 12:09:24.450909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.505
[2024-12-05 12:09:24.450917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.505
[2024-12-05 12:09:24.450924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.505
[2024-12-05 12:09:24.450933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.505
[2024-12-05 12:09:24.450939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.505
[2024-12-05 12:09:24.450949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.505
[2024-12-05 12:09:24.450956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED -
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.505 [2024-12-05 12:09:24.450964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.505 [2024-12-05 12:09:24.450971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.505 [2024-12-05 12:09:24.450978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.505 [2024-12-05 12:09:24.450985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.505 [2024-12-05 12:09:24.450993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.505 [2024-12-05 12:09:24.450999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.505 [2024-12-05 12:09:24.451007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.505 [2024-12-05 12:09:24.451013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.505 [2024-12-05 12:09:24.451041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.505 [2024-12-05 12:09:24.451220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.505 [2024-12-05 12:09:24.451231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.505 [2024-12-05 12:09:24.451242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.505 [2024-12-05 12:09:24.451248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.505 [2024-12-05 12:09:24.451257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.505 [2024-12-05 12:09:24.451264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.505 [2024-12-05 12:09:24.451272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.505 [2024-12-05 12:09:24.451279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.505 [2024-12-05 12:09:24.451287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.505 [2024-12-05 12:09:24.451295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.505 [2024-12-05 12:09:24.451304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.505 [2024-12-05 12:09:24.451310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.505 [2024-12-05 12:09:24.451318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.505 
[2024-12-05 12:09:24.451325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.505 [2024-12-05 12:09:24.451336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.505 [2024-12-05 12:09:24.451342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.505 [2024-12-05 12:09:24.451351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.505 [2024-12-05 12:09:24.451357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.505 [2024-12-05 12:09:24.451366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.505 [2024-12-05 12:09:24.451378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.505 [2024-12-05 12:09:24.451389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.505 [2024-12-05 12:09:24.451396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.505 [2024-12-05 12:09:24.451404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.505 [2024-12-05 12:09:24.451411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.505 [2024-12-05 12:09:24.451419] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.505 [2024-12-05 12:09:24.451426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.505 [2024-12-05 12:09:24.451435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.505 [2024-12-05 12:09:24.451441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.505 [2024-12-05 12:09:24.451449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.505 [2024-12-05 12:09:24.451456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.505 [2024-12-05 12:09:24.451464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.505 [2024-12-05 12:09:24.451471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.505 [2024-12-05 12:09:24.451479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.505 [2024-12-05 12:09:24.451486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.505 [2024-12-05 12:09:24.451494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.505 [2024-12-05 12:09:24.451501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.505 [2024-12-05 12:09:24.451509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.505 [2024-12-05 12:09:24.451515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.505 [2024-12-05 12:09:24.451523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.505 [2024-12-05 12:09:24.451532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.505 [2024-12-05 12:09:24.451540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.505 [2024-12-05 12:09:24.451546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.505 [2024-12-05 12:09:24.451554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.505 [2024-12-05 12:09:24.451561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.506 [2024-12-05 12:09:24.451569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.506 [2024-12-05 12:09:24.451576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.506 [2024-12-05 12:09:24.451584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.506 [2024-12-05 12:09:24.451591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.506 [2024-12-05 12:09:24.451599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.506 [2024-12-05 12:09:24.451606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.506 [2024-12-05 12:09:24.451614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.506 [2024-12-05 12:09:24.451621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.506 [2024-12-05 12:09:24.451630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.506 [2024-12-05 12:09:24.451637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.506 [2024-12-05 12:09:24.451645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.506 [2024-12-05 12:09:24.451652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.506 [2024-12-05 12:09:24.451660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.506 [2024-12-05 12:09:24.451667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.506 
[2024-12-05 12:09:24.451675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.506 [2024-12-05 12:09:24.451681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.506 [2024-12-05 12:09:24.451689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.506 [2024-12-05 12:09:24.451696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.506 [2024-12-05 12:09:24.451704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.506 [2024-12-05 12:09:24.451711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.506 [2024-12-05 12:09:24.451721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.506 [2024-12-05 12:09:24.451728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.506 [2024-12-05 12:09:24.451735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.506 [2024-12-05 12:09:24.451742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.506 [2024-12-05 12:09:24.467396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.506 [2024-12-05 12:09:24.467420] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.506 [2024-12-05 12:09:24.467431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.506 [2024-12-05 12:09:24.467440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.506 [2024-12-05 12:09:24.467451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.506 [2024-12-05 12:09:24.467460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.506 [2024-12-05 12:09:24.467470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.506 [2024-12-05 12:09:24.467479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.506 [2024-12-05 12:09:24.467490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.506 [2024-12-05 12:09:24.467499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.506 [2024-12-05 12:09:24.467511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.506 [2024-12-05 12:09:24.467519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.506 [2024-12-05 12:09:24.467530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.506 [2024-12-05 12:09:24.467539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.506 [2024-12-05 12:09:24.467549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.506 [2024-12-05 12:09:24.467558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.506 [2024-12-05 12:09:24.467569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.506 [2024-12-05 12:09:24.467578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.506 [2024-12-05 12:09:24.467589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.506 [2024-12-05 12:09:24.467598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.506 [2024-12-05 12:09:24.467608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.506 [2024-12-05 12:09:24.467622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.506 [2024-12-05 12:09:24.467633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.506 [2024-12-05 12:09:24.467642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:50.506 [2024-12-05 12:09:24.467653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.506 [2024-12-05 12:09:24.467661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.506 [2024-12-05 12:09:24.467672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.506 [2024-12-05 12:09:24.467681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.506 [2024-12-05 12:09:24.467691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.506 [2024-12-05 12:09:24.467700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.506 [2024-12-05 12:09:24.467711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.506 [2024-12-05 12:09:24.467720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.506 [2024-12-05 12:09:24.467730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.506 [2024-12-05 12:09:24.467739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.506 [2024-12-05 12:09:24.467749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.506 [2024-12-05 
12:09:24.467757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.506 [2024-12-05 12:09:24.467768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.506 [2024-12-05 12:09:24.467776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.506 [2024-12-05 12:09:24.467787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.506 [2024-12-05 12:09:24.467795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.506 [2024-12-05 12:09:24.467819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.506 [2024-12-05 12:09:24.467831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.506 [2024-12-05 12:09:24.467845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.506 [2024-12-05 12:09:24.467857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.506 [2024-12-05 12:09:24.467871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.506 [2024-12-05 12:09:24.467883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.507 [2024-12-05 12:09:24.467904] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.507 [2024-12-05 12:09:24.467916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.507 [2024-12-05 12:09:24.467931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.507 [2024-12-05 12:09:24.467942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.507 [2024-12-05 12:09:24.467957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.507 [2024-12-05 12:09:24.467969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.507 [2024-12-05 12:09:24.467983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.507 [2024-12-05 12:09:24.467995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.507 [2024-12-05 12:09:24.468009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.507 [2024-12-05 12:09:24.468021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.507 [2024-12-05 12:09:24.468035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.507 [2024-12-05 12:09:24.468047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.507 [2024-12-05 12:09:24.468062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.507 [2024-12-05 12:09:24.468073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.507 [2024-12-05 12:09:24.468778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.507 [2024-12-05 12:09:24.468809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.507 [2024-12-05 12:09:24.468830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.507 [2024-12-05 12:09:24.468842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.507 [2024-12-05 12:09:24.468857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.507 [2024-12-05 12:09:24.468869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.507 [2024-12-05 12:09:24.468883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.507 [2024-12-05 12:09:24.468895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.507 [2024-12-05 12:09:24.468909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:50.507 [2024-12-05 12:09:24.468921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.507 [2024-12-05 12:09:24.468935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.507 [2024-12-05 12:09:24.468954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.507 [2024-12-05 12:09:24.468968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.507 [2024-12-05 12:09:24.468980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.507 [2024-12-05 12:09:24.468994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.507 [2024-12-05 12:09:24.469006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.507 [2024-12-05 12:09:24.469020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.507 [2024-12-05 12:09:24.469032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.507 [2024-12-05 12:09:24.469046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.507 [2024-12-05 12:09:24.469059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.507 [2024-12-05 12:09:24.469074] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.507 [2024-12-05 12:09:24.469086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.507 [2024-12-05 12:09:24.469100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.507 [2024-12-05 12:09:24.469112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.507 [2024-12-05 12:09:24.469126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.507 [2024-12-05 12:09:24.469138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.507 [2024-12-05 12:09:24.469152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.507 [2024-12-05 12:09:24.469163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.507 [2024-12-05 12:09:24.469178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.507 [2024-12-05 12:09:24.469190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.507 [2024-12-05 12:09:24.469203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.507 [2024-12-05 12:09:24.469215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.507 [2024-12-05 12:09:24.469229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.507 [2024-12-05 12:09:24.469241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.507 [2024-12-05 12:09:24.469256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.507 [2024-12-05 12:09:24.469267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.507 [2024-12-05 12:09:24.469283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.507 [2024-12-05 12:09:24.469295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.507 [2024-12-05 12:09:24.469310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.507 [2024-12-05 12:09:24.469322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.507 [2024-12-05 12:09:24.469336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.507 [2024-12-05 12:09:24.469347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.507 [2024-12-05 12:09:24.469362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.507 [2024-12-05 12:09:24.469382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.507 [2024-12-05 12:09:24.469396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.507 [2024-12-05 12:09:24.469408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.507 [2024-12-05 12:09:24.469422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.507 [2024-12-05 12:09:24.469434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.507 [2024-12-05 12:09:24.469448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.507 [2024-12-05 12:09:24.469460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.507 [2024-12-05 12:09:24.469475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.507 [2024-12-05 12:09:24.469487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.507 [2024-12-05 12:09:24.469501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.507 [2024-12-05 12:09:24.469513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.507 [2024-12-05 12:09:24.469527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.507 [2024-12-05 12:09:24.469539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.507 [2024-12-05 12:09:24.469553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.507 [2024-12-05 12:09:24.469565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.507 [2024-12-05 12:09:24.469579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.507 [2024-12-05 12:09:24.469591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.507 [2024-12-05 12:09:24.469605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.507 [2024-12-05 12:09:24.469619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.507 [2024-12-05 12:09:24.469633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.507 [2024-12-05 12:09:24.469644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.507 [2024-12-05 12:09:24.469659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.508 [2024-12-05 12:09:24.469670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.508 [2024-12-05 12:09:24.469685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.508 [2024-12-05 12:09:24.469696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.508 [2024-12-05 12:09:24.469711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.508 [2024-12-05 12:09:24.469723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.508 [2024-12-05 12:09:24.469736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.508 [2024-12-05 12:09:24.469748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.508 [2024-12-05 12:09:24.469763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.508 [2024-12-05 12:09:24.469774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.508 [2024-12-05 12:09:24.469788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.508 [2024-12-05 12:09:24.469800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.508 [2024-12-05 12:09:24.469814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.508 [2024-12-05 12:09:24.469826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.508 [2024-12-05 12:09:24.469840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.508 [2024-12-05 12:09:24.469852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.508 [2024-12-05 12:09:24.469867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.508 [2024-12-05 12:09:24.469878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.508 [2024-12-05 12:09:24.469892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.508 [2024-12-05 12:09:24.469906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.508 [2024-12-05 12:09:24.469920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.508 [2024-12-05 12:09:24.469931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.508 [2024-12-05 12:09:24.469948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.508 [2024-12-05 12:09:24.469960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.508 [2024-12-05 12:09:24.469974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.508 [2024-12-05 12:09:24.469986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.508 [2024-12-05 12:09:24.470000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.508 [2024-12-05 12:09:24.470012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.508 [2024-12-05 12:09:24.470026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.508 [2024-12-05 12:09:24.470038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.508 [2024-12-05 12:09:24.470052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.508 [2024-12-05 12:09:24.470064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.508 [2024-12-05 12:09:24.470078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.508 [2024-12-05 12:09:24.470090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.508 [2024-12-05 12:09:24.470105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.508 [2024-12-05 12:09:24.470116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.508 [2024-12-05 12:09:24.470130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.508 [2024-12-05 12:09:24.470142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.508 [2024-12-05 12:09:24.470156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.508 [2024-12-05 12:09:24.470168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.508 [2024-12-05 12:09:24.470182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.508 [2024-12-05 12:09:24.470193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.508 [2024-12-05 12:09:24.470207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.508 [2024-12-05 12:09:24.470219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.508 [2024-12-05 12:09:24.470233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.508 [2024-12-05 12:09:24.470245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.508 [2024-12-05 12:09:24.470259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.508 [2024-12-05 12:09:24.470273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.508 [2024-12-05 12:09:24.470287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.508 [2024-12-05 12:09:24.470299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.508 [2024-12-05 12:09:24.470313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.508 [2024-12-05 12:09:24.470325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.508 [2024-12-05 12:09:24.470340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.508 [2024-12-05 12:09:24.470351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.508 [2024-12-05 12:09:24.470365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.508 [2024-12-05 12:09:24.470382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.508 [2024-12-05 12:09:24.470396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.508 [2024-12-05 12:09:24.470408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.508 [2024-12-05 12:09:24.470423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.508 [2024-12-05 12:09:24.470435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.508 [2024-12-05 12:09:24.470448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.508 [2024-12-05 12:09:24.470460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.508 [2024-12-05 12:09:24.470474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.508 [2024-12-05 12:09:24.470487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.508 [2024-12-05 12:09:24.470534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:25:50.508 [2024-12-05 12:09:24.470764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c37ce0 (9): Bad file descriptor
00:25:50.508 [2024-12-05 12:09:24.470798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1806490 (9): Bad file descriptor
00:25:50.508 [2024-12-05 12:09:24.470822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1800200 (9): Bad file descriptor
00:25:50.508 [2024-12-05 12:09:24.470843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1720610 (9): Bad file descriptor
00:25:50.508 [2024-12-05 12:09:24.470862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2fec0 (9): Bad file descriptor
00:25:50.508 [2024-12-05 12:09:24.470888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x180c1c0 (9): Bad file descriptor
00:25:50.508 [2024-12-05 12:09:24.470913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c375f0 (9): Bad file descriptor
00:25:50.508 [2024-12-05 12:09:24.470931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69e00 (9): Bad file descriptor
00:25:50.508 [2024-12-05 12:09:24.470954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c796e0 (9): Bad file descriptor
00:25:50.508 [2024-12-05 12:09:24.470997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:50.508 [2024-12-05 12:09:24.471012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.508 [2024-12-05 12:09:24.471026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:50.508 [2024-12-05 12:09:24.471037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.508 [2024-12-05 12:09:24.471050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:50.508 [2024-12-05 12:09:24.471061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.508 [2024-12-05 12:09:24.471073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:50.509 [2024-12-05 12:09:24.471085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.509 [2024-12-05 12:09:24.471096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c37200 is same with the state(6) to be set
00:25:50.509 [2024-12-05 12:09:24.476192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:25:50.509 [2024-12-05 12:09:24.476778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:25:50.509 [2024-12-05 12:09:24.476821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:25:50.509 [2024-12-05 12:09:24.476841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c37200 (9): Bad file descriptor
00:25:50.509 [2024-12-05 12:09:24.477105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.509 [2024-12-05 12:09:24.477129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1800200 with addr=10.0.0.2, port=4420
00:25:50.509 [2024-12-05 12:09:24.477143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1800200 is same with the state(6) to be set
00:25:50.509 [2024-12-05 12:09:24.478089] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:25:50.509 [2024-12-05 12:09:24.478150] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:25:50.509 [2024-12-05 12:09:24.478514] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:25:50.509 [2024-12-05 12:09:24.478624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.509 [2024-12-05 12:09:24.478640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c375f0 with addr=10.0.0.2, port=4420
00:25:50.509 [2024-12-05 12:09:24.478650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c375f0 is same with the state(6) to be set
00:25:50.509 [2024-12-05 12:09:24.478677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1800200 (9): Bad file descriptor
00:25:50.509 [2024-12-05 12:09:24.478733] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:25:50.509 [2024-12-05 12:09:24.478794] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:25:50.509 [2024-12-05 12:09:24.478848] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:25:50.509 [2024-12-05 12:09:24.478930] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:25:50.509 [2024-12-05 12:09:24.479055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.509 [2024-12-05 12:09:24.479071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c37200 with addr=10.0.0.2, port=4420
00:25:50.509 [2024-12-05 12:09:24.479085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c37200 is same with the state(6) to be set
00:25:50.509 [2024-12-05 12:09:24.479097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c375f0 (9): Bad file descriptor
00:25:50.509 [2024-12-05 12:09:24.479108] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:25:50.509 [2024-12-05 12:09:24.479115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:25:50.509 [2024-12-05 12:09:24.479125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:25:50.509 [2024-12-05 12:09:24.479135] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:25:50.509 [2024-12-05 12:09:24.479226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c37200 (9): Bad file descriptor
00:25:50.509 [2024-12-05 12:09:24.479237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:25:50.509 [2024-12-05 12:09:24.479244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:25:50.509 [2024-12-05 12:09:24.479252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:25:50.509 [2024-12-05 12:09:24.479260] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:25:50.509 [2024-12-05 12:09:24.479298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:25:50.509 [2024-12-05 12:09:24.479305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:25:50.509 [2024-12-05 12:09:24.479313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:25:50.509 [2024-12-05 12:09:24.479320] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:25:50.509 [2024-12-05 12:09:24.480875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.509 [2024-12-05 12:09:24.480891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.509 [2024-12-05 12:09:24.480905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.509 [2024-12-05 12:09:24.480913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.509 [2024-12-05 12:09:24.480923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.509 [2024-12-05 12:09:24.480932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.509 [2024-12-05 12:09:24.480941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.509 [2024-12-05 12:09:24.480949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.509 [2024-12-05 12:09:24.480959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.509 [2024-12-05 12:09:24.480967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.509 [2024-12-05 12:09:24.480977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.509 [2024-12-05 12:09:24.480985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.509 [2024-12-05 12:09:24.480999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.509 [2024-12-05 12:09:24.481007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.509 [2024-12-05 12:09:24.481017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.509 [2024-12-05 12:09:24.481025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.509 [2024-12-05 12:09:24.481036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.509 [2024-12-05 12:09:24.481044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.509 [2024-12-05 12:09:24.481054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.509 [2024-12-05 12:09:24.481062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.509 [2024-12-05 12:09:24.481071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.509 [2024-12-05 12:09:24.481079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.509 [2024-12-05 12:09:24.481088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.509 [2024-12-05 12:09:24.481096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.509 [2024-12-05 12:09:24.481105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.509 [2024-12-05 12:09:24.481113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.509 [2024-12-05 12:09:24.481122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.509 [2024-12-05 12:09:24.481130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.509 [2024-12-05 12:09:24.481139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.509 [2024-12-05 12:09:24.481147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.509 [2024-12-05 12:09:24.481157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.509 [2024-12-05 12:09:24.481164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.509 [2024-12-05 12:09:24.481174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.509 [2024-12-05 12:09:24.481181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.509 [2024-12-05 12:09:24.481191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.509 [2024-12-05 12:09:24.481198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.509 [2024-12-05 12:09:24.481208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.509 [2024-12-05 12:09:24.481218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.509 [2024-12-05 12:09:24.481228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.509 [2024-12-05 12:09:24.481236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.509 [2024-12-05 12:09:24.481245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.509 [2024-12-05 12:09:24.481253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.509 [2024-12-05 12:09:24.481262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.509 [2024-12-05 12:09:24.481270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.509 [2024-12-05 12:09:24.481279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.509 [2024-12-05 12:09:24.481287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.509 [2024-12-05 12:09:24.481297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.509 [2024-12-05 12:09:24.481304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.510 [2024-12-05 12:09:24.481313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.510 [2024-12-05 12:09:24.481321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.510 [2024-12-05 12:09:24.481333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.510 [2024-12-05 12:09:24.481342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.510 [2024-12-05 12:09:24.481351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.510 [2024-12-05 12:09:24.481359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.510 [2024-12-05 12:09:24.481374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.510 [2024-12-05 12:09:24.481383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.510 [2024-12-05 12:09:24.481392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.510 [2024-12-05 12:09:24.481400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.510 [2024-12-05 12:09:24.481410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.510 [2024-12-05 12:09:24.481418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.510 [2024-12-05 12:09:24.481427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.510 [2024-12-05 12:09:24.481435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.510 [2024-12-05 12:09:24.481446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.510 [2024-12-05 12:09:24.481454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.510 [2024-12-05 12:09:24.481464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.510 [2024-12-05 12:09:24.481472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.510 [2024-12-05 12:09:24.481481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.510 [2024-12-05 12:09:24.481489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:25:50.510 [2024-12-05 12:09:24.481498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.510 [2024-12-05 12:09:24.481506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.510 [2024-12-05 12:09:24.481516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.510 [2024-12-05 12:09:24.481523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.510 [2024-12-05 12:09:24.481533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.510 [2024-12-05 12:09:24.481541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.510 [2024-12-05 12:09:24.481550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.510 [2024-12-05 12:09:24.481558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.510 [2024-12-05 12:09:24.481567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.510 [2024-12-05 12:09:24.481576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.510 [2024-12-05 12:09:24.481585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.510 [2024-12-05 
12:09:24.481593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.510 [2024-12-05 12:09:24.481602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.510 [2024-12-05 12:09:24.481610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.510 [2024-12-05 12:09:24.481620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.510 [2024-12-05 12:09:24.481627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.510 [2024-12-05 12:09:24.481637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.510 [2024-12-05 12:09:24.481644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.510 [2024-12-05 12:09:24.481654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.510 [2024-12-05 12:09:24.481664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.510 [2024-12-05 12:09:24.481673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.510 [2024-12-05 12:09:24.481681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.510 [2024-12-05 12:09:24.481691] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.510 [2024-12-05 12:09:24.481701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.510 [2024-12-05 12:09:24.481711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.510 [2024-12-05 12:09:24.481719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.510 [2024-12-05 12:09:24.481729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.510 [2024-12-05 12:09:24.481737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.510 [2024-12-05 12:09:24.481746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.510 [2024-12-05 12:09:24.481754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.510 [2024-12-05 12:09:24.481764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.510 [2024-12-05 12:09:24.481772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.510 [2024-12-05 12:09:24.481781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.510 [2024-12-05 12:09:24.481789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.510 [2024-12-05 12:09:24.481799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.510 [2024-12-05 12:09:24.481806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.510 [2024-12-05 12:09:24.481816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.510 [2024-12-05 12:09:24.481824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.510 [2024-12-05 12:09:24.481833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.510 [2024-12-05 12:09:24.481841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.510 [2024-12-05 12:09:24.481851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.510 [2024-12-05 12:09:24.481858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.510 [2024-12-05 12:09:24.481869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.510 [2024-12-05 12:09:24.481877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.510 [2024-12-05 12:09:24.481888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.510 
[2024-12-05 12:09:24.481895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.510 [2024-12-05 12:09:24.481905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.510 [2024-12-05 12:09:24.481912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.510 [2024-12-05 12:09:24.481922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.510 [2024-12-05 12:09:24.481930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.511 [2024-12-05 12:09:24.481939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.511 [2024-12-05 12:09:24.481948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.511 [2024-12-05 12:09:24.481957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.511 [2024-12-05 12:09:24.481965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.511 [2024-12-05 12:09:24.481974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.511 [2024-12-05 12:09:24.481982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.511 [2024-12-05 12:09:24.481992] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.511 [2024-12-05 12:09:24.481999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.511 [2024-12-05 12:09:24.482010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.511 [2024-12-05 12:09:24.482017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.511 [2024-12-05 12:09:24.482028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a101a0 is same with the state(6) to be set 00:25:50.511 [2024-12-05 12:09:24.483185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.511 [2024-12-05 12:09:24.483198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.511 [2024-12-05 12:09:24.483209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.511 [2024-12-05 12:09:24.483218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.511 [2024-12-05 12:09:24.483228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.511 [2024-12-05 12:09:24.483235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.511 [2024-12-05 12:09:24.483245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.511 [2024-12-05 12:09:24.483253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.511 [2024-12-05 12:09:24.483268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.511 [2024-12-05 12:09:24.483276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.511 [2024-12-05 12:09:24.483286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.511 [2024-12-05 12:09:24.483294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.511 [2024-12-05 12:09:24.483304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.511 [2024-12-05 12:09:24.483312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.511 [2024-12-05 12:09:24.483322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.511 [2024-12-05 12:09:24.483330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.511 [2024-12-05 12:09:24.483339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.511 [2024-12-05 12:09:24.483347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:50.511 [2024-12-05 12:09:24.483357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.511 [2024-12-05 12:09:24.483364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.511 [2024-12-05 12:09:24.483378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.511 [2024-12-05 12:09:24.483386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.511 [2024-12-05 12:09:24.483396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.511 [2024-12-05 12:09:24.483404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.511 [2024-12-05 12:09:24.483415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.511 [2024-12-05 12:09:24.483422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.511 [2024-12-05 12:09:24.483432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.511 [2024-12-05 12:09:24.483440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.511 [2024-12-05 12:09:24.483449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.511 [2024-12-05 12:09:24.483457] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.511 [2024-12-05 12:09:24.483467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.511 [2024-12-05 12:09:24.483474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.511 [2024-12-05 12:09:24.483484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.511 [2024-12-05 12:09:24.483493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.511 [2024-12-05 12:09:24.483503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.511 [2024-12-05 12:09:24.483511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.511 [2024-12-05 12:09:24.483520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.511 [2024-12-05 12:09:24.483528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.511 [2024-12-05 12:09:24.483537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.511 [2024-12-05 12:09:24.483545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.511 [2024-12-05 12:09:24.483555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.511 [2024-12-05 12:09:24.483562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.511 [2024-12-05 12:09:24.483571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.511 [2024-12-05 12:09:24.483579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.511 [2024-12-05 12:09:24.483588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.511 [2024-12-05 12:09:24.483596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.511 [2024-12-05 12:09:24.483606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.511 [2024-12-05 12:09:24.483614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.511 [2024-12-05 12:09:24.483625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.511 [2024-12-05 12:09:24.483633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.511 [2024-12-05 12:09:24.483644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.511 [2024-12-05 12:09:24.483652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:50.511 [2024-12-05 12:09:24.483661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.511 [2024-12-05 12:09:24.483669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.511 [2024-12-05 12:09:24.483678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.511 [2024-12-05 12:09:24.483688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.511 [2024-12-05 12:09:24.483698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.511 [2024-12-05 12:09:24.483705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.511 [2024-12-05 12:09:24.483716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.511 [2024-12-05 12:09:24.483724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.511 [2024-12-05 12:09:24.483734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.511 [2024-12-05 12:09:24.483742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.511 [2024-12-05 12:09:24.483751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.511 [2024-12-05 
12:09:24.483759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.511 [2024-12-05 12:09:24.483768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.511 [2024-12-05 12:09:24.483777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.511 [2024-12-05 12:09:24.483786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.511 [2024-12-05 12:09:24.483794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.512 [2024-12-05 12:09:24.483804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.512 [2024-12-05 12:09:24.483812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.512 [2024-12-05 12:09:24.483822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.512 [2024-12-05 12:09:24.483830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.512 [2024-12-05 12:09:24.483839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.512 [2024-12-05 12:09:24.483848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.512 [2024-12-05 12:09:24.483857] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.512 [2024-12-05 12:09:24.483866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.512 [2024-12-05 12:09:24.483875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.512 [2024-12-05 12:09:24.483883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.512 [2024-12-05 12:09:24.483893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.512 [2024-12-05 12:09:24.483901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.512 [2024-12-05 12:09:24.483911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.512 [2024-12-05 12:09:24.483920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.512 [2024-12-05 12:09:24.483929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.512 [2024-12-05 12:09:24.483939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.512 [2024-12-05 12:09:24.483950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.512 [2024-12-05 12:09:24.483958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.512 [2024-12-05 12:09:24.483969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.512 [2024-12-05 12:09:24.483977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / "ABORTED - SQ DELETION (00/08)" pairs repeated for cid:44-63, lba:30208-32640 ...]
00:25:50.512 [2024-12-05 12:09:24.484356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c03a10 is same with the state(6) to be set
00:25:50.512 [2024-12-05 12:09:24.485504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.512 [2024-12-05 12:09:24.485518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / "ABORTED - SQ DELETION (00/08)" pairs repeated for cid:1-63, lba:24704-32640 ...]
00:25:50.514 [2024-12-05 12:09:24.486763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c10980 is same with the state(6) to be set
00:25:50.514 [2024-12-05 12:09:24.487899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.514 [2024-12-05 12:09:24.487916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / "ABORTED - SQ DELETION (00/08)" pairs repeated for cid:6-29, lba:25344-28288 ...]
00:25:50.515 [2024-12-05 12:09:24.488402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.515 [2024-12-05 12:09:24.488413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.515 [2024-12-05 12:09:24.488424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.515 [2024-12-05 12:09:24.488432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:25:50.515 [2024-12-05 12:09:24.488443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.515 [2024-12-05 12:09:24.488452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.515 [2024-12-05 12:09:24.488463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.515 [2024-12-05 12:09:24.488472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.515 [2024-12-05 12:09:24.488482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.515 [2024-12-05 12:09:24.488491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.515 [2024-12-05 12:09:24.488502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.515 [2024-12-05 12:09:24.488511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.515 [2024-12-05 12:09:24.488521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.515 [2024-12-05 12:09:24.488529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.515 [2024-12-05 12:09:24.488540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.515 [2024-12-05 
12:09:24.488549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.515 [2024-12-05 12:09:24.488559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.515 [2024-12-05 12:09:24.488568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.515 [2024-12-05 12:09:24.488578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.515 [2024-12-05 12:09:24.488587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.515 [2024-12-05 12:09:24.488598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.515 [2024-12-05 12:09:24.488607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.515 [2024-12-05 12:09:24.488618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.515 [2024-12-05 12:09:24.488626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.515 [2024-12-05 12:09:24.488637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.515 [2024-12-05 12:09:24.488646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.515 [2024-12-05 12:09:24.488659] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.515 [2024-12-05 12:09:24.488668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.515 [2024-12-05 12:09:24.488678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.515 [2024-12-05 12:09:24.488687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.515 [2024-12-05 12:09:24.488697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.515 [2024-12-05 12:09:24.488706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.515 [2024-12-05 12:09:24.488716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.515 [2024-12-05 12:09:24.488725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.515 [2024-12-05 12:09:24.488736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.515 [2024-12-05 12:09:24.488744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.515 [2024-12-05 12:09:24.488755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.515 [2024-12-05 12:09:24.488764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.515 [2024-12-05 12:09:24.488774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.515 [2024-12-05 12:09:24.488783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.515 [2024-12-05 12:09:24.488793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.515 [2024-12-05 12:09:24.488802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.515 [2024-12-05 12:09:24.488812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.515 [2024-12-05 12:09:24.488821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.515 [2024-12-05 12:09:24.488831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.515 [2024-12-05 12:09:24.488840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.515 [2024-12-05 12:09:24.488851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.515 [2024-12-05 12:09:24.488860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.515 [2024-12-05 12:09:24.488870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.515 
[2024-12-05 12:09:24.488879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.515 [2024-12-05 12:09:24.488889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.515 [2024-12-05 12:09:24.488899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.515 [2024-12-05 12:09:24.488910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.516 [2024-12-05 12:09:24.488918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.516 [2024-12-05 12:09:24.488941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.516 [2024-12-05 12:09:24.488948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.516 [2024-12-05 12:09:24.488958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.516 [2024-12-05 12:09:24.488965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.516 [2024-12-05 12:09:24.488974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.516 [2024-12-05 12:09:24.488981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.516 [2024-12-05 12:09:24.488990] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.516 [2024-12-05 12:09:24.488998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.516 [2024-12-05 12:09:24.489008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.516 [2024-12-05 12:09:24.489015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.516 [2024-12-05 12:09:24.489025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.516 [2024-12-05 12:09:24.489033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.516 [2024-12-05 12:09:24.489042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.516 [2024-12-05 12:09:24.489049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.516 [2024-12-05 12:09:24.489059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.516 [2024-12-05 12:09:24.489066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.516 [2024-12-05 12:09:24.489075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.516 [2024-12-05 12:09:24.489083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.516 [2024-12-05 12:09:24.489092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.516 [2024-12-05 12:09:24.489100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.516 [2024-12-05 12:09:24.489108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.516 [2024-12-05 12:09:24.489115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.516 [2024-12-05 12:09:24.489127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.516 [2024-12-05 12:09:24.489133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.516 [2024-12-05 12:09:24.489141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c12630 is same with the state(6) to be set 00:25:50.516 [2024-12-05 12:09:24.490125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.516 [2024-12-05 12:09:24.490137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.516 [2024-12-05 12:09:24.490148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.516 [2024-12-05 12:09:24.490155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:50.516 [2024-12-05 12:09:24.490164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.516 [2024-12-05 12:09:24.490171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.516 [2024-12-05 12:09:24.490179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.516 [2024-12-05 12:09:24.490186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.516 [2024-12-05 12:09:24.490194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.516 [2024-12-05 12:09:24.490201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.516 [2024-12-05 12:09:24.490209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.516 [2024-12-05 12:09:24.490215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.516 [2024-12-05 12:09:24.490224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.516 [2024-12-05 12:09:24.490230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.516 [2024-12-05 12:09:24.490238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.516 [2024-12-05 12:09:24.490245] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.516 [2024-12-05 12:09:24.490253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.516 [2024-12-05 12:09:24.490260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.516 [2024-12-05 12:09:24.490268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.516 [2024-12-05 12:09:24.490275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.516 [2024-12-05 12:09:24.490283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.516 [2024-12-05 12:09:24.490289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.516 [2024-12-05 12:09:24.490299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.516 [2024-12-05 12:09:24.490307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.516 [2024-12-05 12:09:24.490315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.516 [2024-12-05 12:09:24.490322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.516 [2024-12-05 12:09:24.490330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.516 [2024-12-05 12:09:24.490336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.516 [2024-12-05 12:09:24.490345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.516 [2024-12-05 12:09:24.490351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.516 [2024-12-05 12:09:24.490359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.516 [2024-12-05 12:09:24.490370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.516 [2024-12-05 12:09:24.490379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.516 [2024-12-05 12:09:24.490385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.516 [2024-12-05 12:09:24.490394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.516 [2024-12-05 12:09:24.490400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.516 [2024-12-05 12:09:24.490408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.516 [2024-12-05 12:09:24.490415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:50.516 [2024-12-05 12:09:24.490423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.516 [2024-12-05 12:09:24.490430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.516 [2024-12-05 12:09:24.490438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.516 [2024-12-05 12:09:24.490444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.516 [2024-12-05 12:09:24.490452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.516 [2024-12-05 12:09:24.490459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.516 [2024-12-05 12:09:24.490467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.516 [2024-12-05 12:09:24.490474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.516 [2024-12-05 12:09:24.490482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.516 [2024-12-05 12:09:24.490491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.516 [2024-12-05 12:09:24.490499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.516 [2024-12-05 12:09:24.490506] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.516 [2024-12-05 12:09:24.490514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.516 [2024-12-05 12:09:24.490520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.517 [2024-12-05 12:09:24.490528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.517 [2024-12-05 12:09:24.490535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.517 [2024-12-05 12:09:24.490543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.517 [2024-12-05 12:09:24.490550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.517 [2024-12-05 12:09:24.490558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.517 [2024-12-05 12:09:24.490564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.517 [2024-12-05 12:09:24.490573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.517 [2024-12-05 12:09:24.490579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.517 [2024-12-05 12:09:24.490587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.517 [2024-12-05 12:09:24.490594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.517 [2024-12-05 12:09:24.490601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.517 [2024-12-05 12:09:24.490609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.517 [2024-12-05 12:09:24.490617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.517 [2024-12-05 12:09:24.490623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.517 [2024-12-05 12:09:24.490632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.517 [2024-12-05 12:09:24.490638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.517 [2024-12-05 12:09:24.490646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.517 [2024-12-05 12:09:24.490653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.517 [2024-12-05 12:09:24.490660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.517 [2024-12-05 12:09:24.490667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:50.517 [2024-12-05 12:09:24.490675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.517 [2024-12-05 12:09:24.490683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.517 [2024-12-05 12:09:24.490691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.517 [2024-12-05 12:09:24.490698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.517 [2024-12-05 12:09:24.490706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.517 [2024-12-05 12:09:24.490712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.517 [2024-12-05 12:09:24.490720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.517 [2024-12-05 12:09:24.490727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.517 [2024-12-05 12:09:24.490734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.517 [2024-12-05 12:09:24.490742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.517 [2024-12-05 12:09:24.490750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.517 [2024-12-05 
12:09:24.490757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.517 [2024-12-05 12:09:24.490764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.517 [2024-12-05 12:09:24.490771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.517 [2024-12-05 12:09:24.490779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.517 [2024-12-05 12:09:24.490786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.517 [2024-12-05 12:09:24.490794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.517 [2024-12-05 12:09:24.490800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.517 [2024-12-05 12:09:24.490808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.517 [2024-12-05 12:09:24.490815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.517 [2024-12-05 12:09:24.490823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.517 [2024-12-05 12:09:24.490829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.517 [2024-12-05 12:09:24.490837] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.517 [2024-12-05 12:09:24.490844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.517 [2024-12-05 12:09:24.490852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.517 [2024-12-05 12:09:24.490858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.517 [2024-12-05 12:09:24.490868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.517 [2024-12-05 12:09:24.490875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.517 [2024-12-05 12:09:24.490883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.517 [2024-12-05 12:09:24.490890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.517 [2024-12-05 12:09:24.490898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.517 [2024-12-05 12:09:24.490905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.517 [2024-12-05 12:09:24.490912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.517 [2024-12-05 12:09:24.490919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.517 [2024-12-05 12:09:24.490927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.517 [2024-12-05 12:09:24.490934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.517 [2024-12-05 12:09:24.490942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.517 [2024-12-05 12:09:24.490949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.517 [2024-12-05 12:09:24.490956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.517 [2024-12-05 12:09:24.490963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.517 [2024-12-05 12:09:24.490971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.517 [2024-12-05 12:09:24.490979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.517 [2024-12-05 12:09:24.490987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.517 [2024-12-05 12:09:24.490993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.517 [2024-12-05 12:09:24.491002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.517 
[2024-12-05 12:09:24.491008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.517 [2024-12-05 12:09:24.491016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.517 [2024-12-05 12:09:24.491023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.517 [2024-12-05 12:09:24.491031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.517 [2024-12-05 12:09:24.491038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.517 [2024-12-05 12:09:24.491046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.517 [2024-12-05 12:09:24.491054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.517 [2024-12-05 12:09:24.491061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.517 [2024-12-05 12:09:24.491068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.517 [2024-12-05 12:09:24.491076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.517 [2024-12-05 12:09:24.491085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.517 [2024-12-05 12:09:24.491093] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x290d3f0 is same with the state(6) to be set 00:25:50.517 [2024-12-05 12:09:24.492069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.517 [2024-12-05 12:09:24.492082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.518 [2024-12-05 12:09:24.492093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.518 [2024-12-05 12:09:24.492101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.518 [2024-12-05 12:09:24.492109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.518 [2024-12-05 12:09:24.492116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.518 [2024-12-05 12:09:24.492125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.518 [2024-12-05 12:09:24.492131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.518 [2024-12-05 12:09:24.492141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.518 [2024-12-05 12:09:24.492147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.518 [2024-12-05 12:09:24.492156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.518 [2024-12-05 12:09:24.492163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.518 [2024-12-05 12:09:24.492170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.518 [2024-12-05 12:09:24.492177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.518 [2024-12-05 12:09:24.492185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.518 [2024-12-05 12:09:24.492192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.518 [2024-12-05 12:09:24.492201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.518 [2024-12-05 12:09:24.492208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.518 [2024-12-05 12:09:24.492216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.518 [2024-12-05 12:09:24.492225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.518 [2024-12-05 12:09:24.492233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.518 [2024-12-05 12:09:24.492240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:50.518 [2024-12-05 12:09:24.492249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.518 [2024-12-05 12:09:24.492256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.518 [2024-12-05 12:09:24.492264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.518 [2024-12-05 12:09:24.492271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.518 [2024-12-05 12:09:24.492279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.518 [2024-12-05 12:09:24.492286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.518 [2024-12-05 12:09:24.492294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.518 [2024-12-05 12:09:24.492301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.518 [2024-12-05 12:09:24.492309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.518 [2024-12-05 12:09:24.492315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.518 [2024-12-05 12:09:24.492323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.518 [2024-12-05 12:09:24.492330] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.518 [2024-12-05 12:09:24.492338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.518 [2024-12-05 12:09:24.492345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.518 [2024-12-05 12:09:24.492353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.518 [2024-12-05 12:09:24.492360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.518 [2024-12-05 12:09:24.492373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.518 [2024-12-05 12:09:24.492380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.518 [2024-12-05 12:09:24.492388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.518 [2024-12-05 12:09:24.492394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.518 [2024-12-05 12:09:24.492403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.518 [2024-12-05 12:09:24.492410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.518 [2024-12-05 12:09:24.492420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.518 [2024-12-05 12:09:24.492426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.518 [2024-12-05 12:09:24.492434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.518 [2024-12-05 12:09:24.492442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.518 [2024-12-05 12:09:24.492450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.518 [2024-12-05 12:09:24.492457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.518 [2024-12-05 12:09:24.492465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.518 [2024-12-05 12:09:24.492472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.518 [2024-12-05 12:09:24.492480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.518 [2024-12-05 12:09:24.492487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.518 [2024-12-05 12:09:24.492495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.518 [2024-12-05 12:09:24.492502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:50.518 [2024-12-05 12:09:24.492510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.518 [2024-12-05 12:09:24.492517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.518 [2024-12-05 12:09:24.492525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.518 [2024-12-05 12:09:24.492531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.518 [2024-12-05 12:09:24.492539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.518 [2024-12-05 12:09:24.492546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.518 [2024-12-05 12:09:24.492554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.518 [2024-12-05 12:09:24.492561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.518 [2024-12-05 12:09:24.492569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.518 [2024-12-05 12:09:24.492576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.518 [2024-12-05 12:09:24.492584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.518 [2024-12-05 
12:09:24.492591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.518 [2024-12-05 12:09:24.492599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.518 [2024-12-05 12:09:24.492607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.518 [2024-12-05 12:09:24.492616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.518 [2024-12-05 12:09:24.492622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.518 [2024-12-05 12:09:24.492631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.518 [2024-12-05 12:09:24.492638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.518 [2024-12-05 12:09:24.492646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.518 [2024-12-05 12:09:24.492653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.518 [2024-12-05 12:09:24.492661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.518 [2024-12-05 12:09:24.492668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.518 [2024-12-05 12:09:24.492676] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.518 [2024-12-05 12:09:24.492683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.518 [2024-12-05 12:09:24.492691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.519 [2024-12-05 12:09:24.492697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.519 [2024-12-05 12:09:24.492706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.519 [2024-12-05 12:09:24.492712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.519 [2024-12-05 12:09:24.492720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.519 [2024-12-05 12:09:24.492727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.519 [2024-12-05 12:09:24.492735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.519 [2024-12-05 12:09:24.492742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.519 [2024-12-05 12:09:24.492749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.519 [2024-12-05 12:09:24.492756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.519 [2024-12-05 12:09:24.492764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.519 [2024-12-05 12:09:24.492771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.519 [2024-12-05 12:09:24.492779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.519 [2024-12-05 12:09:24.492785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.519 [2024-12-05 12:09:24.492795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.519 [2024-12-05 12:09:24.492802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.519 [2024-12-05 12:09:24.492810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.519 [2024-12-05 12:09:24.492817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.519 [2024-12-05 12:09:24.492825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.519 [2024-12-05 12:09:24.492831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.519 [2024-12-05 12:09:24.492839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.519 
[2024-12-05 12:09:24.492846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.519 [2024-12-05 12:09:24.492854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.519 [2024-12-05 12:09:24.492860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.519 [2024-12-05 12:09:24.492869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.519 [2024-12-05 12:09:24.492875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.519 [2024-12-05 12:09:24.492883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.519 [2024-12-05 12:09:24.492890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.519 [2024-12-05 12:09:24.492898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.519 [2024-12-05 12:09:24.492905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.519 [2024-12-05 12:09:24.492912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.519 [2024-12-05 12:09:24.492919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.519 [2024-12-05 12:09:24.492927] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.519 [2024-12-05 12:09:24.492934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.519 [2024-12-05 12:09:24.492942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.519 [2024-12-05 12:09:24.492949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.519 [2024-12-05 12:09:24.492957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.519 [2024-12-05 12:09:24.492963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.519 [2024-12-05 12:09:24.492971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.519 [2024-12-05 12:09:24.492980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.519 [2024-12-05 12:09:24.492988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.519 [2024-12-05 12:09:24.492994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.519 [2024-12-05 12:09:24.493002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.519 [2024-12-05 12:09:24.493009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.519 [2024-12-05 12:09:24.493017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.519 [2024-12-05 12:09:24.493024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.519 [2024-12-05 12:09:24.493032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.519 [2024-12-05 12:09:24.493039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.519 [2024-12-05 12:09:24.493046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x291a440 is same with the state(6) to be set 00:25:50.519 [2024-12-05 12:09:24.494029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.519 [2024-12-05 12:09:24.494042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.519 [2024-12-05 12:09:24.494052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.519 [2024-12-05 12:09:24.494058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.519 [2024-12-05 12:09:24.494067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.519 [2024-12-05 12:09:24.494074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:50.519 [2024-12-05 12:09:24.494082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.519 [2024-12-05 12:09:24.494089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.519 [2024-12-05 12:09:24.494097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.519 [2024-12-05 12:09:24.494104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.519 [2024-12-05 12:09:24.494112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.519 [2024-12-05 12:09:24.494118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.519 [2024-12-05 12:09:24.494126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.519 [2024-12-05 12:09:24.494133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.519 [2024-12-05 12:09:24.494142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.519 [2024-12-05 12:09:24.494151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.519 [2024-12-05 12:09:24.494159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.519 [2024-12-05 12:09:24.494166] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.519 [2024-12-05 12:09:24.494174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.519 [2024-12-05 12:09:24.494180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.519 [2024-12-05 12:09:24.494188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.519 [2024-12-05 12:09:24.494195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.519 [2024-12-05 12:09:24.494204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.519 [2024-12-05 12:09:24.494210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.520 [2024-12-05 12:09:24.494218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.520 [2024-12-05 12:09:24.494225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.520 [2024-12-05 12:09:24.494233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.520 [2024-12-05 12:09:24.494239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.520 [2024-12-05 12:09:24.494247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.520 [2024-12-05 12:09:24.494254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.520 [2024-12-05 12:09:24.494262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.520 [2024-12-05 12:09:24.494269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.520 [2024-12-05 12:09:24.494277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.520 [2024-12-05 12:09:24.494283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.520 [2024-12-05 12:09:24.494292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.520 [2024-12-05 12:09:24.494298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.520 [2024-12-05 12:09:24.494306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.520 [2024-12-05 12:09:24.494313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.520 [2024-12-05 12:09:24.494321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.520 [2024-12-05 12:09:24.494328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.520 [2024-12-05 12:09:24.494338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.520 [2024-12-05 12:09:24.494344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.520 [2024-12-05 12:09:24.494352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.520 [2024-12-05 12:09:24.494359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.520 [2024-12-05 12:09:24.494371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.520 [2024-12-05 12:09:24.494377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.520 [2024-12-05 12:09:24.494385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.520 [2024-12-05 12:09:24.494392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.520 [2024-12-05 12:09:24.494400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.520 [2024-12-05 12:09:24.494407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.520 [2024-12-05 12:09:24.494415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.520 [2024-12-05 12:09:24.494421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.520 [2024-12-05 12:09:24.494430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.520 [2024-12-05 12:09:24.494436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.520 [2024-12-05 12:09:24.494445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.520 [2024-12-05 12:09:24.494451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.520 [2024-12-05 12:09:24.494459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.520 [2024-12-05 12:09:24.494466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.520 [2024-12-05 12:09:24.494474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.520 [2024-12-05 12:09:24.494481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.520 [2024-12-05 12:09:24.494489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.520 [2024-12-05 12:09:24.494495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.520 [2024-12-05 12:09:24.494504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.520 [2024-12-05 12:09:24.494511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.520 [2024-12-05 12:09:24.494519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.520 [2024-12-05 12:09:24.494527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.520 [2024-12-05 12:09:24.494535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.520 [2024-12-05 12:09:24.494542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.520 [2024-12-05 12:09:24.494550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.520 [2024-12-05 12:09:24.494557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.520 [2024-12-05 12:09:24.494565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.520 [2024-12-05 12:09:24.494571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.520 [2024-12-05 12:09:24.494579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.520 [2024-12-05 12:09:24.494586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.520 [2024-12-05 12:09:24.494594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.520 [2024-12-05 12:09:24.494600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.520 [2024-12-05 12:09:24.494608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.520 [2024-12-05 12:09:24.494615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.520 [2024-12-05 12:09:24.494623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.520 [2024-12-05 12:09:24.494630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.520 [2024-12-05 12:09:24.494638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.520 [2024-12-05 12:09:24.494644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.520 [2024-12-05 12:09:24.494653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.520 [2024-12-05 12:09:24.494659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.520 [2024-12-05 12:09:24.494667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.520 [2024-12-05 12:09:24.494674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.520 [2024-12-05 12:09:24.494682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.520 [2024-12-05 12:09:24.494689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.520 [2024-12-05 12:09:24.494697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.520 [2024-12-05 12:09:24.494703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.520 [2024-12-05 12:09:24.494713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.520 [2024-12-05 12:09:24.494719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.520 [2024-12-05 12:09:24.494728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.520 [2024-12-05 12:09:24.494734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.520 [2024-12-05 12:09:24.494742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.520 [2024-12-05 12:09:24.494749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.520 [2024-12-05 12:09:24.494757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.520 [2024-12-05 12:09:24.494764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.520 [2024-12-05 12:09:24.494772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.520 [2024-12-05 12:09:24.494778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.520 [2024-12-05 12:09:24.494786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.520 [2024-12-05 12:09:24.494793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.520 [2024-12-05 12:09:24.494801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.521 [2024-12-05 12:09:24.494807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.521 [2024-12-05 12:09:24.494815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.521 [2024-12-05 12:09:24.494822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.521 [2024-12-05 12:09:24.494830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.521 [2024-12-05 12:09:24.494836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.521 [2024-12-05 12:09:24.494845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.521 [2024-12-05 12:09:24.494851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.521 [2024-12-05 12:09:24.494859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.521 [2024-12-05 12:09:24.494866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.521 [2024-12-05 12:09:24.494874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.521 [2024-12-05 12:09:24.494881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.521 [2024-12-05 12:09:24.494889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.521 [2024-12-05 12:09:24.494899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.521 [2024-12-05 12:09:24.494907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.521 [2024-12-05 12:09:24.494914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.521 [2024-12-05 12:09:24.494922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.521 [2024-12-05 12:09:24.494929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.521 [2024-12-05 12:09:24.494938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.521 [2024-12-05 12:09:24.494945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.521 [2024-12-05 12:09:24.494953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.521 [2024-12-05 12:09:24.494959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.521 [2024-12-05 12:09:24.494968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.521 [2024-12-05 12:09:24.494974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.521 [2024-12-05 12:09:24.494982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:50.521 [2024-12-05 12:09:24.494989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:50.521 [2024-12-05 12:09:24.494996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x291c1b0 is same with the state(6) to be set
00:25:50.521 [2024-12-05 12:09:24.495952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:25:50.521 [2024-12-05 12:09:24.495970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*:
[nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:25:50.521 [2024-12-05 12:09:24.495981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:25:50.521 [2024-12-05 12:09:24.495992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:25:50.521 [2024-12-05 12:09:24.496067] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:25:50.521 [2024-12-05 12:09:24.496082] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:25:50.521 [2024-12-05 12:09:24.496093] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:25:50.521 [2024-12-05 12:09:24.496161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:25:50.521 [2024-12-05 12:09:24.496173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:25:50.521 task offset: 28928 on job bdev=Nvme2n1 fails
00:25:50.521 
00:25:50.521 Latency(us)
00:25:50.521 [2024-12-05T11:09:24.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:50.521 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:50.521 Job: Nvme1n1 ended in about 0.91 seconds with error
00:25:50.521 Verification LBA range: start 0x0 length 0x400
00:25:50.521 Nvme1n1 : 0.91 211.34 13.21 70.45 0.00 224744.35 14667.58 219701.64
00:25:50.521 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:50.521 Job: Nvme2n1 ended in about 0.90 seconds with error
00:25:50.521 Verification LBA range: start 0x0 length 0x400
00:25:50.521 Nvme2n1 : 0.90 213.86 13.37 71.29 0.00 218112.00 17476.27 235679.94
00:25:50.521 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:50.521 Job: Nvme3n1 ended in about 0.91 seconds with error
00:25:50.521 Verification LBA range: start 0x0 length 0x400
00:25:50.521 Nvme3n1 : 0.91 210.80 13.18 70.27 0.00 217433.23 17975.59 215707.06
00:25:50.521 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:50.521 Job: Nvme4n1 ended in about 0.91 seconds with error
00:25:50.521 Verification LBA range: start 0x0 length 0x400
00:25:50.521 Nvme4n1 : 0.91 210.25 13.14 70.08 0.00 214133.88 13419.28 220700.28
00:25:50.521 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:50.521 Job: Nvme5n1 ended in about 0.90 seconds with error
00:25:50.521 Verification LBA range: start 0x0 length 0x400
00:25:50.521 Nvme5n1 : 0.90 213.47 13.34 71.16 0.00 206838.98 17101.78 226692.14
00:25:50.521 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:50.521 Job: Nvme6n1 ended in about 0.92 seconds with error
00:25:50.521 Verification LBA range: start 0x0 length 0x400
00:25:50.521 Nvme6n1 : 0.92 215.18 13.45 69.91 0.00 203015.96 15666.22 211712.49
00:25:50.521 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:50.521 Job: Nvme7n1 ended in about 0.92 seconds with error
00:25:50.521 Verification LBA range: start 0x0 length 0x400
00:25:50.521 Nvme7n1 : 0.92 209.27 13.08 69.76 0.00 203626.30 25715.08 199728.76
00:25:50.521 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:50.521 Job: Nvme8n1 ended in about 0.92 seconds with error
00:25:50.521 Verification LBA range: start 0x0 length 0x400
00:25:50.521 Nvme8n1 : 0.92 208.83 13.05 69.61 0.00 200201.02 24716.43 214708.42
00:25:50.521 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:50.521 Job: Nvme9n1 ended in about 0.90 seconds with error
00:25:50.521 Verification LBA range: start 0x0 length 0x400
00:25:50.521 Nvme9n1 : 0.90 213.06 13.32 71.02 0.00 191759.42 6241.52 213709.78
00:25:50.521 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:50.521 Job: Nvme10n1 ended in about 0.92 seconds with error
00:25:50.521 Verification LBA range: start 0x0 length 0x400
00:25:50.521 Nvme10n1 : 0.92 138.92 8.68 69.46 0.00 257321.77 16602.45 251658.24
00:25:50.521 [2024-12-05T11:09:24.717Z] ===================================================================================================================
00:25:50.521 [2024-12-05T11:09:24.717Z] Total : 2044.99 127.81 703.00 0.00 212581.50 6241.52 251658.24
00:25:50.521 [2024-12-05 12:09:24.528982] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:25:50.521 [2024-12-05 12:09:24.529028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:25:50.521 [2024-12-05 12:09:24.529379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.521 [2024-12-05 12:09:24.529399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x180c1c0 with addr=10.0.0.2, port=4420
00:25:50.521 [2024-12-05 12:09:24.529409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180c1c0 is same with the state(6) to be set
00:25:50.521 [2024-12-05 12:09:24.529579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.521 [2024-12-05 12:09:24.529590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1806490 with addr=10.0.0.2, port=4420
00:25:50.521 [2024-12-05 12:09:24.529598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1806490 is same with the state(6) to be set
00:25:50.521 [2024-12-05 12:09:24.529736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.521 [2024-12-05 12:09:24.529747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c37ce0 with addr=10.0.0.2, port=4420
00:25:50.521 [2024-12-05 12:09:24.529761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c37ce0 is same with the state(6) to be set
00:25:50.521 [2024-12-05 12:09:24.529896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.521 [2024-12-05 12:09:24.529907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2fec0 with addr=10.0.0.2, port=4420
00:25:50.521 [2024-12-05 12:09:24.529914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2fec0 is same with the state(6) to be set
00:25:50.521 [2024-12-05 12:09:24.531614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:25:50.521 [2024-12-05 12:09:24.531632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:25:50.521 [2024-12-05 12:09:24.531929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.521 [2024-12-05 12:09:24.531944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1720610 with addr=10.0.0.2, port=4420
00:25:50.521 [2024-12-05 12:09:24.531953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1720610 is same with the state(6) to be set
00:25:50.521 [2024-12-05 12:09:24.532171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.521 [2024-12-05 12:09:24.532182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c796e0 with addr=10.0.0.2, port=4420
00:25:50.521 [2024-12-05 12:09:24.532189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c796e0 is same with the state(6) to be set
00:25:50.522 [2024-12-05 12:09:24.532358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.522 [2024-12-05 12:09:24.532372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69e00 with addr=10.0.0.2, port=4420
00:25:50.522 [2024-12-05 12:09:24.532380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69e00 is same with the state(6) to be set
00:25:50.522 [2024-12-05 12:09:24.532393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x180c1c0 (9): Bad file descriptor
00:25:50.522 [2024-12-05 12:09:24.532405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1806490 (9): Bad file descriptor
00:25:50.522 [2024-12-05 12:09:24.532414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c37ce0 (9): Bad file descriptor
00:25:50.522 [2024-12-05 12:09:24.532423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2fec0 (9): Bad file descriptor
00:25:50.522 [2024-12-05 12:09:24.532452] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:25:50.522 [2024-12-05 12:09:24.532467] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress.
00:25:50.522 [2024-12-05 12:09:24.532478] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:25:50.522 [2024-12-05 12:09:24.532488] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:25:50.522 [2024-12-05 12:09:24.532498] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:25:50.522 [2024-12-05 12:09:24.532561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:25:50.522 [2024-12-05 12:09:24.532727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.522 [2024-12-05 12:09:24.532740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1800200 with addr=10.0.0.2, port=4420
00:25:50.522 [2024-12-05 12:09:24.532747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1800200 is same with the state(6) to be set
00:25:50.522 [2024-12-05 12:09:24.532909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.522 [2024-12-05 12:09:24.532920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c375f0 with addr=10.0.0.2, port=4420
00:25:50.522 [2024-12-05 12:09:24.532927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c375f0 is same with the state(6) to be set
00:25:50.522 [2024-12-05 12:09:24.532936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1720610 (9): Bad file descriptor
00:25:50.522 [2024-12-05 12:09:24.532945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c796e0 (9): Bad file descriptor
00:25:50.522 [2024-12-05 12:09:24.532954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69e00 (9): Bad file descriptor
00:25:50.522 [2024-12-05 12:09:24.532964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:25:50.522 [2024-12-05 12:09:24.532971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:25:50.522 [2024-12-05 12:09:24.532980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:25:50.522 [2024-12-05 12:09:24.532988] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:25:50.522 [2024-12-05 12:09:24.532997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:25:50.522 [2024-12-05 12:09:24.533003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:25:50.522 [2024-12-05 12:09:24.533010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:25:50.522 [2024-12-05 12:09:24.533016] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:25:50.522 [2024-12-05 12:09:24.533022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:25:50.522 [2024-12-05 12:09:24.533028] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:25:50.522 [2024-12-05 12:09:24.533035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:25:50.522 [2024-12-05 12:09:24.533041] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:25:50.522 [2024-12-05 12:09:24.533048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:25:50.522 [2024-12-05 12:09:24.533054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:25:50.522 [2024-12-05 12:09:24.533060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:25:50.522 [2024-12-05 12:09:24.533066] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:25:50.522 [2024-12-05 12:09:24.533270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.522 [2024-12-05 12:09:24.533281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c37200 with addr=10.0.0.2, port=4420
00:25:50.522 [2024-12-05 12:09:24.533289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c37200 is same with the state(6) to be set
00:25:50.522 [2024-12-05 12:09:24.533297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1800200 (9): Bad file descriptor
00:25:50.522 [2024-12-05 12:09:24.533306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c375f0 (9): Bad file descriptor
00:25:50.522 [2024-12-05 12:09:24.533313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:25:50.522 [2024-12-05 12:09:24.533319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:25:50.522 [2024-12-05 12:09:24.533329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:25:50.522 [2024-12-05 12:09:24.533336] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:25:50.522 [2024-12-05 12:09:24.533343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:25:50.522 [2024-12-05 12:09:24.533349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:25:50.522 [2024-12-05 12:09:24.533356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:25:50.522 [2024-12-05 12:09:24.533362] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:25:50.522 [2024-12-05 12:09:24.533374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:25:50.522 [2024-12-05 12:09:24.533380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:25:50.522 [2024-12-05 12:09:24.533387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:25:50.522 [2024-12-05 12:09:24.533393] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:25:50.522 [2024-12-05 12:09:24.533419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c37200 (9): Bad file descriptor
00:25:50.522 [2024-12-05 12:09:24.533428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:25:50.522 [2024-12-05 12:09:24.533434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:25:50.522 [2024-12-05 12:09:24.533441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:25:50.522 [2024-12-05 12:09:24.533447] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:25:50.522 [2024-12-05 12:09:24.533454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:25:50.522 [2024-12-05 12:09:24.533460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:25:50.522 [2024-12-05 12:09:24.533466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:25:50.522 [2024-12-05 12:09:24.533473] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:25:50.522 [2024-12-05 12:09:24.533496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:25:50.522 [2024-12-05 12:09:24.533503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:25:50.522 [2024-12-05 12:09:24.533509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:25:50.522 [2024-12-05 12:09:24.533515] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:25:50.781 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:25:51.716 12:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 145488 00:25:51.716 12:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:25:51.716 12:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 145488 00:25:51.716 12:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:25:51.716 12:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:51.716 12:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:25:51.717 12:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:51.717 12:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 145488 00:25:51.717 12:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:25:51.717 12:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:51.717 12:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:25:51.717 12:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:25:51.717 12:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:25:51.717 12:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:25:51.717 12:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:25:51.717 12:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:25:51.717 12:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:51.717 12:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:51.717 12:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:25:51.717 12:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # nvmfcleanup 00:25:51.717 12:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@99 -- # sync 00:25:51.717 12:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:25:51.717 12:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # set +e 00:25:51.717 12:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # for i in {1..20} 00:25:51.717 12:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:25:51.717 rmmod nvme_tcp 00:25:51.717 rmmod nvme_fabrics 00:25:51.717 rmmod nvme_keyring 00:25:51.717 12:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:25:51.717 12:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # set -e 00:25:51.717 12:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # return 0 00:25:51.717 12:09:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # '[' -n 145214 ']' 00:25:51.717 12:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@337 -- # killprocess 145214 00:25:51.717 12:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 145214 ']' 00:25:51.717 12:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 145214 00:25:51.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (145214) - No such process 00:25:51.717 12:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 145214 is not found' 00:25:51.717 Process with pid 145214 is not found 00:25:51.717 12:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:25:51.717 12:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # nvmf_fini 00:25:51.717 12:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@264 -- # local dev 00:25:51.717 12:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@267 -- # remove_target_ns 00:25:51.976 12:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:51.976 12:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:51.976 12:09:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:53.885 12:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@268 -- # delete_main_bridge 00:25:53.885 12:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:53.885 12:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@130 -- # return 0 00:25:53.885 12:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:25:53.885 12:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:25:53.885 12:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:25:53.885 12:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:25:53.885 12:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:25:53.885 12:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:25:53.885 12:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:25:53.885 12:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:25:53.885 12:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:25:53.885 12:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:25:53.885 12:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:25:53.885 12:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:25:53.885 12:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:25:53.885 12:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@222 -- # [[ -n 
'' ]] 00:25:53.885 12:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:25:53.885 12:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:25:53.885 12:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:25:53.885 12:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@41 -- # _dev=0 00:25:53.885 12:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@41 -- # dev_map=() 00:25:53.885 12:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@284 -- # iptr 00:25:53.885 12:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@542 -- # iptables-save 00:25:53.885 12:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:25:53.885 12:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@542 -- # iptables-restore 00:25:53.885 00:25:53.885 real 0m8.359s 00:25:53.885 user 0m21.534s 00:25:53.885 sys 0m1.396s 00:25:53.885 12:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:53.885 12:09:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:53.885 ************************************ 00:25:53.885 END TEST nvmf_shutdown_tc3 00:25:53.885 ************************************ 00:25:53.885 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:25:53.885 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:25:53.885 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 
00:25:53.885 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:53.885 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:53.885 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:53.885 ************************************ 00:25:53.885 START TEST nvmf_shutdown_tc4 00:25:53.885 ************************************ 00:25:53.885 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:25:53.885 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:25:53.885 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:53.885 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:25:53.885 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:53.885 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@296 -- # prepare_net_devs 00:25:53.885 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # local -g is_hw=no 00:25:53.885 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@260 -- # remove_target_ns 00:25:53.885 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:53.885 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:53.885 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:53.885 12:09:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:25:53.885 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:25:53.885 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # xtrace_disable 00:25:53.885 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:53.885 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:53.886 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@131 -- # pci_devs=() 00:25:53.886 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@131 -- # local -a pci_devs 00:25:53.886 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@132 -- # pci_net_devs=() 00:25:53.886 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:25:53.886 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@133 -- # pci_drivers=() 00:25:53.886 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@133 -- # local -A pci_drivers 00:25:53.886 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@135 -- # net_devs=() 00:25:53.886 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@135 -- # local -ga net_devs 00:25:53.886 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@136 -- # e810=() 00:25:53.886 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@136 -- # local -ga e810 00:25:53.886 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@137 -- # x722=() 
00:25:53.886 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@137 -- # local -ga x722 00:25:53.886 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@138 -- # mlx=() 00:25:53.886 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@138 -- # local -ga mlx 00:25:53.886 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:53.886 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:53.886 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:53.886 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:53.886 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:53.886 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:53.886 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:53.886 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:54.146 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:54.146 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:54.146 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@159 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:54.146 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:54.146 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:25:54.146 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:25:54.146 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:25:54.146 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:25:54.146 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:25:54.146 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:25:54.146 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:25:54.146 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:54.146 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:54.146 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:25:54.146 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:25:54.146 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:54.146 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:54.146 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:25:54.146 12:09:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:25:54.146 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:54.146 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:54.146 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:25:54.146 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:25:54.146 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:54.146 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:54.146 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:25:54.146 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:25:54.146 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:25:54.146 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@234 -- # [[ up == up ]] 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:54.147 Found net devices under 0000:86:00.0: cvl_0_0 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:54.147 Found net devices under 0000:86:00.1: cvl_0_1 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:25:54.147 
12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # is_hw=yes 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@257 -- # create_target_ns 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:54.147 
12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@27 -- # local -gA dev_map 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@28 -- # local -g _dev 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@44 -- # ips=() 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:25:54.147 12:09:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@11 -- # local val=167772161 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:25:54.147 10.0.0.1 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@205 -- # local 
-n ns=NVMF_TARGET_NS_CMD 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@11 -- # local val=167772162 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:25:54.147 10.0.0.2 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@217 
-- # ip link set cvl_0_0 up 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:54.147 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:54.148 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:25:54.148 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:25:54.148 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:25:54.148 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:25:54.148 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:25:54.148 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:25:54.148 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:25:54.148 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:25:54.148 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:25:54.148 12:09:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:54.148 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:54.148 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@38 -- # ping_ips 1 00:25:54.148 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:25:54.148 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:25:54.148 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:25:54.148 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:25:54.148 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:25:54.148 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:25:54.148 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:25:54.148 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:25:54.408 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:25:54.408 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@107 -- # local dev=initiator0 00:25:54.408 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:25:54.408 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:25:54.408 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@110 
-- # echo cvl_0_0 00:25:54.408 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:25:54.408 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:25:54.408 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:25:54.408 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:25:54.408 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:25:54.408 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:25:54.408 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:25:54.408 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:54.408 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:54.408 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:54.408 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:25:54.408 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:25:54.408 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:54.408 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.447 ms 00:25:54.408 00:25:54.408 --- 10.0.0.1 ping statistics --- 00:25:54.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.408 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:25:54.408 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:25:54.408 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:25:54.408 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:54.408 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:54.408 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:54.408 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:54.408 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:25:54.408 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@107 -- # local dev=target0 00:25:54.408 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:25:54.408 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:25:54.408 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:25:54.408 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:25:54.408 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # eval 'ip 
netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:25:54.408 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:25:54.408 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:25:54.408 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:25:54.408 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:25:54.408 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:25:54.408 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:25:54.408 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:25:54.408 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:25:54.408 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:25:54.408 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:54.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:25:54.408 00:25:54.408 --- 10.0.0.2 ping statistics --- 00:25:54.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.408 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:25:54.408 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@98 -- # (( pair++ )) 00:25:54.408 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:25:54.408 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@270 -- # return 0 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:25:54.409 12:09:28 
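The `val_to_ip` calls traced above turn a packed 32-bit integer (e.g. `167772161`, which is `0x0A000001`) into a dotted-quad address via `printf '%u.%u.%u.%u\n'`. A minimal standalone sketch of that conversion, reconstructed from the trace (the byte-extraction arithmetic is an assumption; the log only shows the already-split octets being passed to `printf`):

```shell
# Hypothetical reconstruction of nvmf/setup.sh's val_to_ip:
# split a 32-bit integer into four octets, most significant first.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 0x0A000001 -> 10.0.0.1
val_to_ip 167772162   # 0x0A000002 -> 10.0.0.2
```

This matches the trace, where `ip_pool += 2` hands consecutive integers to consecutive initiator/target device pairs.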
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@107 -- # local dev=initiator0 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:25:54.409 12:09:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@107 -- # local dev=initiator1 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # return 1 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # dev= 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@169 -- # return 0 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:54.409 12:09:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@107 -- # local dev=target0 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:25:54.409 
12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # get_net_dev target1 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@107 -- # local dev=target1 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # return 1 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # dev= 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@169 -- # return 0 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:25:54.409 12:09:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # nvmfpid=146785 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@329 -- # waitforlisten 146785 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 146785 ']' 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:54.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:54.409 12:09:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:54.409 [2024-12-05 12:09:28.545043] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:25:54.409 [2024-12-05 12:09:28.545087] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:54.669 [2024-12-05 12:09:28.625500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:54.669 [2024-12-05 12:09:28.667001] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:54.669 [2024-12-05 12:09:28.667042] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:54.669 [2024-12-05 12:09:28.667048] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:54.669 [2024-12-05 12:09:28.667055] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:54.669 [2024-12-05 12:09:28.667060] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:54.669 [2024-12-05 12:09:28.668639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:54.669 [2024-12-05 12:09:28.668752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:54.669 [2024-12-05 12:09:28.668859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:54.669 [2024-12-05 12:09:28.668860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:55.238 12:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:55.238 12:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:25:55.238 12:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:55.238 12:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:55.238 12:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:55.238 12:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:55.238 12:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:55.238 12:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.238 12:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:55.238 [2024-12-05 12:09:29.420625] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:55.238 12:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.238 12:09:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:55.238 12:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:55.238 12:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:55.238 12:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:55.238 12:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:55.498 12:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:55.498 12:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:55.498 12:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:55.498 12:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:55.498 12:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:55.498 12:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:55.498 12:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:55.498 12:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:55.498 12:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:55.498 12:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:25:55.498 12:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:55.498 12:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:55.498 12:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:55.498 12:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:55.498 12:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:55.498 12:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:55.498 12:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:55.498 12:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:55.498 12:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:55.498 12:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:55.498 12:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:25:55.498 12:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.498 12:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:55.498 Malloc1 00:25:55.498 [2024-12-05 12:09:29.534864] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:55.498 Malloc2 00:25:55.498 Malloc3 00:25:55.498 Malloc4 00:25:55.498 Malloc5 00:25:55.758 Malloc6 00:25:55.758 Malloc7 00:25:55.758 Malloc8 00:25:55.758 Malloc9 
00:25:55.758 Malloc10 00:25:55.758 12:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.758 12:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:55.758 12:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:55.758 12:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:56.018 12:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=147062 00:25:56.018 12:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:25:56.018 12:09:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:25:56.018 [2024-12-05 12:09:30.046309] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:26:01.305 12:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:26:01.305 12:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 146785
00:26:01.305 12:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 146785 ']'
00:26:01.305 12:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 146785
00:26:01.305 12:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname
00:26:01.305 12:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:01.305 12:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 146785
00:26:01.305 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:26:01.305 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:26:01.305 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 146785'
00:26:01.305 killing process with pid 146785
00:26:01.305 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 146785
00:26:01.305 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 146785
00:26:01.305 [2024-12-05 12:09:35.041528] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd050 is same with the state(6) to be set
00:26:01.305 [previous message repeated 9 more times for tqpair=0x9fd050, 12:09:35.041594-.041674]
00:26:01.305 [2024-12-05 12:09:35.042324] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd520 is same with the state(6) to be set
00:26:01.305 [previous message repeated 5 more times for tqpair=0x9fd520, 12:09:35.042357-.042393]
00:26:01.305 [2024-12-05 12:09:35.043091] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fd9f0 is same with the state(6) to be set
00:26:01.305 [previous message repeated 7 more times for tqpair=0x9fd9f0, 12:09:35.043118-.043161]
00:26:01.305 [2024-12-05 12:09:35.044877] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79e260 is same with the state(6) to be set
00:26:01.305 [previous message repeated 14 more times for tqpair=0x79e260, 12:09:35.044902-.045038]
00:26:01.305 [2024-12-05 12:09:35.045728] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79e750 is same with the state(6) to be set
00:26:01.305 [2024-12-05 12:09:35.046309] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79dd90 is same with the state(6) to be set
00:26:01.306 [previous message repeated 5 more times for tqpair=0x79dd90, 12:09:35.046332-.046363]
00:26:01.306 Write completed with error (sct=0, sc=8)
00:26:01.306 starting I/O failed: -6
00:26:01.306 [the two messages above alternate for each outstanding write throughout the shutdown; repeats trimmed]
00:26:01.306 [2024-12-05 12:09:35.047118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:01.306 [2024-12-05 12:09:35.047999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:01.307 [2024-12-05 12:09:35.049054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:01.307 [2024-12-05 12:09:35.050504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:01.307 NVMe io qpair process completion error
00:26:01.308 [2024-12-05 12:09:35.051467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:01.308 [2024-12-05 12:09:35.052351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:01.309 [2024-12-05 12:09:35.053403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:01.309 [2024-12-05 12:09:35.055275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:01.309 NVMe io qpair process completion error
00:26:01.309 Write completed with error
(sct=0, sc=8) 00:26:01.309 starting I/O failed: -6 00:26:01.309 Write completed with error (sct=0, sc=8) 00:26:01.309 Write completed with error (sct=0, sc=8) 00:26:01.309 Write completed with error (sct=0, sc=8) 00:26:01.309 Write completed with error (sct=0, sc=8) 00:26:01.309 starting I/O failed: -6 00:26:01.309 Write completed with error (sct=0, sc=8) 00:26:01.309 Write completed with error (sct=0, sc=8) 00:26:01.309 Write completed with error (sct=0, sc=8) 00:26:01.309 Write completed with error (sct=0, sc=8) 00:26:01.309 starting I/O failed: -6 00:26:01.309 Write completed with error (sct=0, sc=8) 00:26:01.309 Write completed with error (sct=0, sc=8) 00:26:01.309 Write completed with error (sct=0, sc=8) 00:26:01.309 Write completed with error (sct=0, sc=8) 00:26:01.309 starting I/O failed: -6 00:26:01.309 Write completed with error (sct=0, sc=8) 00:26:01.309 Write completed with error (sct=0, sc=8) 00:26:01.309 Write completed with error (sct=0, sc=8) 00:26:01.309 Write completed with error (sct=0, sc=8) 00:26:01.309 starting I/O failed: -6 00:26:01.309 Write completed with error (sct=0, sc=8) 00:26:01.309 Write completed with error (sct=0, sc=8) 00:26:01.309 Write completed with error (sct=0, sc=8) 00:26:01.309 Write completed with error (sct=0, sc=8) 00:26:01.309 starting I/O failed: -6 00:26:01.309 Write completed with error (sct=0, sc=8) 00:26:01.309 Write completed with error (sct=0, sc=8) 00:26:01.309 Write completed with error (sct=0, sc=8) 00:26:01.309 Write completed with error (sct=0, sc=8) 00:26:01.309 starting I/O failed: -6 00:26:01.309 Write completed with error (sct=0, sc=8) 00:26:01.309 Write completed with error (sct=0, sc=8) 00:26:01.309 [2024-12-05 12:09:35.056332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:01.309 starting I/O failed: -6 00:26:01.309 starting I/O failed: -6 00:26:01.310 starting I/O failed: -6 
00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write 
completed with error (sct=0, sc=8) 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 [2024-12-05 12:09:35.057271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write 
completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 
00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 [2024-12-05 12:09:35.058314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, 
sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.310 starting I/O failed: -6 00:26:01.310 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error 
(sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with 
error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 [2024-12-05 12:09:35.060444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.311 NVMe io qpair process completion error 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write 
completed with error (sct=0, sc=8) 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 [2024-12-05 12:09:35.061489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 
00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.311 starting I/O failed: -6 00:26:01.311 Write completed with error (sct=0, sc=8) 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 Write completed with error (sct=0, sc=8) 
00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 [2024-12-05 12:09:35.062398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 Write completed with error (sct=0, sc=8) 
00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 
00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 [2024-12-05 12:09:35.063415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: -6 00:26:01.312 Write completed with error (sct=0, sc=8) 00:26:01.312 starting I/O failed: 
-6
00:26:01.312 Write completed with error (sct=0, sc=8)
00:26:01.312 starting I/O failed: -6
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:26:01.313 [2024-12-05 12:09:35.065348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:01.313 NVMe io qpair process completion error
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:26:01.313 [2024-12-05 12:09:35.066419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:26:01.313 [2024-12-05 12:09:35.067305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:26:01.314 [2024-12-05 12:09:35.068314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:26:01.314 [2024-12-05 12:09:35.071032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:01.314 NVMe io qpair process completion error
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:26:01.315 [2024-12-05 12:09:35.072038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:26:01.315 [2024-12-05 12:09:35.072934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:26:01.315 [2024-12-05 12:09:35.073931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:26:01.316 [2024-12-05 12:09:35.076460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:01.316 NVMe io qpair process completion error
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:26:01.316 [2024-12-05 12:09:35.077443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:26:01.316 starting I/O
failed: -6 00:26:01.316 Write completed with error (sct=0, sc=8) 00:26:01.316 Write completed with error (sct=0, sc=8) 00:26:01.316 starting I/O failed: -6 00:26:01.316 Write completed with error (sct=0, sc=8) 00:26:01.316 Write completed with error (sct=0, sc=8) 00:26:01.316 starting I/O failed: -6 00:26:01.316 Write completed with error (sct=0, sc=8) 00:26:01.316 Write completed with error (sct=0, sc=8) 00:26:01.316 starting I/O failed: -6 00:26:01.316 Write completed with error (sct=0, sc=8) 00:26:01.316 Write completed with error (sct=0, sc=8) 00:26:01.316 starting I/O failed: -6 00:26:01.316 Write completed with error (sct=0, sc=8) 00:26:01.316 Write completed with error (sct=0, sc=8) 00:26:01.316 starting I/O failed: -6 00:26:01.316 Write completed with error (sct=0, sc=8) 00:26:01.316 Write completed with error (sct=0, sc=8) 00:26:01.316 starting I/O failed: -6 00:26:01.316 Write completed with error (sct=0, sc=8) 00:26:01.316 Write completed with error (sct=0, sc=8) 00:26:01.316 starting I/O failed: -6 00:26:01.316 Write completed with error (sct=0, sc=8) 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 Write completed with error (sct=0, 
sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 [2024-12-05 12:09:35.078316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, 
sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O 
failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 [2024-12-05 12:09:35.079326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting 
I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 
starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.317 starting I/O failed: -6 00:26:01.317 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 
00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 [2024-12-05 12:09:35.080857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:01.318 NVMe io qpair process completion error 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write 
completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 [2024-12-05 12:09:35.081890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write 
completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error 
(sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 [2024-12-05 12:09:35.082767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error 
(sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 Write completed with error (sct=0, sc=8) 00:26:01.318 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting 
I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 [2024-12-05 12:09:35.083803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 
starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 
00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, sc=8) 00:26:01.319 starting I/O failed: -6 00:26:01.319 Write completed with error (sct=0, 
sc=8) 00:26:01.319 starting I/O failed: -6
00:26:01.319 [repeated entries elided: Write completed with error (sct=0, sc=8) / starting I/O failed: -6]
00:26:01.319 [2024-12-05 12:09:35.086995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:01.319 NVMe io qpair process completion error
00:26:01.319 [repeated entries elided: Write completed with error (sct=0, sc=8) / starting I/O failed: -6]
00:26:01.320 [2024-12-05 12:09:35.089655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:01.320 starting I/O failed: -6
00:26:01.320 [repeated entries elided: starting I/O failed: -6 / Write completed with error (sct=0, sc=8)]
00:26:01.321 [2024-12-05 12:09:35.095446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:01.321 NVMe io qpair process completion error
00:26:01.321 [repeated entries elided: Write completed with error (sct=0, sc=8) / starting I/O failed: -6]
00:26:01.321 [2024-12-05 12:09:35.096468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:01.321 [repeated entries elided: Write completed with error (sct=0, sc=8) / starting I/O failed: -6]
00:26:01.322 [2024-12-05 12:09:35.097323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:01.322 [repeated entries elided: Write completed with error (sct=0, sc=8) / starting I/O failed: -6]
00:26:01.322 [2024-12-05 12:09:35.098313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:01.322 [repeated entries elided: Write completed with error (sct=0, sc=8) / starting I/O failed: -6]
00:26:01.322 [2024-12-05 12:09:35.100804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:01.322 NVMe io qpair process completion error
00:26:01.322 Initializing NVMe Controllers
00:26:01.322 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:26:01.323 Controller IO queue size 128, less than required.
00:26:01.323 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:01.323 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:26:01.323 Controller IO queue size 128, less than required.
00:26:01.323 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:01.323 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:26:01.323 Controller IO queue size 128, less than required.
00:26:01.323 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:01.323 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:26:01.323 Controller IO queue size 128, less than required.
00:26:01.323 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:01.323 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:26:01.323 Controller IO queue size 128, less than required.
00:26:01.323 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:01.323 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:01.323 Controller IO queue size 128, less than required.
00:26:01.323 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:01.323 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:26:01.323 Controller IO queue size 128, less than required.
00:26:01.323 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:01.323 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:26:01.323 Controller IO queue size 128, less than required.
00:26:01.323 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:01.323 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:26:01.323 Controller IO queue size 128, less than required.
00:26:01.323 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:01.323 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:26:01.323 Controller IO queue size 128, less than required.
00:26:01.323 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:01.323 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:26:01.323 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:26:01.323 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:26:01.323 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:26:01.323 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:26:01.323 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:26:01.323 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:26:01.323 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:26:01.323 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:26:01.323 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:26:01.323 Initialization complete. Launching workers.
00:26:01.323 ========================================================
00:26:01.323 Latency(us)
00:26:01.323 Device Information : IOPS MiB/s Average min max
00:26:01.323 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2171.46 93.31 58950.28 914.29 108559.58
00:26:01.323 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2158.05 92.73 59328.10 887.05 140038.86
00:26:01.323 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2241.30 96.31 57151.48 633.26 115444.24
00:26:01.323 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2236.61 96.10 57332.23 898.35 110045.49
00:26:01.323 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2194.88 94.31 57765.21 943.05 109717.28
00:26:01.323 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2209.36 94.93 57395.71 911.89 109023.03
00:26:01.323 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2210.21 94.97 57388.57 663.17 108284.99
00:26:01.323 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2222.78 95.51 57081.80 748.00 106897.31
00:26:01.323 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2224.48 95.58 57053.25 901.82 107519.68
00:26:01.323 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2152.52 92.49 58984.73 943.25 105497.41
00:26:01.323 ========================================================
00:26:01.323 Total : 22021.66 946.24 57832.22 633.26 140038.86
00:26:01.323
00:26:01.323 [2024-12-05 12:09:35.103792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fdef0 is same with the state(6) to be set
00:26:01.323 [2024-12-05 12:09:35.103839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fe740 is same with the state(6) to be set
00:26:01.323 [2024-12-05 12:09:35.103869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fd560 is same with the state(6) to be set
00:26:01.323 [2024-12-05 12:09:35.103898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fea70 is same with the state(6) to be set
00:26:01.323 [2024-12-05 12:09:35.103927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ff900 is same with the state(6) to be set
00:26:01.323 [2024-12-05 12:09:35.103955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ff720 is same with the state(6) to be set
00:26:01.323 [2024-12-05 12:09:35.103983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fe410 is same with the state(6) to be set
00:26:01.323 [2024-12-05 12:09:35.104011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fdbc0 is same with the state(6) to be set
00:26:01.323 [2024-12-05 12:09:35.104039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fd890 is same with the state(6) to be set
00:26:01.323 [2024-12-05 12:09:35.104068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ffae0 is same with the state(6) to be set
00:26:01.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:26:01.323 12:09:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:26:02.260 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 147062
00:26:02.260 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:26:02.260 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 147062
00:26:02.260 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 --
common/autotest_common.sh@640 -- # local arg=wait
00:26:02.260 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:02.260 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:26:02.260 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:02.260 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 147062
00:26:02.260 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:26:02.260 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:26:02.260 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:26:02.260 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:26:02.260 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:26:02.260 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:26:02.260 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:26:02.260 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:26:02.260 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:26:02.261 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@335 -- # nvmfcleanup
00:26:02.261 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@99 -- # sync
00:26:02.261 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:26:02.261 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@102 -- # set +e
00:26:02.261 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@103 -- # for i in {1..20}
00:26:02.261 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
00:26:02.261 rmmod nvme_tcp
00:26:02.520 rmmod nvme_fabrics
00:26:02.520 rmmod nvme_keyring
00:26:02.520 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics
00:26:02.520 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # set -e
00:26:02.520 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # return 0
00:26:02.520 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # '[' -n 146785 ']'
00:26:02.520 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@337 -- # killprocess 146785
00:26:02.520 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 146785 ']'
00:26:02.520 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 146785
00:26:02.520 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (146785) - No such process
00:26:02.520 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 146785 is not found'
00:26:02.520 Process with pid 146785 is not found
00:26:02.520
12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@339 -- # '[' '' == iso ']'
00:26:02.520 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@342 -- # nvmf_fini
00:26:02.520 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@264 -- # local dev
00:26:02.520 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@267 -- # remove_target_ns
00:26:02.520 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns
00:26:02.520 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null'
00:26:02.520 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_target_ns
00:26:04.426 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@268 -- # delete_main_bridge
00:26:04.426 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]]
00:26:04.426 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@130 -- # return 0
00:26:04.426 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}"
00:26:04.426 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]]
00:26:04.426 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@275 -- # (( 4 == 3 ))
00:26:04.426 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0
00:26:04.426 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns=
00:26:04.427 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@222 -- # [[ -n '' ]]
00:26:04.427 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0'
00:26:04.427 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0
00:26:04.427 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}"
00:26:04.427 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]]
00:26:04.427 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@275 -- # (( 4 == 3 ))
00:26:04.427 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1
00:26:04.427 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns=
00:26:04.427 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@222 -- # [[ -n '' ]]
00:26:04.427 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1'
00:26:04.427 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1
00:26:04.427 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@283 -- # reset_setup_interfaces
00:26:04.427 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@41 -- # _dev=0
00:26:04.427 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@41 -- # dev_map=()
00:26:04.427 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@284 -- # iptr
00:26:04.427 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@542 --
# iptables-save
00:26:04.427 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF
00:26:04.427 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@542 -- # iptables-restore
00:26:04.427
00:26:04.427 real 0m10.524s
00:26:04.427 user 0m27.646s
00:26:04.427 sys 0m5.238s
00:26:04.427 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:04.427 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:26:04.427 ************************************
00:26:04.427 END TEST nvmf_shutdown_tc4
00:26:04.427 ************************************
00:26:04.686 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:26:04.686
00:26:04.686 real 0m43.057s
00:26:04.686 user 1m48.460s
00:26:04.686 sys 0m14.270s
00:26:04.686 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:04.686 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:26:04.686 ************************************
00:26:04.686 END TEST nvmf_shutdown
00:26:04.686 ************************************
00:26:04.686 12:09:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:26:04.686 12:09:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:26:04.686 12:09:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:04.686 12:09:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:26:04.686 ************************************
00:26:04.686 START TEST nvmf_nsid
00:26:04.686 ************************************
00:26:04.686 12:09:38
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:26:04.686 * Looking for test storage... 00:26:04.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:04.686 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:04.686 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:26:04.686 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:04.686 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:04.686 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:04.686 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:04.686 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:04.686 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:26:04.686 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:26:04.686 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:26:04.686 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:26:04.686 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:26:04.687 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:26:04.687 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:26:04.687 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:04.687 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" 
in 00:26:04.687 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:26:04.687 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:04.687 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:04.687 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:26:04.687 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:26:04.687 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:04.687 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:26:04.687 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:26:04.687 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:26:04.687 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:26:04.687 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:04.687 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:26:04.947 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:26:04.947 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:04.947 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:04.947 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:26:04.947 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:04.947 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:04.947 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:26:04.947 --rc genhtml_branch_coverage=1 00:26:04.947 --rc genhtml_function_coverage=1 00:26:04.947 --rc genhtml_legend=1 00:26:04.947 --rc geninfo_all_blocks=1 00:26:04.947 --rc geninfo_unexecuted_blocks=1 00:26:04.947 00:26:04.947 ' 00:26:04.947 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:04.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.947 --rc genhtml_branch_coverage=1 00:26:04.947 --rc genhtml_function_coverage=1 00:26:04.947 --rc genhtml_legend=1 00:26:04.947 --rc geninfo_all_blocks=1 00:26:04.947 --rc geninfo_unexecuted_blocks=1 00:26:04.947 00:26:04.947 ' 00:26:04.947 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:04.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.947 --rc genhtml_branch_coverage=1 00:26:04.947 --rc genhtml_function_coverage=1 00:26:04.947 --rc genhtml_legend=1 00:26:04.947 --rc geninfo_all_blocks=1 00:26:04.947 --rc geninfo_unexecuted_blocks=1 00:26:04.947 00:26:04.947 ' 00:26:04.947 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:04.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.947 --rc genhtml_branch_coverage=1 00:26:04.947 --rc genhtml_function_coverage=1 00:26:04.947 --rc genhtml_legend=1 00:26:04.947 --rc geninfo_all_blocks=1 00:26:04.947 --rc geninfo_unexecuted_blocks=1 00:26:04.947 00:26:04.947 ' 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:04.948 
12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:04.948 12:09:38 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@50 -- # : 0 00:26:04.948 12:09:38 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:26:04.948 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@54 -- # have_pci_nics=0 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:04.948 12:09:38 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@296 -- # prepare_net_devs 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # local -g is_hw=no 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@260 -- # remove_target_ns 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # xtrace_disable 00:26:04.948 12:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@131 -- # pci_devs=() 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@131 -- # local -a pci_devs 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@132 -- # pci_net_devs=() 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@133 -- # pci_drivers=() 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@133 -- # local -A pci_drivers 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@135 -- # net_devs=() 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid 
-- nvmf/common.sh@135 -- # local -ga net_devs 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@136 -- # e810=() 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@136 -- # local -ga e810 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@137 -- # x722=() 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@137 -- # local -ga x722 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@138 -- # mlx=() 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@138 -- # local -ga mlx 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:11.522 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:11.522 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:11.522 12:09:44 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:26:11.522 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:11.523 Found net devices under 0000:86:00.0: cvl_0_0 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:11.523 12:09:44 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:11.523 Found net devices under 0000:86:00.1: cvl_0_1 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # is_hw=yes 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@257 -- # create_target_ns 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@27 -- # local -gA dev_map 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@28 -- # local -g _dev 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:11.523 12:09:44 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@44 -- # ips=() 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:26:11.523 12:09:44 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@11 -- # local val=167772161 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:26:11.523 10.0.0.1 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:26:11.523 12:09:44 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@11 -- # local val=167772162 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:26:11.523 10.0.0.2 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@38 -- # ping_ips 1 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:26:11.523 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:26:11.524 12:09:44 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@107 -- # local dev=initiator0 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:11.524 12:09:44 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:26:11.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:11.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.466 ms 00:26:11.524 00:26:11.524 --- 10.0.0.1 ping statistics --- 00:26:11.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:11.524 rtt min/avg/max/mdev = 0.466/0.466/0.466/0.000 ms 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # get_net_dev target0 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@107 -- # local dev=target0 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid 
-- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:26:11.524 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:11.524 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:26:11.524 00:26:11.524 --- 10.0.0.2 ping statistics --- 00:26:11.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:11.524 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # (( pair++ )) 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@270 -- # return 0 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@107 -- # local dev=initiator0 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 
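The 10.0.0.1/10.0.0.2 addresses pinged above are derived from the `ip_pool` counter (0x0a000001) by the `val_to_ip` helper. The trace only shows the final `printf '%u.%u.%u.%u\n' 10 0 0 1`; the shift/mask derivation of the octets below is an assumption about how the helper computes them, not the verbatim upstream implementation:

```shell
# Sketch of val_to_ip: split a 32-bit value (e.g. 167772161 == 0x0a000001)
# into dotted-quad octets. Octet math is assumed; the log shows only the printf.
val_to_ip() {
	local val=$1
	printf '%u.%u.%u.%u\n' \
		$(( (val >> 24) & 0xff )) \
		$(( (val >> 16) & 0xff )) \
		$(( (val >> 8)  & 0xff )) \
		$((  val        & 0xff ))
}

val_to_ip 167772161   # -> 10.0.0.1 (set on cvl_0_0)
val_to_ip 167772162   # -> 10.0.0.2 (set on cvl_0_1 inside nvmf_ns_spdk)
```

Each initiator/target pair consumes two consecutive pool values, which matches the `(( _dev++, ip_pool += 2 ))` step in the trace.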
00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@107 -- # local dev=initiator1 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/setup.sh@109 -- # return 1 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # dev= 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@169 -- # return 0 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # get_net_dev target0 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@107 -- # local dev=target0 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:26:11.524 12:09:44 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # get_net_dev target1 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@107 -- # local dev=target1 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:26:11.524 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:26:11.525 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # return 1 00:26:11.525 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # dev= 00:26:11.525 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@169 -- # return 0 00:26:11.525 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:26:11.525 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@308 -- # 
NVMF_TRANSPORT_OPTS='-t tcp' 00:26:11.525 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:26:11.525 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:26:11.525 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:11.525 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:26:11.525 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:26:11.525 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:26:11.525 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:26:11.525 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:11.525 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:11.525 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # nvmfpid=151584 00:26:11.525 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:26:11.525 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@329 -- # waitforlisten 151584 00:26:11.525 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 151584 ']' 00:26:11.525 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:11.525 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:11.525 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:11.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:11.525 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:11.525 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:11.525 [2024-12-05 12:09:45.038786] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:26:11.525 [2024-12-05 12:09:45.038835] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:11.525 [2024-12-05 12:09:45.118170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.525 [2024-12-05 12:09:45.158656] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:11.525 [2024-12-05 12:09:45.158690] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:11.525 [2024-12-05 12:09:45.158697] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:11.525 [2024-12-05 12:09:45.158703] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:11.525 [2024-12-05 12:09:45.158707] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
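The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from `waitforlisten`, which polls with `max_retries=100`. A minimal sketch of that retry pattern, under the assumption that readiness is detected by the RPC socket appearing (the real helper in autotest_common.sh also checks the pid and issues an rpc.py probe):

```shell
# Hypothetical minimal poll-for-socket helper; name and interval are assumptions.
waitforsocket() {
	local sock=$1 retries=${2:-100}
	while (( retries-- > 0 )); do
		# -S is true once the target process has bound its UNIX domain socket
		[[ -S $sock ]] && return 0
		sleep 0.1
	done
	return 1
}
```

Usage would be e.g. `waitforsocket /var/tmp/spdk.sock` (or `/var/tmp/tgt2.sock` for the second target) before sending any RPCs.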
00:26:11.525 [2024-12-05 12:09:45.159273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:11.525 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:11.525 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:26:11.525 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:26:11.525 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:11.525 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:11.525 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:11.525 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:11.525 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=151769 00:26:11.525 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:26:11.525 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:26:11.525 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:26:11.525 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:26:11.525 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=2fe89c29-0a31-4002-be91-f3362bb740e9 00:26:11.525 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:26:11.525 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=2fefad7f-8f8d-47d4-a770-86ab0509a6eb 00:26:11.525 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:26:11.525 12:09:45 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=8d2d4572-a848-4603-9565-1b96253ca674 00:26:11.525 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:26:11.525 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.525 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:11.525 null0 00:26:11.525 null1 00:26:11.525 null2 00:26:11.525 [2024-12-05 12:09:45.336063] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:26:11.525 [2024-12-05 12:09:45.336106] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151769 ] 00:26:11.525 [2024-12-05 12:09:45.338866] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:11.525 [2024-12-05 12:09:45.363049] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:11.525 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.525 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 151769 /var/tmp/tgt2.sock 00:26:11.525 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 151769 ']' 00:26:11.525 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:26:11.525 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:11.525 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
00:26:11.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:26:11.525 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:11.525 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:11.525 [2024-12-05 12:09:45.411603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.525 [2024-12-05 12:09:45.457583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:11.525 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:11.525 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:26:11.525 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:26:11.806 [2024-12-05 12:09:45.976301] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:11.806 [2024-12-05 12:09:45.992415] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:26:12.123 nvme0n1 nvme0n2 00:26:12.123 nvme1n1 00:26:12.123 12:09:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:26:12.123 12:09:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:26:12.123 12:09:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:13.104 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:26:13.104 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:26:13.104 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:26:13.104 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:26:13.104 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:26:13.104 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:26:13.104 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:26:13.104 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:26:13.104 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:13.104 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:26:13.104 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:26:13.104 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:26:13.104 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:26:14.076 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:14.076 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:26:14.076 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:26:14.076 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:26:14.076 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:26:14.076 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 2fe89c29-0a31-4002-be91-f3362bb740e9 00:26:14.076 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@538 -- # tr -d - 00:26:14.076 12:09:48 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:26:14.076 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:26:14.076 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:26:14.076 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:26:14.076 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=2fe89c290a314002be91f3362bb740e9 00:26:14.076 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 2FE89C290A314002BE91F3362BB740E9 00:26:14.076 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 2FE89C290A314002BE91F3362BB740E9 == \2\F\E\8\9\C\2\9\0\A\3\1\4\0\0\2\B\E\9\1\F\3\3\6\2\B\B\7\4\0\E\9 ]] 00:26:14.076 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:26:14.076 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:26:14.076 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:14.076 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:26:14.077 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:26:14.077 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:26:14.077 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:26:14.077 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 2fefad7f-8f8d-47d4-a770-86ab0509a6eb 00:26:14.077 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@538 -- # tr -d - 00:26:14.077 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:26:14.077 12:09:48 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:26:14.077 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:26:14.077 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:26:14.077 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=2fefad7f8f8d47d4a77086ab0509a6eb 00:26:14.077 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 2FEFAD7F8F8D47D4A77086AB0509A6EB 00:26:14.077 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 2FEFAD7F8F8D47D4A77086AB0509A6EB == \2\F\E\F\A\D\7\F\8\F\8\D\4\7\D\4\A\7\7\0\8\6\A\B\0\5\0\9\A\6\E\B ]] 00:26:14.077 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:26:14.077 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:26:14.077 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:14.077 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:26:14.336 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:26:14.336 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:26:14.336 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:26:14.336 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 8d2d4572-a848-4603-9565-1b96253ca674 00:26:14.336 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@538 -- # tr -d - 00:26:14.336 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:26:14.336 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:26:14.336 
12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:26:14.336 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:26:14.336 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=8d2d4572a848460395651b96253ca674 00:26:14.336 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 8D2D4572A848460395651B96253CA674 00:26:14.336 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 8D2D4572A848460395651B96253CA674 == \8\D\2\D\4\5\7\2\A\8\4\8\4\6\0\3\9\5\6\5\1\B\9\6\2\5\3\C\A\6\7\4 ]] 00:26:14.336 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:26:14.336 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:26:14.336 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:26:14.336 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 151769 00:26:14.336 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 151769 ']' 00:26:14.336 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 151769 00:26:14.336 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:26:14.336 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:14.336 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 151769 00:26:14.596 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:14.596 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:14.596 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 151769' 00:26:14.596 killing process with pid 151769 00:26:14.596 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 151769 00:26:14.596 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 151769 00:26:14.855 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:26:14.855 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@335 -- # nvmfcleanup 00:26:14.855 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@99 -- # sync 00:26:14.855 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:26:14.855 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@102 -- # set +e 00:26:14.855 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@103 -- # for i in {1..20} 00:26:14.855 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:26:14.855 rmmod nvme_tcp 00:26:14.855 rmmod nvme_fabrics 00:26:14.855 rmmod nvme_keyring 00:26:14.855 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:26:14.855 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # set -e 00:26:14.855 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # return 0 00:26:14.855 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # '[' -n 151584 ']' 00:26:14.855 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@337 -- # killprocess 151584 00:26:14.855 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 151584 ']' 00:26:14.855 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 151584 00:26:14.855 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:26:14.855 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:14.855 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 151584 00:26:14.855 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:14.856 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:14.856 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 151584' 00:26:14.856 killing process with pid 151584 00:26:14.856 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 151584 00:26:14.856 12:09:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 151584 00:26:15.115 12:09:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:26:15.115 12:09:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@342 -- # nvmf_fini 00:26:15.115 12:09:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@264 -- # local dev 00:26:15.115 12:09:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@267 -- # remove_target_ns 00:26:15.115 12:09:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:15.115 12:09:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:15.115 12:09:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:17.033 12:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@268 -- # delete_main_bridge 00:26:17.033 12:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:26:17.033 12:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@130 -- # return 0 00:26:17.033 12:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:26:17.033 12:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:26:17.033 12:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:26:17.033 12:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:26:17.033 12:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:26:17.033 12:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:26:17.033 12:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:26:17.033 12:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:26:17.033 12:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:26:17.033 12:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:26:17.033 12:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:26:17.033 12:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:26:17.033 12:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:26:17.033 12:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:26:17.033 12:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:26:17.033 12:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:26:17.033 12:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:26:17.033 12:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@41 -- # _dev=0 00:26:17.033 12:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@41 -- # dev_map=() 00:26:17.033 
12:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@284 -- # iptr 00:26:17.033 12:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@542 -- # iptables-save 00:26:17.033 12:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:26:17.033 12:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@542 -- # iptables-restore 00:26:17.291 00:26:17.291 real 0m12.520s 00:26:17.291 user 0m9.681s 00:26:17.291 sys 0m5.600s 00:26:17.291 12:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:17.291 12:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:17.291 ************************************ 00:26:17.291 END TEST nvmf_nsid 00:26:17.291 ************************************ 00:26:17.291 12:09:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:26:17.291 00:26:17.291 real 12m1.518s 00:26:17.291 user 25m43.969s 00:26:17.291 sys 3m40.277s 00:26:17.291 12:09:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:17.291 12:09:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:17.291 ************************************ 00:26:17.291 END TEST nvmf_target_extra 00:26:17.291 ************************************ 00:26:17.291 12:09:51 nvmf_tcp -- nvmf/nvmf.sh@12 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:26:17.291 12:09:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:17.291 12:09:51 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:17.291 12:09:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:17.291 ************************************ 00:26:17.291 START TEST nvmf_host 00:26:17.291 ************************************ 00:26:17.291 12:09:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:26:17.291 * Looking for test storage... 00:26:17.291 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:26:17.291 12:09:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:17.291 12:09:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:26:17.291 12:09:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:17.549 12:09:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:17.549 12:09:51 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:17.549 12:09:51 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:17.549 12:09:51 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:17.549 12:09:51 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:17.549 12:09:51 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:17.549 12:09:51 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:17.549 12:09:51 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:17.549 12:09:51 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:17.549 12:09:51 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:17.549 12:09:51 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:17.549 12:09:51 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:17.549 12:09:51 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:26:17.549 12:09:51 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:26:17.549 12:09:51 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:17.549 12:09:51 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:17.549 12:09:51 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:26:17.549 12:09:51 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:26:17.549 12:09:51 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:17.549 12:09:51 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:26:17.549 12:09:51 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:17.549 12:09:51 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:26:17.549 12:09:51 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:26:17.549 12:09:51 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:17.549 12:09:51 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:26:17.549 12:09:51 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:17.549 12:09:51 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:17.549 12:09:51 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:17.549 12:09:51 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:26:17.549 12:09:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:17.549 12:09:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:17.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.549 --rc genhtml_branch_coverage=1 00:26:17.549 --rc genhtml_function_coverage=1 00:26:17.549 --rc genhtml_legend=1 00:26:17.549 --rc geninfo_all_blocks=1 00:26:17.549 --rc geninfo_unexecuted_blocks=1 00:26:17.549 00:26:17.549 ' 00:26:17.549 12:09:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:17.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.549 --rc genhtml_branch_coverage=1 00:26:17.549 --rc genhtml_function_coverage=1 00:26:17.549 --rc genhtml_legend=1 00:26:17.549 --rc 
geninfo_all_blocks=1 00:26:17.549 --rc geninfo_unexecuted_blocks=1 00:26:17.549 00:26:17.549 ' 00:26:17.549 12:09:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:17.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.549 --rc genhtml_branch_coverage=1 00:26:17.550 --rc genhtml_function_coverage=1 00:26:17.550 --rc genhtml_legend=1 00:26:17.550 --rc geninfo_all_blocks=1 00:26:17.550 --rc geninfo_unexecuted_blocks=1 00:26:17.550 00:26:17.550 ' 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:17.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.550 --rc genhtml_branch_coverage=1 00:26:17.550 --rc genhtml_function_coverage=1 00:26:17.550 --rc genhtml_legend=1 00:26:17.550 --rc geninfo_all_blocks=1 00:26:17.550 --rc geninfo_unexecuted_blocks=1 00:26:17.550 00:26:17.550 ' 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- 
nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@50 -- # : 0 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:26:17.550 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@54 -- # have_pci_nics=0 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.550 ************************************ 00:26:17.550 START TEST nvmf_aer 00:26:17.550 ************************************ 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:17.550 * Looking for test storage... 00:26:17.550 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:26:17.550 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:17.550 12:09:51 
nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:17.812 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:26:17.812 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:26:17.812 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:17.812 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:26:17.812 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:26:17.812 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:26:17.812 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:26:17.812 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:17.812 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:26:17.812 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:26:17.812 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:17.812 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:17.812 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:26:17.812 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:17.812 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:17.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.812 --rc genhtml_branch_coverage=1 00:26:17.812 --rc genhtml_function_coverage=1 00:26:17.812 --rc genhtml_legend=1 00:26:17.812 --rc geninfo_all_blocks=1 00:26:17.812 --rc geninfo_unexecuted_blocks=1 00:26:17.812 00:26:17.812 ' 00:26:17.812 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 
00:26:17.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.812 --rc genhtml_branch_coverage=1 00:26:17.812 --rc genhtml_function_coverage=1 00:26:17.812 --rc genhtml_legend=1 00:26:17.812 --rc geninfo_all_blocks=1 00:26:17.812 --rc geninfo_unexecuted_blocks=1 00:26:17.812 00:26:17.812 ' 00:26:17.812 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:17.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.812 --rc genhtml_branch_coverage=1 00:26:17.812 --rc genhtml_function_coverage=1 00:26:17.812 --rc genhtml_legend=1 00:26:17.812 --rc geninfo_all_blocks=1 00:26:17.812 --rc geninfo_unexecuted_blocks=1 00:26:17.812 00:26:17.812 ' 00:26:17.812 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:17.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.812 --rc genhtml_branch_coverage=1 00:26:17.812 --rc genhtml_function_coverage=1 00:26:17.812 --rc genhtml_legend=1 00:26:17.812 --rc geninfo_all_blocks=1 00:26:17.812 --rc geninfo_unexecuted_blocks=1 00:26:17.812 00:26:17.812 ' 00:26:17.812 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:17.812 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:26:17.812 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:17.812 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:17.812 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:17.812 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:17.812 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:17.812 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:26:17.812 12:09:51 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:17.812 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:26:17.812 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:17.812 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:17.812 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:17.812 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:26:17.812 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:26:17.812 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:17.812 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:17.812 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:26:17.812 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:17.812 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:17.812 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:17.812 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.813 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.813 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.813 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:26:17.813 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.813 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:26:17.813 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:26:17.813 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:17.813 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:26:17.813 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@50 -- # : 0 00:26:17.813 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:26:17.813 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:26:17.813 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:26:17.813 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:17.813 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:17.813 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:26:17.813 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:26:17.813 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@35 -- # '[' -n '' ']' 00:26:17.813 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:26:17.813 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@54 -- # have_pci_nics=0 00:26:17.813 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:26:17.813 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:26:17.813 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:17.813 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # prepare_net_devs 00:26:17.813 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # local -g is_hw=no 00:26:17.813 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # remove_target_ns 00:26:17.813 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:17.813 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:17.813 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:17.813 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:26:17.813 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:26:17.813 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # xtrace_disable 00:26:17.813 12:09:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@131 -- # pci_devs=() 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@131 -- # local -a pci_devs 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@132 -- # pci_net_devs=() 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@132 -- # local -a pci_net_devs 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@133 -- # pci_drivers=() 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@133 -- # local -A pci_drivers 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@135 -- # net_devs=() 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@135 -- # local -ga net_devs 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@136 -- # e810=() 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@136 -- # local -ga e810 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@137 -- # x722=() 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@137 -- # local -ga x722 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@138 -- # mlx=() 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@138 -- # local -ga mlx 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@156 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:24.380 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 
00:26:24.380 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:24.380 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:24.381 Found net devices under 0000:86:00.0: cvl_0_0 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:24.381 12:09:57 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:24.381 Found net devices under 0000:86:00.1: cvl_0_1 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # is_hw=yes 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@257 -- # create_target_ns 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:26:24.381 12:09:57 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@27 -- # local -gA dev_map 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@28 -- # local -g _dev 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@44 -- # ips=() 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:26:24.381 
12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@11 -- # local val=167772161 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:26:24.381 12:09:57 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:26:24.381 10.0.0.1 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@11 -- # local val=167772162 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:26:24.381 10.0.0.2 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:26:24.381 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 
4420 -j ACCEPT' 00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@38 -- # ping_ips 1 00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@107 -- # local dev=initiator0 00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:26:24.382 
12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:26:24.382 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:24.382 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.472 ms
00:26:24.382 
00:26:24.382 --- 10.0.0.1 ping statistics ---
00:26:24.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:24.382 rtt min/avg/max/mdev = 0.472/0.472/0.472/0.000 ms
00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD
00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD
00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD
00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # get_net_dev target0
00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@107 -- # local dev=target0
00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n target0 ]]
00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]]
00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@110 -- # echo cvl_0_1
00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # dev=cvl_0_1
00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias'
00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias
00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # ip=10.0.0.2
00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]]
00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@175 -- # echo 10.0.0.2
00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2
00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1
00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@90 -- # [[ -n '' ]]
00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2'
00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2
00:26:24.382 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:24.382 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms
00:26:24.382 
00:26:24.382 --- 10.0.0.2 ping statistics ---
00:26:24.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:24.382 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms
00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # (( pair++ ))
00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # (( pair < pairs ))
00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # return 0
00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # '[' '' == iso ']'
00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # nvmf_legacy_env
00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1
00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=
00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address
00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@187 -- # get_initiator_ip_address ''
00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@183 -- # get_ip_address initiator0
00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip
00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # [[ -n '' ]]
00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # get_net_dev initiator0
00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@107 -- # local dev=initiator0
00:26:24.382 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]]
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]]
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@110 -- # echo cvl_0_0
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # dev=cvl_0_0
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # ip=10.0.0.1
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]]
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@175 -- # echo 10.0.0.1
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@183 -- # get_ip_address initiator1
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # [[ -n '' ]]
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # get_net_dev initiator1
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@107 -- # local dev=initiator1
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]]
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n '' ]]
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # return 1
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # dev=
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@169 -- # return 0
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # get_net_dev target0
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@107 -- # local dev=target0
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n target0 ]]
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]]
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@110 -- # echo cvl_0_1
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # dev=cvl_0_1
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias'
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # ip=10.0.0.2
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]]
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@175 -- # echo 10.0.0.2
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # get_net_dev target1
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@107 -- # local dev=target1
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n target1 ]]
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n '' ]]
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # return 1
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # dev=
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@169 -- # return 0
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]]
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]]
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # '[' tcp == tcp ']'
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # modprobe nvme-tcp
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # nvmfpid=155937
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # waitforlisten 155937
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 155937 ']'
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:24.383 12:09:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:26:24.383 [2024-12-05 12:09:57.933104] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization...
00:26:24.383 [2024-12-05 12:09:57.933155] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:24.383 [2024-12-05 12:09:58.011356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:26:24.383 [2024-12-05 12:09:58.054761] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:24.383 [2024-12-05 12:09:58.054798] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:24.383 [2024-12-05 12:09:58.054805] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:24.383 [2024-12-05 12:09:58.054811] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:24.383 [2024-12-05 12:09:58.054816] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:24.383 [2024-12-05 12:09:58.056250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:24.383 [2024-12-05 12:09:58.056291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:26:24.383 [2024-12-05 12:09:58.056415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:24.383 [2024-12-05 12:09:58.056416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:26:24.383 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:24.383 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0
00:26:24.383 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt
00:26:24.383 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:26:24.384 [2024-12-05 12:09:58.198928] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:26:24.384 Malloc0
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:26:24.384 [2024-12-05 12:09:58.266153] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:26:24.384 [
00:26:24.384 {
00:26:24.384 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:26:24.384 "subtype": "Discovery",
00:26:24.384 "listen_addresses": [],
00:26:24.384 "allow_any_host": true,
00:26:24.384 "hosts": []
00:26:24.384 },
00:26:24.384 {
00:26:24.384 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:26:24.384 "subtype": "NVMe",
00:26:24.384 "listen_addresses": [
00:26:24.384 {
00:26:24.384 "trtype": "TCP",
00:26:24.384 "adrfam": "IPv4",
00:26:24.384 "traddr": "10.0.0.2",
00:26:24.384 "trsvcid": "4420"
00:26:24.384 }
00:26:24.384 ],
00:26:24.384 "allow_any_host": true,
00:26:24.384 "hosts": [],
00:26:24.384 "serial_number": "SPDK00000000000001",
00:26:24.384 "model_number": "SPDK bdev Controller",
00:26:24.384 "max_namespaces": 2,
00:26:24.384 "min_cntlid": 1,
00:26:24.384 "max_cntlid": 65519,
00:26:24.384 "namespaces": [
00:26:24.384 {
00:26:24.384 "nsid": 1,
00:26:24.384 "bdev_name": "Malloc0",
00:26:24.384 "name": "Malloc0",
00:26:24.384 "nguid": "F1094D39B797496E88D1BB9E137E3FDE",
00:26:24.384 "uuid": "f1094d39-b797-496e-88d1-bb9e137e3fde"
00:26:24.384 }
00:26:24.384 ]
00:26:24.384 }
00:26:24.384 ]
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=156158
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']'
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']'
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:26:24.384 Malloc1
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:24.384 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:26:24.384 Asynchronous Event Request test
00:26:24.384 Attaching to 10.0.0.2
00:26:24.384 Attached to 10.0.0.2
00:26:24.384 Registering asynchronous event callbacks...
00:26:24.384 Starting namespace attribute notice tests for all controllers...
00:26:24.384 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:26:24.384 aer_cb - Changed Namespace
00:26:24.384 Cleaning up...
00:26:24.384 [
00:26:24.384 {
00:26:24.384 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:26:24.384 "subtype": "Discovery",
00:26:24.384 "listen_addresses": [],
00:26:24.384 "allow_any_host": true,
00:26:24.384 "hosts": []
00:26:24.384 },
00:26:24.384 {
00:26:24.384 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:26:24.384 "subtype": "NVMe",
00:26:24.384 "listen_addresses": [
00:26:24.384 {
00:26:24.384 "trtype": "TCP",
00:26:24.384 "adrfam": "IPv4",
00:26:24.384 "traddr": "10.0.0.2",
00:26:24.384 "trsvcid": "4420"
00:26:24.384 }
00:26:24.384 ],
00:26:24.384 "allow_any_host": true,
00:26:24.384 "hosts": [],
00:26:24.384 "serial_number": "SPDK00000000000001",
00:26:24.384 "model_number": "SPDK bdev Controller",
00:26:24.384 "max_namespaces": 2,
00:26:24.384 "min_cntlid": 1,
00:26:24.384 "max_cntlid": 65519,
00:26:24.384 "namespaces": [
00:26:24.385 {
00:26:24.385 "nsid": 1,
00:26:24.385 "bdev_name": "Malloc0",
00:26:24.385 "name": "Malloc0",
00:26:24.385 "nguid": "F1094D39B797496E88D1BB9E137E3FDE",
00:26:24.385 "uuid": "f1094d39-b797-496e-88d1-bb9e137e3fde"
00:26:24.385 },
00:26:24.385 {
00:26:24.385 "nsid": 2,
00:26:24.385 "bdev_name": "Malloc1",
00:26:24.385 "name": "Malloc1",
00:26:24.385 "nguid": "A7245278EB004F31B489F7543CECD940",
00:26:24.385 "uuid": "a7245278-eb00-4f31-b489-f7543cecd940"
00:26:24.385 }
00:26:24.385 ]
00:26:24.385 }
00:26:24.385 ]
00:26:24.385 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:24.385 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 156158
00:26:24.385 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0
00:26:24.385 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:24.385 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:26:24.644 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:24.644 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1
00:26:24.644 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:24.644 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:26:24.644 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:24.644 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:24.644 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:24.644 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:26:24.644 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:24.644 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT
00:26:24.644 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini
00:26:24.644 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # nvmfcleanup
00:26:24.644 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@99 -- # sync
00:26:24.644 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:26:24.644 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@102 -- # set +e
00:26:24.644 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # for i in {1..20}
00:26:24.644 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
00:26:24.644 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:26:24.644 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics
00:26:24.644 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # set -e
00:26:24.644 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # return 0
00:26:24.644 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # '[' -n 155937 ']'
00:26:24.644 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@337 -- # killprocess 155937
00:26:24.644 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 155937 ']'
00:26:24.644 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 155937
00:26:24.644 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname
00:26:24.644 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:24.644 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 155937
00:26:24.644 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:26:24.644 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:26:24.644 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 155937'
killing process with pid 155937
00:26:24.644 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 155937
00:26:24.644 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 155937
00:26:24.904 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@339 -- # '[' '' == iso ']'
00:26:24.904 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # nvmf_fini
00:26:24.904 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@264 -- # local dev
00:26:24.904 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@267 -- # remove_target_ns
00:26:24.904 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns
00:26:24.904 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null'
00:26:24.904 12:09:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_target_ns
00:26:26.809 12:10:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@268 -- # delete_main_bridge
00:26:26.809 12:10:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]]
00:26:26.809 12:10:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@130 -- # return 0
00:26:26.809 12:10:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}"
00:26:26.809 12:10:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]]
00:26:26.809 12:10:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@275 -- # (( 4 == 3 ))
00:26:26.809 12:10:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0
00:26:26.809 12:10:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns=
00:26:26.809 12:10:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@222 -- # [[ -n '' ]]
00:26:26.809 12:10:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0'
00:26:26.809 12:10:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0
00:26:26.809 12:10:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}"
00:26:26.809 12:10:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]]
00:26:26.809 12:10:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@275 -- # (( 4 == 3 ))
00:26:26.809 12:10:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1
00:26:26.809 12:10:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns=
00:26:26.809 12:10:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@222 -- # [[ -n '' ]]
00:26:26.809 12:10:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1'
00:26:26.809 12:10:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1
00:26:26.809 12:10:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@283 -- # reset_setup_interfaces
00:26:26.809 12:10:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@41 -- # _dev=0
00:26:26.809 12:10:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@41 -- # dev_map=()
00:26:26.809 12:10:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@284 -- # iptr
00:26:26.809 12:10:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@542 -- # iptables-save
00:26:26.809 12:10:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF
00:26:26.809 12:10:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@542 -- # iptables-restore
00:26:26.809 
00:26:26.809 real 0m9.386s
00:26:26.809 user 0m5.186s
00:26:26.809 sys 0m4.915s
00:26:26.809 12:10:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:26.809 12:10:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:26:26.809 ************************************
00:26:26.809 END TEST nvmf_aer
00:26:26.809 ************************************
00:26:26.809 12:10:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp
00:26:26.809 12:10:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:26:26.809 12:10:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:26.809 12:10:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:27.068 ************************************
00:26:27.068 START TEST nvmf_async_init
00:26:27.068 ************************************
00:26:27.068 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp
00:26:27.068 * Looking for test storage...
00:26:27.068 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:26:27.068 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:26:27.068 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version
00:26:27.068 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:26:27.068 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:26:27.068 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:26:27.068 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l
00:26:27.068 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l
00:26:27.068 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-:
00:26:27.068 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1
00:26:27.068 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-:
00:26:27.068 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2
00:26:27.068 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<'
00:26:27.068 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2
00:26:27.068 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1
00:26:27.068 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:26:27.068 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in
00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1
00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 ))
00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1
00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1
00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1
00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1
00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2
00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2
00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2
00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2
00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0
00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:26:27.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:27.069 --rc genhtml_branch_coverage=1
00:26:27.069 --rc genhtml_function_coverage=1
00:26:27.069 --rc genhtml_legend=1
00:26:27.069 --rc geninfo_all_blocks=1
00:26:27.069 --rc geninfo_unexecuted_blocks=1
00:26:27.069 
00:26:27.069 '
00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:26:27.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:27.069 --rc genhtml_branch_coverage=1
00:26:27.069 --rc genhtml_function_coverage=1
00:26:27.069 --rc genhtml_legend=1
00:26:27.069 --rc geninfo_all_blocks=1
00:26:27.069 --rc geninfo_unexecuted_blocks=1
00:26:27.069 
00:26:27.069 '
00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:26:27.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:27.069 --rc genhtml_branch_coverage=1
00:26:27.069 --rc genhtml_function_coverage=1
00:26:27.069 --rc genhtml_legend=1
00:26:27.069 --rc geninfo_all_blocks=1
00:26:27.069 --rc geninfo_unexecuted_blocks=1
00:26:27.069 
00:26:27.069 '
00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:26:27.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:27.069 --rc genhtml_branch_coverage=1
00:26:27.069 --rc genhtml_function_coverage=1
00:26:27.069 --rc genhtml_legend=1
00:26:27.069 --rc geninfo_all_blocks=1
00:26:27.069 --rc geninfo_unexecuted_blocks=1
00:26:27.069 
00:26:27.069 '
00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s
00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS=
00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # nvme gen-hostnqn
00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect'
00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NET_TYPE=phy
00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob
00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@50 -- # : 0 00:26:27.069 12:10:01 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:26:27.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@54 -- # have_pci_nics=0 00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=723a2fe20db44918a1a383893541f649 00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:26:27.069 
12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # prepare_net_devs 00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # local -g is_hw=no 00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # remove_target_ns 00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # xtrace_disable 00:26:27.069 12:10:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@131 -- # pci_devs=() 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@131 -- # local -a pci_devs 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@132 -- # pci_net_devs=() 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@133 -- # pci_drivers=() 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@133 -- # local -A pci_drivers 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@135 -- # net_devs=() 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@135 -- # local -ga net_devs 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@136 -- # e810=() 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@136 -- # local -ga e810 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@137 -- # x722=() 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@137 -- # local -ga x722 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@138 -- # mlx=() 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@138 -- # local -ga mlx 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:33.634 
12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:33.634 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:33.634 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:33.634 
12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:33.634 Found net devices under 0000:86:00.0: cvl_0_0 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:33.634 Found net devices under 0000:86:00.1: cvl_0_1 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # is_hw=yes 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@257 -- # create_target_ns 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:26:33.634 12:10:06 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@27 -- # local -gA dev_map 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@28 -- # local -g _dev 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:26:33.634 
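The `set_up`/`set_ip` helpers traced above all follow the same pattern: an optional argument names an array variable (here `NVMF_TARGET_NS_CMD`, holding `ip netns exec nvmf_ns_spdk`), a bash nameref (`local -n ns=...`) resolves it, and the array's contents prefix the command so the same helper runs either on the host or inside the target namespace. A sketch of that pattern, with hypothetical names `run_in` and `PREFIX_CMD` for illustration (the real prefix needs root, so a harmless stand-in is used here):

```shell
#!/usr/bin/env bash
# Stand-in for (ip netns exec nvmf_ns_spdk); any command prefix works.
PREFIX_CMD=(env LANG=C)

# run_in VARNAME CMD...: run CMD, prefixed by the array named VARNAME
# when VARNAME is non-empty -- the nameref dispatch used by setup.sh.
run_in() {
    local in_ns=$1; shift
    if [[ -n $in_ns ]]; then
        local -n ns=$in_ns     # nameref: ns aliases the caller's array
        "${ns[@]}" "$@"
    else
        "$@"
    fi
}

run_in "" echo host            # no prefix: runs plain
run_in PREFIX_CMD echo nsped   # runs under the prefix array
```

Passing the variable's *name* rather than its value keeps the prefix an array (so multi-word commands stay correctly quoted), which is why the script uses `local -n` instead of plain string interpolation.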
12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@44 -- # ips=() 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:26:33.634 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:26:33.635 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:26:33.635 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:26:33.635 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:26:33.635 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:26:33.635 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:26:33.635 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:26:33.635 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:26:33.635 12:10:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- 
# local dev=cvl_0_0 ip=167772161 in_ns= 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@11 -- # local val=167772161 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:26:33.635 10.0.0.1 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@11 -- # local val=167772162 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:26:33.635 
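The `val_to_ip` calls above turn a 32-bit integer from the IP pool (167772161, i.e. 0x0A000001) into dotted-quad form via `printf '%u.%u.%u.%u'`. A sketch of that conversion, assuming the octets are extracted with shifts and masks (the exact arithmetic inside `nvmf/setup.sh` may differ, but the result matches the trace):

```shell
#!/usr/bin/env bash
# val_to_ip VAL: format a 32-bit integer as a dotted-quad IPv4 address,
# as in nvmf/setup.sh@13 (167772161 -> 10.0.0.1).
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >>  8) & 0xff )) \
        $((  val        & 0xff ))
}

val_to_ip 167772161   # prints 10.0.0.1
val_to_ip 167772162   # prints 10.0.0.2
```

Allocating the pool as plain integers starting at 0x0A000001 and incrementing (`$((++ip))`, two addresses per initiator/target pair) keeps the address math in arithmetic context and defers formatting to this one helper.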
12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:26:33.635 10.0.0.2 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@38 -- # ping_ips 1 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@165 -- # local 
dev=initiator0 in_ns= ip 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@107 -- # local dev=initiator0 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 
00:26:33.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:33.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.453 ms 00:26:33.635 00:26:33.635 --- 10.0.0.1 ping statistics --- 00:26:33.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.635 rtt min/avg/max/mdev = 0.453/0.453/0.453/0.000 ms 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # get_net_dev target0 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@107 -- # local dev=target0 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:26:33.635 12:10:07 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:26:33.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:33.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:26:33.635 00:26:33.635 --- 10.0.0.2 ping statistics --- 00:26:33.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.635 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # (( pair++ )) 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # return 0 00:26:33.635 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:26:33.636 12:10:07 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@107 -- # local dev=initiator0 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 
00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@107 -- # local dev=initiator1 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # return 1 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # dev= 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@169 -- # return 0 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # get_net_dev target0 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/setup.sh@107 -- # local dev=target0 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # get_net_dev target1 00:26:33.636 12:10:07 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@107 -- # local dev=target1 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # return 1 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # dev= 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@169 -- # return 0 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # nvmfpid=159708 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # waitforlisten 159708 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 159708 ']' 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:33.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:33.636 [2024-12-05 12:10:07.386269] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:26:33.636 [2024-12-05 12:10:07.386320] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:33.636 [2024-12-05 12:10:07.466955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:33.636 [2024-12-05 12:10:07.506496] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:33.636 [2024-12-05 12:10:07.506533] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:33.636 [2024-12-05 12:10:07.506542] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:33.636 [2024-12-05 12:10:07.506549] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:33.636 [2024-12-05 12:10:07.506554] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:33.636 [2024-12-05 12:10:07.507118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:33.636 [2024-12-05 12:10:07.652187] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:33.636 null0 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:33.636 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.637 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:26:33.637 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.637 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:33.637 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.637 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 723a2fe20db44918a1a383893541f649 00:26:33.637 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.637 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:33.637 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.637 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:33.637 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.637 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@10 -- # set +x 00:26:33.637 [2024-12-05 12:10:07.704472] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:33.637 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.637 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:26:33.637 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.637 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:33.896 nvme0n1 00:26:33.896 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.896 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:33.896 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.896 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:33.896 [ 00:26:33.896 { 00:26:33.896 "name": "nvme0n1", 00:26:33.896 "aliases": [ 00:26:33.896 "723a2fe2-0db4-4918-a1a3-83893541f649" 00:26:33.896 ], 00:26:33.896 "product_name": "NVMe disk", 00:26:33.896 "block_size": 512, 00:26:33.896 "num_blocks": 2097152, 00:26:33.896 "uuid": "723a2fe2-0db4-4918-a1a3-83893541f649", 00:26:33.896 "numa_id": 1, 00:26:33.896 "assigned_rate_limits": { 00:26:33.896 "rw_ios_per_sec": 0, 00:26:33.896 "rw_mbytes_per_sec": 0, 00:26:33.896 "r_mbytes_per_sec": 0, 00:26:33.896 "w_mbytes_per_sec": 0 00:26:33.896 }, 00:26:33.896 "claimed": false, 00:26:33.896 "zoned": false, 00:26:33.896 "supported_io_types": { 00:26:33.896 "read": true, 00:26:33.896 "write": true, 00:26:33.896 "unmap": false, 00:26:33.896 "flush": true, 00:26:33.896 "reset": true, 00:26:33.896 "nvme_admin": true, 00:26:33.896 "nvme_io": 
true, 00:26:33.896 "nvme_io_md": false, 00:26:33.896 "write_zeroes": true, 00:26:33.896 "zcopy": false, 00:26:33.896 "get_zone_info": false, 00:26:33.896 "zone_management": false, 00:26:33.896 "zone_append": false, 00:26:33.896 "compare": true, 00:26:33.896 "compare_and_write": true, 00:26:33.896 "abort": true, 00:26:33.896 "seek_hole": false, 00:26:33.896 "seek_data": false, 00:26:33.896 "copy": true, 00:26:33.896 "nvme_iov_md": false 00:26:33.896 }, 00:26:33.896 "memory_domains": [ 00:26:33.896 { 00:26:33.896 "dma_device_id": "system", 00:26:33.896 "dma_device_type": 1 00:26:33.896 } 00:26:33.896 ], 00:26:33.896 "driver_specific": { 00:26:33.896 "nvme": [ 00:26:33.896 { 00:26:33.896 "trid": { 00:26:33.896 "trtype": "TCP", 00:26:33.896 "adrfam": "IPv4", 00:26:33.896 "traddr": "10.0.0.2", 00:26:33.896 "trsvcid": "4420", 00:26:33.896 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:33.896 }, 00:26:33.896 "ctrlr_data": { 00:26:33.896 "cntlid": 1, 00:26:33.896 "vendor_id": "0x8086", 00:26:33.896 "model_number": "SPDK bdev Controller", 00:26:33.896 "serial_number": "00000000000000000000", 00:26:33.896 "firmware_revision": "25.01", 00:26:33.896 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:33.896 "oacs": { 00:26:33.896 "security": 0, 00:26:33.896 "format": 0, 00:26:33.896 "firmware": 0, 00:26:33.896 "ns_manage": 0 00:26:33.896 }, 00:26:33.896 "multi_ctrlr": true, 00:26:33.896 "ana_reporting": false 00:26:33.896 }, 00:26:33.896 "vs": { 00:26:33.896 "nvme_version": "1.3" 00:26:33.896 }, 00:26:33.896 "ns_data": { 00:26:33.896 "id": 1, 00:26:33.896 "can_share": true 00:26:33.896 } 00:26:33.896 } 00:26:33.896 ], 00:26:33.896 "mp_policy": "active_passive" 00:26:33.896 } 00:26:33.896 } 00:26:33.896 ] 00:26:33.896 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.896 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:26:33.896 12:10:07 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.896 12:10:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:33.896 [2024-12-05 12:10:07.960972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:33.896 [2024-12-05 12:10:07.961027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16c5f80 (9): Bad file descriptor 00:26:33.896 [2024-12-05 12:10:08.092451] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:26:34.156 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.156 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:34.156 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.156 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:34.156 [ 00:26:34.156 { 00:26:34.156 "name": "nvme0n1", 00:26:34.156 "aliases": [ 00:26:34.156 "723a2fe2-0db4-4918-a1a3-83893541f649" 00:26:34.156 ], 00:26:34.156 "product_name": "NVMe disk", 00:26:34.156 "block_size": 512, 00:26:34.156 "num_blocks": 2097152, 00:26:34.156 "uuid": "723a2fe2-0db4-4918-a1a3-83893541f649", 00:26:34.156 "numa_id": 1, 00:26:34.156 "assigned_rate_limits": { 00:26:34.156 "rw_ios_per_sec": 0, 00:26:34.156 "rw_mbytes_per_sec": 0, 00:26:34.156 "r_mbytes_per_sec": 0, 00:26:34.156 "w_mbytes_per_sec": 0 00:26:34.156 }, 00:26:34.156 "claimed": false, 00:26:34.156 "zoned": false, 00:26:34.156 "supported_io_types": { 00:26:34.156 "read": true, 00:26:34.156 "write": true, 00:26:34.156 "unmap": false, 00:26:34.156 "flush": true, 00:26:34.156 "reset": true, 00:26:34.156 "nvme_admin": true, 00:26:34.156 "nvme_io": true, 00:26:34.156 "nvme_io_md": false, 00:26:34.156 
"write_zeroes": true, 00:26:34.156 "zcopy": false, 00:26:34.156 "get_zone_info": false, 00:26:34.156 "zone_management": false, 00:26:34.156 "zone_append": false, 00:26:34.156 "compare": true, 00:26:34.156 "compare_and_write": true, 00:26:34.156 "abort": true, 00:26:34.156 "seek_hole": false, 00:26:34.156 "seek_data": false, 00:26:34.156 "copy": true, 00:26:34.156 "nvme_iov_md": false 00:26:34.156 }, 00:26:34.156 "memory_domains": [ 00:26:34.156 { 00:26:34.157 "dma_device_id": "system", 00:26:34.157 "dma_device_type": 1 00:26:34.157 } 00:26:34.157 ], 00:26:34.157 "driver_specific": { 00:26:34.157 "nvme": [ 00:26:34.157 { 00:26:34.157 "trid": { 00:26:34.157 "trtype": "TCP", 00:26:34.157 "adrfam": "IPv4", 00:26:34.157 "traddr": "10.0.0.2", 00:26:34.157 "trsvcid": "4420", 00:26:34.157 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:34.157 }, 00:26:34.157 "ctrlr_data": { 00:26:34.157 "cntlid": 2, 00:26:34.157 "vendor_id": "0x8086", 00:26:34.157 "model_number": "SPDK bdev Controller", 00:26:34.157 "serial_number": "00000000000000000000", 00:26:34.157 "firmware_revision": "25.01", 00:26:34.157 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:34.157 "oacs": { 00:26:34.157 "security": 0, 00:26:34.157 "format": 0, 00:26:34.157 "firmware": 0, 00:26:34.157 "ns_manage": 0 00:26:34.157 }, 00:26:34.157 "multi_ctrlr": true, 00:26:34.157 "ana_reporting": false 00:26:34.157 }, 00:26:34.157 "vs": { 00:26:34.157 "nvme_version": "1.3" 00:26:34.157 }, 00:26:34.157 "ns_data": { 00:26:34.157 "id": 1, 00:26:34.157 "can_share": true 00:26:34.157 } 00:26:34.157 } 00:26:34.157 ], 00:26:34.157 "mp_policy": "active_passive" 00:26:34.157 } 00:26:34.157 } 00:26:34.157 ] 00:26:34.157 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.157 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.157 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:34.157 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:34.157 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.157 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:26:34.157 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.KZbOGZM1Qt 00:26:34.157 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:26:34.157 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.KZbOGZM1Qt 00:26:34.157 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.KZbOGZM1Qt 00:26:34.157 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.157 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:34.157 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.157 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:26:34.157 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.157 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:34.157 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.157 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:26:34.157 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.157 12:10:08 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:34.157 [2024-12-05 12:10:08.169591] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:34.157 [2024-12-05 12:10:08.169713] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:34.157 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.157 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:26:34.157 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.157 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:34.157 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.157 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:26:34.157 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.157 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:34.157 [2024-12-05 12:10:08.189655] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:34.157 nvme0n1 00:26:34.157 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.157 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:34.157 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.157 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 
00:26:34.157 [ 00:26:34.157 { 00:26:34.157 "name": "nvme0n1", 00:26:34.157 "aliases": [ 00:26:34.157 "723a2fe2-0db4-4918-a1a3-83893541f649" 00:26:34.157 ], 00:26:34.157 "product_name": "NVMe disk", 00:26:34.157 "block_size": 512, 00:26:34.157 "num_blocks": 2097152, 00:26:34.157 "uuid": "723a2fe2-0db4-4918-a1a3-83893541f649", 00:26:34.157 "numa_id": 1, 00:26:34.157 "assigned_rate_limits": { 00:26:34.157 "rw_ios_per_sec": 0, 00:26:34.157 "rw_mbytes_per_sec": 0, 00:26:34.157 "r_mbytes_per_sec": 0, 00:26:34.157 "w_mbytes_per_sec": 0 00:26:34.157 }, 00:26:34.157 "claimed": false, 00:26:34.157 "zoned": false, 00:26:34.157 "supported_io_types": { 00:26:34.157 "read": true, 00:26:34.157 "write": true, 00:26:34.157 "unmap": false, 00:26:34.157 "flush": true, 00:26:34.157 "reset": true, 00:26:34.157 "nvme_admin": true, 00:26:34.157 "nvme_io": true, 00:26:34.157 "nvme_io_md": false, 00:26:34.157 "write_zeroes": true, 00:26:34.157 "zcopy": false, 00:26:34.157 "get_zone_info": false, 00:26:34.157 "zone_management": false, 00:26:34.157 "zone_append": false, 00:26:34.157 "compare": true, 00:26:34.157 "compare_and_write": true, 00:26:34.157 "abort": true, 00:26:34.157 "seek_hole": false, 00:26:34.157 "seek_data": false, 00:26:34.157 "copy": true, 00:26:34.157 "nvme_iov_md": false 00:26:34.157 }, 00:26:34.157 "memory_domains": [ 00:26:34.157 { 00:26:34.157 "dma_device_id": "system", 00:26:34.157 "dma_device_type": 1 00:26:34.157 } 00:26:34.157 ], 00:26:34.157 "driver_specific": { 00:26:34.157 "nvme": [ 00:26:34.157 { 00:26:34.157 "trid": { 00:26:34.157 "trtype": "TCP", 00:26:34.157 "adrfam": "IPv4", 00:26:34.157 "traddr": "10.0.0.2", 00:26:34.157 "trsvcid": "4421", 00:26:34.157 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:34.157 }, 00:26:34.157 "ctrlr_data": { 00:26:34.157 "cntlid": 3, 00:26:34.157 "vendor_id": "0x8086", 00:26:34.157 "model_number": "SPDK bdev Controller", 00:26:34.157 "serial_number": "00000000000000000000", 00:26:34.157 "firmware_revision": "25.01", 00:26:34.157 
"subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:34.157 "oacs": { 00:26:34.157 "security": 0, 00:26:34.157 "format": 0, 00:26:34.158 "firmware": 0, 00:26:34.158 "ns_manage": 0 00:26:34.158 }, 00:26:34.158 "multi_ctrlr": true, 00:26:34.158 "ana_reporting": false 00:26:34.158 }, 00:26:34.158 "vs": { 00:26:34.158 "nvme_version": "1.3" 00:26:34.158 }, 00:26:34.158 "ns_data": { 00:26:34.158 "id": 1, 00:26:34.158 "can_share": true 00:26:34.158 } 00:26:34.158 } 00:26:34.158 ], 00:26:34.158 "mp_policy": "active_passive" 00:26:34.158 } 00:26:34.158 } 00:26:34.158 ] 00:26:34.158 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.158 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.158 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.158 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:34.158 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.158 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.KZbOGZM1Qt 00:26:34.158 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:26:34.158 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:26:34.158 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # nvmfcleanup 00:26:34.158 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@99 -- # sync 00:26:34.158 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:26:34.158 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # set +e 00:26:34.158 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # for i in {1..20} 00:26:34.158 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:26:34.158 rmmod nvme_tcp 00:26:34.158 rmmod nvme_fabrics 00:26:34.158 rmmod nvme_keyring 00:26:34.417 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:26:34.417 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # set -e 00:26:34.417 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # return 0 00:26:34.417 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # '[' -n 159708 ']' 00:26:34.417 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@337 -- # killprocess 159708 00:26:34.417 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 159708 ']' 00:26:34.417 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 159708 00:26:34.417 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:26:34.417 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:34.417 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 159708 00:26:34.417 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:34.417 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:34.417 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 159708' 00:26:34.417 killing process with pid 159708 00:26:34.417 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 159708 00:26:34.417 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 159708 00:26:34.417 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:26:34.417 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@342 -- # nvmf_fini 00:26:34.417 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@264 -- # local dev 00:26:34.417 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@267 -- # remove_target_ns 00:26:34.417 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:34.417 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:34.417 12:10:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:36.962 12:10:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@268 -- # delete_main_bridge 00:26:36.962 12:10:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:26:36.962 12:10:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@130 -- # return 0 00:26:36.962 12:10:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:26:36.962 12:10:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:26:36.962 12:10:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:26:36.962 12:10:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:26:36.962 12:10:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:26:36.962 12:10:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:26:36.962 12:10:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:26:36.962 12:10:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:26:36.962 12:10:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:26:36.962 12:10:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@271 -- # [[ -e 
/sys/class/net/cvl_0_1/address ]] 00:26:36.962 12:10:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:26:36.962 12:10:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:26:36.962 12:10:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:26:36.962 12:10:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:26:36.962 12:10:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:26:36.962 12:10:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:26:36.962 12:10:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:26:36.962 12:10:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@41 -- # _dev=0 00:26:36.962 12:10:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@41 -- # dev_map=() 00:26:36.962 12:10:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@284 -- # iptr 00:26:36.962 12:10:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@542 -- # iptables-save 00:26:36.962 12:10:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:26:36.962 12:10:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@542 -- # iptables-restore 00:26:36.962 00:26:36.962 real 0m9.606s 00:26:36.962 user 0m3.190s 00:26:36.962 sys 0m4.860s 00:26:36.962 12:10:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:36.962 12:10:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:36.962 ************************************ 00:26:36.962 END TEST nvmf_async_init 00:26:36.962 ************************************ 00:26:36.962 12:10:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@20 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:36.962 12:10:10 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:36.962 12:10:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:36.962 12:10:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.962 ************************************ 00:26:36.962 START TEST nvmf_identify 00:26:36.962 ************************************ 00:26:36.962 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:36.962 * Looking for test storage... 00:26:36.962 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:36.962 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:36.962 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:26:36.962 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:26:36.963 12:10:10 
nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:26:36.963 12:10:10 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:36.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:36.963 --rc genhtml_branch_coverage=1 00:26:36.963 --rc genhtml_function_coverage=1 00:26:36.963 --rc genhtml_legend=1 00:26:36.963 --rc geninfo_all_blocks=1 00:26:36.963 --rc geninfo_unexecuted_blocks=1 00:26:36.963 00:26:36.963 ' 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:36.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:36.963 --rc genhtml_branch_coverage=1 00:26:36.963 --rc genhtml_function_coverage=1 00:26:36.963 --rc genhtml_legend=1 00:26:36.963 --rc geninfo_all_blocks=1 00:26:36.963 --rc geninfo_unexecuted_blocks=1 00:26:36.963 00:26:36.963 ' 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:36.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:36.963 --rc genhtml_branch_coverage=1 00:26:36.963 --rc genhtml_function_coverage=1 00:26:36.963 --rc genhtml_legend=1 00:26:36.963 --rc geninfo_all_blocks=1 00:26:36.963 --rc geninfo_unexecuted_blocks=1 00:26:36.963 00:26:36.963 ' 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:36.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:36.963 --rc genhtml_branch_coverage=1 00:26:36.963 --rc genhtml_function_coverage=1 00:26:36.963 --rc genhtml_legend=1 00:26:36.963 --rc geninfo_all_blocks=1 00:26:36.963 --rc geninfo_unexecuted_blocks=1 00:26:36.963 00:26:36.963 ' 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:36.963 12:10:10 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.963 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:26:36.964 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.964 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:26:36.964 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:26:36.964 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:36.964 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:26:36.964 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@50 -- # : 0 00:26:36.964 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify 
-- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:26:36.964 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:26:36.964 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:26:36.964 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:36.964 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:36.964 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:26:36.964 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:26:36.964 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:26:36.964 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:26:36.964 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@54 -- # have_pci_nics=0 00:26:36.964 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:36.964 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:36.964 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:26:36.964 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:26:36.964 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:36.964 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # prepare_net_devs 00:26:36.964 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # local -g is_hw=no 00:26:36.964 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # remove_target_ns 00:26:36.964 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:36.964 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:36.964 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:36.964 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:26:36.964 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:26:36.964 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # xtrace_disable 00:26:36.964 12:10:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:43.537 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:43.537 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@131 -- # pci_devs=() 00:26:43.537 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@131 -- # local -a pci_devs 00:26:43.537 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@132 -- # pci_net_devs=() 00:26:43.537 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:26:43.537 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@133 -- # pci_drivers=() 00:26:43.537 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@133 -- # local -A pci_drivers 00:26:43.537 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@135 -- # net_devs=() 00:26:43.537 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@135 -- # local -ga net_devs 00:26:43.537 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@136 -- # e810=() 00:26:43.537 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@136 -- # local -ga e810 00:26:43.537 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@137 -- # x722=() 00:26:43.537 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@137 -- # local -ga x722 00:26:43.537 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@138 -- # mlx=() 00:26:43.537 12:10:16 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@138 -- # local -ga mlx 00:26:43.537 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:43.537 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:43.537 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:43.537 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 
00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:43.538 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:43.538 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 
-- # [[ e810 == e810 ]] 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:43.538 Found net devices under 0000:86:00.0: cvl_0_0 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:43.538 
12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:43.538 Found net devices under 0000:86:00.1: cvl_0_1 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # is_hw=yes 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@257 -- # create_target_ns 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@215 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@27 -- # local -gA dev_map 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@28 -- # local -g _dev 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@44 -- # ips=() 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:26:43.538 12:10:16 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@11 -- # local val=167772161 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:26:43.538 12:10:16 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:26:43.538 10.0.0.1 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@11 -- # local val=167772162 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:26:43.538 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:26:43.539 10.0.0.2 00:26:43.539 12:10:16 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 
00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@38 -- # ping_ips 1 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@107 -- # local dev=initiator0 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:26:43.539 12:10:16 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:26:43.539 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:43.539 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.345 ms 00:26:43.539 00:26:43.539 --- 10.0.0.1 ping statistics --- 00:26:43.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:43.539 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # get_net_dev target0 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@107 -- # local dev=target0 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:26:43.539 12:10:16 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:26:43.539 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:43.539 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:26:43.539 00:26:43.539 --- 10.0.0.2 ping statistics --- 00:26:43.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:43.539 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # (( pair++ )) 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # return 0 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:26:43.539 12:10:16 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@107 -- # local dev=initiator0 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:26:43.539 12:10:16 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@107 -- # local dev=initiator1 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # return 1 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # dev= 00:26:43.539 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@169 -- # return 0 00:26:43.540 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:26:43.540 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:26:43.540 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:26:43.540 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:43.540 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:43.540 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:43.540 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:43.540 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # get_net_dev target0 00:26:43.540 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@107 -- # local dev=target0 00:26:43.540 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:26:43.540 12:10:16 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:26:43.540 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:26:43.540 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:26:43.540 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:26:43.540 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:26:43.540 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:26:43.540 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:26:43.540 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:26:43.540 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:43.540 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:26:43.540 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:26:43.540 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:26:43.540 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:26:43.540 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:43.540 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:43.540 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # get_net_dev target1 00:26:43.540 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@107 -- # local dev=target1 00:26:43.540 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:26:43.540 12:10:16 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:26:43.540 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # return 1 00:26:43.540 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # dev= 00:26:43.540 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@169 -- # return 0 00:26:43.540 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:26:43.540 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:43.540 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:26:43.540 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:26:43.540 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:43.540 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:26:43.540 12:10:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:26:43.540 12:10:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:26:43.540 12:10:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:43.540 12:10:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:43.540 12:10:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=163495 00:26:43.540 12:10:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:43.540 12:10:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 163495 00:26:43.540 12:10:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:43.540 12:10:17 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 163495 ']' 00:26:43.540 12:10:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:43.540 12:10:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:43.540 12:10:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:43.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:43.540 12:10:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:43.540 12:10:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:43.540 [2024-12-05 12:10:17.065835] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:26:43.540 [2024-12-05 12:10:17.065879] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:43.540 [2024-12-05 12:10:17.143764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:43.540 [2024-12-05 12:10:17.184651] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:43.540 [2024-12-05 12:10:17.184701] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:43.540 [2024-12-05 12:10:17.184707] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:43.540 [2024-12-05 12:10:17.184713] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:43.540 [2024-12-05 12:10:17.184718] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:43.540 [2024-12-05 12:10:17.186394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:43.540 [2024-12-05 12:10:17.186501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:43.540 [2024-12-05 12:10:17.186609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:43.540 [2024-12-05 12:10:17.186610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:43.799 12:10:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:43.799 12:10:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:26:43.799 12:10:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:43.799 12:10:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.799 12:10:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:43.799 [2024-12-05 12:10:17.893954] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:43.799 12:10:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.799 12:10:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:26:43.799 12:10:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:43.799 12:10:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:43.799 12:10:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:43.799 12:10:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.799 12:10:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:43.799 Malloc0 00:26:43.799 12:10:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.799 12:10:17 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:43.799 12:10:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.799 12:10:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:43.799 12:10:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.799 12:10:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:26:43.799 12:10:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.799 12:10:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:43.799 12:10:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.799 12:10:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:43.799 12:10:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.799 12:10:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:43.799 [2024-12-05 12:10:17.988853] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:43.799 12:10:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.799 12:10:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:43.799 12:10:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.799 12:10:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:44.060 12:10:17 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.060 12:10:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:26:44.060 12:10:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.060 12:10:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:44.060 [ 00:26:44.060 { 00:26:44.060 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:44.060 "subtype": "Discovery", 00:26:44.060 "listen_addresses": [ 00:26:44.060 { 00:26:44.060 "trtype": "TCP", 00:26:44.060 "adrfam": "IPv4", 00:26:44.060 "traddr": "10.0.0.2", 00:26:44.060 "trsvcid": "4420" 00:26:44.060 } 00:26:44.060 ], 00:26:44.060 "allow_any_host": true, 00:26:44.060 "hosts": [] 00:26:44.060 }, 00:26:44.060 { 00:26:44.060 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:44.060 "subtype": "NVMe", 00:26:44.060 "listen_addresses": [ 00:26:44.060 { 00:26:44.060 "trtype": "TCP", 00:26:44.060 "adrfam": "IPv4", 00:26:44.060 "traddr": "10.0.0.2", 00:26:44.060 "trsvcid": "4420" 00:26:44.060 } 00:26:44.060 ], 00:26:44.060 "allow_any_host": true, 00:26:44.060 "hosts": [], 00:26:44.060 "serial_number": "SPDK00000000000001", 00:26:44.060 "model_number": "SPDK bdev Controller", 00:26:44.060 "max_namespaces": 32, 00:26:44.060 "min_cntlid": 1, 00:26:44.060 "max_cntlid": 65519, 00:26:44.060 "namespaces": [ 00:26:44.060 { 00:26:44.060 "nsid": 1, 00:26:44.060 "bdev_name": "Malloc0", 00:26:44.060 "name": "Malloc0", 00:26:44.060 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:26:44.060 "eui64": "ABCDEF0123456789", 00:26:44.060 "uuid": "a41b1ad0-7585-4d6a-905e-2b32aad7e124" 00:26:44.060 } 00:26:44.060 ] 00:26:44.060 } 00:26:44.060 ] 00:26:44.060 12:10:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.060 12:10:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:26:44.060 [2024-12-05 12:10:18.041584] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:26:44.060 [2024-12-05 12:10:18.041628] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid163535 ] 00:26:44.060 [2024-12-05 12:10:18.081891] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:26:44.060 [2024-12-05 12:10:18.081937] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:44.060 [2024-12-05 12:10:18.081942] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:44.060 [2024-12-05 12:10:18.081958] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:44.060 [2024-12-05 12:10:18.081966] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:26:44.061 [2024-12-05 12:10:18.085680] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:26:44.061 [2024-12-05 12:10:18.085713] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1c8f690 0 00:26:44.061 [2024-12-05 12:10:18.093379] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:44.061 [2024-12-05 12:10:18.093392] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:44.061 [2024-12-05 12:10:18.093397] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:44.061 [2024-12-05 12:10:18.093400] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:44.061 [2024-12-05 12:10:18.093434] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.061 [2024-12-05 12:10:18.093440] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.061 [2024-12-05 12:10:18.093443] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c8f690) 00:26:44.061 [2024-12-05 12:10:18.093456] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:44.061 [2024-12-05 12:10:18.093474] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1100, cid 0, qid 0 00:26:44.061 [2024-12-05 12:10:18.100377] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.061 [2024-12-05 12:10:18.100388] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.061 [2024-12-05 12:10:18.100391] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.061 [2024-12-05 12:10:18.100395] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1100) on tqpair=0x1c8f690 00:26:44.061 [2024-12-05 12:10:18.100405] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:44.061 [2024-12-05 12:10:18.100413] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:26:44.061 [2024-12-05 12:10:18.100418] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:26:44.061 [2024-12-05 12:10:18.100433] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.061 [2024-12-05 12:10:18.100437] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.061 [2024-12-05 12:10:18.100440] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c8f690) 
00:26:44.061 [2024-12-05 12:10:18.100447] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.061 [2024-12-05 12:10:18.100465] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1100, cid 0, qid 0 00:26:44.061 [2024-12-05 12:10:18.100624] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.061 [2024-12-05 12:10:18.100630] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.061 [2024-12-05 12:10:18.100633] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.061 [2024-12-05 12:10:18.100637] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1100) on tqpair=0x1c8f690 00:26:44.061 [2024-12-05 12:10:18.100644] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:26:44.061 [2024-12-05 12:10:18.100651] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:26:44.061 [2024-12-05 12:10:18.100658] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.061 [2024-12-05 12:10:18.100661] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.061 [2024-12-05 12:10:18.100665] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c8f690) 00:26:44.061 [2024-12-05 12:10:18.100671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.061 [2024-12-05 12:10:18.100681] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1100, cid 0, qid 0 00:26:44.061 [2024-12-05 12:10:18.100741] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.061 [2024-12-05 12:10:18.100747] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:26:44.061 [2024-12-05 12:10:18.100750] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.061 [2024-12-05 12:10:18.100754] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1100) on tqpair=0x1c8f690 00:26:44.061 [2024-12-05 12:10:18.100758] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:26:44.061 [2024-12-05 12:10:18.100765] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:26:44.061 [2024-12-05 12:10:18.100771] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.061 [2024-12-05 12:10:18.100774] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.061 [2024-12-05 12:10:18.100777] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c8f690) 00:26:44.061 [2024-12-05 12:10:18.100783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.061 [2024-12-05 12:10:18.100793] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1100, cid 0, qid 0 00:26:44.061 [2024-12-05 12:10:18.100860] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.061 [2024-12-05 12:10:18.100866] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.061 [2024-12-05 12:10:18.100869] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.061 [2024-12-05 12:10:18.100872] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1100) on tqpair=0x1c8f690 00:26:44.061 [2024-12-05 12:10:18.100877] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:44.061 [2024-12-05 12:10:18.100885] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.061 [2024-12-05 12:10:18.100889] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.061 [2024-12-05 12:10:18.100892] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c8f690) 00:26:44.061 [2024-12-05 12:10:18.100898] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.061 [2024-12-05 12:10:18.100907] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1100, cid 0, qid 0 00:26:44.061 [2024-12-05 12:10:18.100965] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.061 [2024-12-05 12:10:18.100973] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.061 [2024-12-05 12:10:18.100976] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.061 [2024-12-05 12:10:18.100979] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1100) on tqpair=0x1c8f690 00:26:44.061 [2024-12-05 12:10:18.100984] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:26:44.061 [2024-12-05 12:10:18.100988] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:26:44.061 [2024-12-05 12:10:18.100996] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:44.061 [2024-12-05 12:10:18.101117] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:26:44.061 [2024-12-05 12:10:18.101122] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:26:44.061 [2024-12-05 12:10:18.101129] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.061 [2024-12-05 12:10:18.101132] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.061 [2024-12-05 12:10:18.101135] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c8f690) 00:26:44.061 [2024-12-05 12:10:18.101141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.061 [2024-12-05 12:10:18.101151] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1100, cid 0, qid 0 00:26:44.061 [2024-12-05 12:10:18.101233] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.061 [2024-12-05 12:10:18.101238] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.061 [2024-12-05 12:10:18.101241] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.061 [2024-12-05 12:10:18.101244] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1100) on tqpair=0x1c8f690 00:26:44.061 [2024-12-05 12:10:18.101248] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:44.061 [2024-12-05 12:10:18.101257] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.061 [2024-12-05 12:10:18.101260] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.061 [2024-12-05 12:10:18.101263] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c8f690) 00:26:44.061 [2024-12-05 12:10:18.101269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.061 [2024-12-05 12:10:18.101279] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1100, cid 0, qid 0 00:26:44.061 [2024-12-05 
12:10:18.101350] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.061 [2024-12-05 12:10:18.101355] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.061 [2024-12-05 12:10:18.101358] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.061 [2024-12-05 12:10:18.101361] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1100) on tqpair=0x1c8f690 00:26:44.061 [2024-12-05 12:10:18.101365] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:44.061 [2024-12-05 12:10:18.101375] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:26:44.061 [2024-12-05 12:10:18.101382] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:26:44.061 [2024-12-05 12:10:18.101389] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:26:44.061 [2024-12-05 12:10:18.101399] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.061 [2024-12-05 12:10:18.101403] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c8f690) 00:26:44.061 [2024-12-05 12:10:18.101409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.061 [2024-12-05 12:10:18.101418] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1100, cid 0, qid 0 00:26:44.061 [2024-12-05 12:10:18.101523] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:44.061 [2024-12-05 12:10:18.101528] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:26:44.061 [2024-12-05 12:10:18.101532] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:44.061 [2024-12-05 12:10:18.101536] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c8f690): datao=0, datal=4096, cccid=0 00:26:44.061 [2024-12-05 12:10:18.101540] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cf1100) on tqpair(0x1c8f690): expected_datao=0, payload_size=4096 00:26:44.061 [2024-12-05 12:10:18.101544] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.061 [2024-12-05 12:10:18.101555] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:44.062 [2024-12-05 12:10:18.101560] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:44.062 [2024-12-05 12:10:18.142509] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.062 [2024-12-05 12:10:18.142520] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.062 [2024-12-05 12:10:18.142523] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.062 [2024-12-05 12:10:18.142527] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1100) on tqpair=0x1c8f690 00:26:44.062 [2024-12-05 12:10:18.142535] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:26:44.062 [2024-12-05 12:10:18.142540] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:26:44.062 [2024-12-05 12:10:18.142544] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:26:44.062 [2024-12-05 12:10:18.142549] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:26:44.062 [2024-12-05 12:10:18.142553] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:26:44.062 [2024-12-05 12:10:18.142558] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:26:44.062 [2024-12-05 12:10:18.142566] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:26:44.062 [2024-12-05 12:10:18.142573] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.062 [2024-12-05 12:10:18.142577] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.062 [2024-12-05 12:10:18.142580] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c8f690) 00:26:44.062 [2024-12-05 12:10:18.142587] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:44.062 [2024-12-05 12:10:18.142598] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1100, cid 0, qid 0 00:26:44.062 [2024-12-05 12:10:18.142657] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.062 [2024-12-05 12:10:18.142663] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.062 [2024-12-05 12:10:18.142666] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.062 [2024-12-05 12:10:18.142669] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1100) on tqpair=0x1c8f690 00:26:44.062 [2024-12-05 12:10:18.142676] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.062 [2024-12-05 12:10:18.142680] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.062 [2024-12-05 12:10:18.142686] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c8f690) 00:26:44.062 [2024-12-05 12:10:18.142691] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.062 [2024-12-05 12:10:18.142697] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.062 [2024-12-05 12:10:18.142700] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.062 [2024-12-05 12:10:18.142703] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1c8f690) 00:26:44.062 [2024-12-05 12:10:18.142708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.062 [2024-12-05 12:10:18.142713] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.062 [2024-12-05 12:10:18.142717] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.062 [2024-12-05 12:10:18.142720] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1c8f690) 00:26:44.062 [2024-12-05 12:10:18.142725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.062 [2024-12-05 12:10:18.142730] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.062 [2024-12-05 12:10:18.142733] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.062 [2024-12-05 12:10:18.142736] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f690) 00:26:44.062 [2024-12-05 12:10:18.142741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.062 [2024-12-05 12:10:18.142745] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:26:44.062 [2024-12-05 12:10:18.142757] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:44.062 [2024-12-05 12:10:18.142763] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.062 [2024-12-05 12:10:18.142766] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c8f690) 00:26:44.062 [2024-12-05 12:10:18.142771] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.062 [2024-12-05 12:10:18.142783] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1100, cid 0, qid 0 00:26:44.062 [2024-12-05 12:10:18.142788] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1280, cid 1, qid 0 00:26:44.062 [2024-12-05 12:10:18.142792] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1400, cid 2, qid 0 00:26:44.062 [2024-12-05 12:10:18.142796] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1580, cid 3, qid 0 00:26:44.062 [2024-12-05 12:10:18.142800] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1700, cid 4, qid 0 00:26:44.062 [2024-12-05 12:10:18.142894] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.062 [2024-12-05 12:10:18.142899] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.062 [2024-12-05 12:10:18.142902] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.062 [2024-12-05 12:10:18.142906] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1700) on tqpair=0x1c8f690 00:26:44.062 [2024-12-05 12:10:18.142911] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:26:44.062 [2024-12-05 12:10:18.142916] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:26:44.062 [2024-12-05 12:10:18.142926] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.062 [2024-12-05 12:10:18.142929] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c8f690) 00:26:44.062 [2024-12-05 12:10:18.142935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.062 [2024-12-05 12:10:18.142947] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1700, cid 4, qid 0 00:26:44.062 [2024-12-05 12:10:18.143016] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:44.062 [2024-12-05 12:10:18.143022] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:44.062 [2024-12-05 12:10:18.143025] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:44.062 [2024-12-05 12:10:18.143028] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c8f690): datao=0, datal=4096, cccid=4 00:26:44.062 [2024-12-05 12:10:18.143032] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cf1700) on tqpair(0x1c8f690): expected_datao=0, payload_size=4096 00:26:44.062 [2024-12-05 12:10:18.143036] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.062 [2024-12-05 12:10:18.143058] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:44.062 [2024-12-05 12:10:18.143062] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:44.062 [2024-12-05 12:10:18.143094] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.062 [2024-12-05 12:10:18.143100] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.062 [2024-12-05 12:10:18.143103] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.062 [2024-12-05 12:10:18.143106] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1cf1700) on tqpair=0x1c8f690 00:26:44.062 [2024-12-05 12:10:18.143118] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:26:44.062 [2024-12-05 12:10:18.143140] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.062 [2024-12-05 12:10:18.143144] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c8f690) 00:26:44.062 [2024-12-05 12:10:18.143149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.062 [2024-12-05 12:10:18.143155] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.062 [2024-12-05 12:10:18.143158] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.062 [2024-12-05 12:10:18.143162] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c8f690) 00:26:44.062 [2024-12-05 12:10:18.143167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.062 [2024-12-05 12:10:18.143180] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1700, cid 4, qid 0 00:26:44.062 [2024-12-05 12:10:18.143185] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1880, cid 5, qid 0 00:26:44.062 [2024-12-05 12:10:18.143281] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:44.062 [2024-12-05 12:10:18.143287] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:44.062 [2024-12-05 12:10:18.143290] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:44.062 [2024-12-05 12:10:18.143293] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c8f690): datao=0, datal=1024, cccid=4 00:26:44.062 [2024-12-05 12:10:18.143297] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cf1700) on tqpair(0x1c8f690): expected_datao=0, payload_size=1024 00:26:44.062 [2024-12-05 12:10:18.143301] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.062 [2024-12-05 12:10:18.143306] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:44.062 [2024-12-05 12:10:18.143309] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:44.062 [2024-12-05 12:10:18.143314] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.062 [2024-12-05 12:10:18.143319] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.062 [2024-12-05 12:10:18.143322] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.062 [2024-12-05 12:10:18.143325] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1880) on tqpair=0x1c8f690 00:26:44.062 [2024-12-05 12:10:18.184440] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.062 [2024-12-05 12:10:18.184451] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.062 [2024-12-05 12:10:18.184454] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.062 [2024-12-05 12:10:18.184458] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1700) on tqpair=0x1c8f690 00:26:44.062 [2024-12-05 12:10:18.184470] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.062 [2024-12-05 12:10:18.184473] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c8f690) 00:26:44.062 [2024-12-05 12:10:18.184480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.062 [2024-12-05 12:10:18.184495] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1700, cid 4, qid 0 00:26:44.062 [2024-12-05 12:10:18.184568] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:44.062 [2024-12-05 12:10:18.184574] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:44.063 [2024-12-05 12:10:18.184577] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:44.063 [2024-12-05 12:10:18.184581] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c8f690): datao=0, datal=3072, cccid=4 00:26:44.063 [2024-12-05 12:10:18.184585] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cf1700) on tqpair(0x1c8f690): expected_datao=0, payload_size=3072 00:26:44.063 [2024-12-05 12:10:18.184589] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.063 [2024-12-05 12:10:18.184602] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:44.063 [2024-12-05 12:10:18.184607] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:44.063 [2024-12-05 12:10:18.227381] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.063 [2024-12-05 12:10:18.227393] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.063 [2024-12-05 12:10:18.227396] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.063 [2024-12-05 12:10:18.227400] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1700) on tqpair=0x1c8f690 00:26:44.063 [2024-12-05 12:10:18.227408] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.063 [2024-12-05 12:10:18.227412] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c8f690) 00:26:44.063 [2024-12-05 12:10:18.227419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.063 [2024-12-05 12:10:18.227434] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1700, cid 4, qid 0 00:26:44.063 [2024-12-05 
12:10:18.227559] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:44.063 [2024-12-05 12:10:18.227566] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:44.063 [2024-12-05 12:10:18.227569] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:44.063 [2024-12-05 12:10:18.227572] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c8f690): datao=0, datal=8, cccid=4 00:26:44.063 [2024-12-05 12:10:18.227576] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cf1700) on tqpair(0x1c8f690): expected_datao=0, payload_size=8 00:26:44.063 [2024-12-05 12:10:18.227580] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.063 [2024-12-05 12:10:18.227586] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:44.063 [2024-12-05 12:10:18.227590] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:44.329 [2024-12-05 12:10:18.270377] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.329 [2024-12-05 12:10:18.270386] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.329 [2024-12-05 12:10:18.270389] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.329 [2024-12-05 12:10:18.270392] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1700) on tqpair=0x1c8f690 00:26:44.329 ===================================================== 00:26:44.329 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:44.329 ===================================================== 00:26:44.329 Controller Capabilities/Features 00:26:44.329 ================================ 00:26:44.329 Vendor ID: 0000 00:26:44.329 Subsystem Vendor ID: 0000 00:26:44.329 Serial Number: .................... 00:26:44.329 Model Number: ........................................ 
00:26:44.329 Firmware Version: 25.01 00:26:44.329 Recommended Arb Burst: 0 00:26:44.329 IEEE OUI Identifier: 00 00 00 00:26:44.329 Multi-path I/O 00:26:44.329 May have multiple subsystem ports: No 00:26:44.329 May have multiple controllers: No 00:26:44.329 Associated with SR-IOV VF: No 00:26:44.329 Max Data Transfer Size: 131072 00:26:44.329 Max Number of Namespaces: 0 00:26:44.329 Max Number of I/O Queues: 1024 00:26:44.329 NVMe Specification Version (VS): 1.3 00:26:44.329 NVMe Specification Version (Identify): 1.3 00:26:44.329 Maximum Queue Entries: 128 00:26:44.329 Contiguous Queues Required: Yes 00:26:44.329 Arbitration Mechanisms Supported 00:26:44.329 Weighted Round Robin: Not Supported 00:26:44.329 Vendor Specific: Not Supported 00:26:44.329 Reset Timeout: 15000 ms 00:26:44.329 Doorbell Stride: 4 bytes 00:26:44.329 NVM Subsystem Reset: Not Supported 00:26:44.329 Command Sets Supported 00:26:44.329 NVM Command Set: Supported 00:26:44.329 Boot Partition: Not Supported 00:26:44.329 Memory Page Size Minimum: 4096 bytes 00:26:44.329 Memory Page Size Maximum: 4096 bytes 00:26:44.329 Persistent Memory Region: Not Supported 00:26:44.329 Optional Asynchronous Events Supported 00:26:44.329 Namespace Attribute Notices: Not Supported 00:26:44.329 Firmware Activation Notices: Not Supported 00:26:44.329 ANA Change Notices: Not Supported 00:26:44.329 PLE Aggregate Log Change Notices: Not Supported 00:26:44.329 LBA Status Info Alert Notices: Not Supported 00:26:44.330 EGE Aggregate Log Change Notices: Not Supported 00:26:44.330 Normal NVM Subsystem Shutdown event: Not Supported 00:26:44.330 Zone Descriptor Change Notices: Not Supported 00:26:44.330 Discovery Log Change Notices: Supported 00:26:44.330 Controller Attributes 00:26:44.330 128-bit Host Identifier: Not Supported 00:26:44.330 Non-Operational Permissive Mode: Not Supported 00:26:44.330 NVM Sets: Not Supported 00:26:44.330 Read Recovery Levels: Not Supported 00:26:44.330 Endurance Groups: Not Supported 00:26:44.330 
Predictable Latency Mode: Not Supported 00:26:44.330 Traffic Based Keep ALive: Not Supported 00:26:44.330 Namespace Granularity: Not Supported 00:26:44.330 SQ Associations: Not Supported 00:26:44.330 UUID List: Not Supported 00:26:44.330 Multi-Domain Subsystem: Not Supported 00:26:44.330 Fixed Capacity Management: Not Supported 00:26:44.330 Variable Capacity Management: Not Supported 00:26:44.330 Delete Endurance Group: Not Supported 00:26:44.330 Delete NVM Set: Not Supported 00:26:44.330 Extended LBA Formats Supported: Not Supported 00:26:44.330 Flexible Data Placement Supported: Not Supported 00:26:44.330 00:26:44.330 Controller Memory Buffer Support 00:26:44.330 ================================ 00:26:44.330 Supported: No 00:26:44.330 00:26:44.330 Persistent Memory Region Support 00:26:44.330 ================================ 00:26:44.330 Supported: No 00:26:44.330 00:26:44.330 Admin Command Set Attributes 00:26:44.330 ============================ 00:26:44.330 Security Send/Receive: Not Supported 00:26:44.330 Format NVM: Not Supported 00:26:44.330 Firmware Activate/Download: Not Supported 00:26:44.330 Namespace Management: Not Supported 00:26:44.330 Device Self-Test: Not Supported 00:26:44.330 Directives: Not Supported 00:26:44.330 NVMe-MI: Not Supported 00:26:44.330 Virtualization Management: Not Supported 00:26:44.330 Doorbell Buffer Config: Not Supported 00:26:44.330 Get LBA Status Capability: Not Supported 00:26:44.330 Command & Feature Lockdown Capability: Not Supported 00:26:44.330 Abort Command Limit: 1 00:26:44.330 Async Event Request Limit: 4 00:26:44.330 Number of Firmware Slots: N/A 00:26:44.330 Firmware Slot 1 Read-Only: N/A 00:26:44.330 Firmware Activation Without Reset: N/A 00:26:44.330 Multiple Update Detection Support: N/A 00:26:44.330 Firmware Update Granularity: No Information Provided 00:26:44.330 Per-Namespace SMART Log: No 00:26:44.330 Asymmetric Namespace Access Log Page: Not Supported 00:26:44.330 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:26:44.330 Command Effects Log Page: Not Supported 00:26:44.330 Get Log Page Extended Data: Supported 00:26:44.330 Telemetry Log Pages: Not Supported 00:26:44.330 Persistent Event Log Pages: Not Supported 00:26:44.330 Supported Log Pages Log Page: May Support 00:26:44.330 Commands Supported & Effects Log Page: Not Supported 00:26:44.330 Feature Identifiers & Effects Log Page:May Support 00:26:44.330 NVMe-MI Commands & Effects Log Page: May Support 00:26:44.330 Data Area 4 for Telemetry Log: Not Supported 00:26:44.330 Error Log Page Entries Supported: 128 00:26:44.330 Keep Alive: Not Supported 00:26:44.330 00:26:44.330 NVM Command Set Attributes 00:26:44.330 ========================== 00:26:44.330 Submission Queue Entry Size 00:26:44.330 Max: 1 00:26:44.330 Min: 1 00:26:44.330 Completion Queue Entry Size 00:26:44.330 Max: 1 00:26:44.330 Min: 1 00:26:44.330 Number of Namespaces: 0 00:26:44.330 Compare Command: Not Supported 00:26:44.330 Write Uncorrectable Command: Not Supported 00:26:44.330 Dataset Management Command: Not Supported 00:26:44.330 Write Zeroes Command: Not Supported 00:26:44.330 Set Features Save Field: Not Supported 00:26:44.330 Reservations: Not Supported 00:26:44.330 Timestamp: Not Supported 00:26:44.330 Copy: Not Supported 00:26:44.330 Volatile Write Cache: Not Present 00:26:44.330 Atomic Write Unit (Normal): 1 00:26:44.330 Atomic Write Unit (PFail): 1 00:26:44.330 Atomic Compare & Write Unit: 1 00:26:44.330 Fused Compare & Write: Supported 00:26:44.330 Scatter-Gather List 00:26:44.330 SGL Command Set: Supported 00:26:44.330 SGL Keyed: Supported 00:26:44.330 SGL Bit Bucket Descriptor: Not Supported 00:26:44.330 SGL Metadata Pointer: Not Supported 00:26:44.330 Oversized SGL: Not Supported 00:26:44.330 SGL Metadata Address: Not Supported 00:26:44.330 SGL Offset: Supported 00:26:44.330 Transport SGL Data Block: Not Supported 00:26:44.330 Replay Protected Memory Block: Not Supported 00:26:44.330 00:26:44.330 
Firmware Slot Information 00:26:44.330 ========================= 00:26:44.330 Active slot: 0 00:26:44.330 00:26:44.330 00:26:44.330 Error Log 00:26:44.330 ========= 00:26:44.330 00:26:44.330 Active Namespaces 00:26:44.330 ================= 00:26:44.330 Discovery Log Page 00:26:44.330 ================== 00:26:44.330 Generation Counter: 2 00:26:44.330 Number of Records: 2 00:26:44.330 Record Format: 0 00:26:44.330 00:26:44.330 Discovery Log Entry 0 00:26:44.330 ---------------------- 00:26:44.330 Transport Type: 3 (TCP) 00:26:44.330 Address Family: 1 (IPv4) 00:26:44.330 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:44.330 Entry Flags: 00:26:44.330 Duplicate Returned Information: 1 00:26:44.330 Explicit Persistent Connection Support for Discovery: 1 00:26:44.330 Transport Requirements: 00:26:44.330 Secure Channel: Not Required 00:26:44.330 Port ID: 0 (0x0000) 00:26:44.330 Controller ID: 65535 (0xffff) 00:26:44.330 Admin Max SQ Size: 128 00:26:44.330 Transport Service Identifier: 4420 00:26:44.330 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:44.330 Transport Address: 10.0.0.2 00:26:44.330 Discovery Log Entry 1 00:26:44.330 ---------------------- 00:26:44.330 Transport Type: 3 (TCP) 00:26:44.330 Address Family: 1 (IPv4) 00:26:44.330 Subsystem Type: 2 (NVM Subsystem) 00:26:44.330 Entry Flags: 00:26:44.330 Duplicate Returned Information: 0 00:26:44.330 Explicit Persistent Connection Support for Discovery: 0 00:26:44.330 Transport Requirements: 00:26:44.330 Secure Channel: Not Required 00:26:44.330 Port ID: 0 (0x0000) 00:26:44.330 Controller ID: 65535 (0xffff) 00:26:44.330 Admin Max SQ Size: 128 00:26:44.330 Transport Service Identifier: 4420 00:26:44.330 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:26:44.330 Transport Address: 10.0.0.2 [2024-12-05 12:10:18.270477] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:26:44.330 [2024-12-05 
12:10:18.270489] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1100) on tqpair=0x1c8f690 00:26:44.330 [2024-12-05 12:10:18.270496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.330 [2024-12-05 12:10:18.270501] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1280) on tqpair=0x1c8f690 00:26:44.330 [2024-12-05 12:10:18.270505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.330 [2024-12-05 12:10:18.270509] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1400) on tqpair=0x1c8f690 00:26:44.330 [2024-12-05 12:10:18.270513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.330 [2024-12-05 12:10:18.270518] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1580) on tqpair=0x1c8f690 00:26:44.330 [2024-12-05 12:10:18.270522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.330 [2024-12-05 12:10:18.270530] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.330 [2024-12-05 12:10:18.270533] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.330 [2024-12-05 12:10:18.270536] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f690) 00:26:44.330 [2024-12-05 12:10:18.270543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.330 [2024-12-05 12:10:18.270557] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1580, cid 3, qid 0 00:26:44.330 [2024-12-05 12:10:18.270623] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.330 [2024-12-05 
12:10:18.270629] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.330 [2024-12-05 12:10:18.270632] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.330 [2024-12-05 12:10:18.270636] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1580) on tqpair=0x1c8f690 00:26:44.330 [2024-12-05 12:10:18.270641] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.330 [2024-12-05 12:10:18.270645] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.330 [2024-12-05 12:10:18.270648] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f690) 00:26:44.330 [2024-12-05 12:10:18.270654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.330 [2024-12-05 12:10:18.270667] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1580, cid 3, qid 0 00:26:44.330 [2024-12-05 12:10:18.270755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.330 [2024-12-05 12:10:18.270761] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.330 [2024-12-05 12:10:18.270764] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.331 [2024-12-05 12:10:18.270767] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1580) on tqpair=0x1c8f690 00:26:44.331 [2024-12-05 12:10:18.270772] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:26:44.331 [2024-12-05 12:10:18.270776] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:26:44.331 [2024-12-05 12:10:18.270784] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.331 [2024-12-05 12:10:18.270788] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.331 
[2024-12-05 12:10:18.270791] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f690) 00:26:44.331 [2024-12-05 12:10:18.270797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.331 [2024-12-05 12:10:18.270806] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1580, cid 3, qid 0 00:26:44.331 [2024-12-05 12:10:18.270869] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.331 [2024-12-05 12:10:18.270876] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.331 [2024-12-05 12:10:18.270879] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.331 [2024-12-05 12:10:18.270883] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1580) on tqpair=0x1c8f690 00:26:44.331 [2024-12-05 12:10:18.270891] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.331 [2024-12-05 12:10:18.270895] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.331 [2024-12-05 12:10:18.270898] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f690) 00:26:44.331 [2024-12-05 12:10:18.270904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.331 [2024-12-05 12:10:18.270914] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1580, cid 3, qid 0 00:26:44.331 [2024-12-05 12:10:18.270975] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.331 [2024-12-05 12:10:18.270981] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.331 [2024-12-05 12:10:18.270984] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.331 [2024-12-05 12:10:18.270988] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1580) on 
tqpair=0x1c8f690 00:26:44.331 [2024-12-05 12:10:18.270996] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.331 [2024-12-05 12:10:18.271000] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.331 [2024-12-05 12:10:18.271003] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f690) 00:26:44.331 [2024-12-05 12:10:18.271009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.331 [2024-12-05 12:10:18.271018] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1580, cid 3, qid 0 00:26:44.331 [2024-12-05 12:10:18.271076] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.331 [2024-12-05 12:10:18.271081] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.331 [2024-12-05 12:10:18.271085] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.331 [2024-12-05 12:10:18.271088] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1580) on tqpair=0x1c8f690 00:26:44.331 [2024-12-05 12:10:18.271096] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.331 [2024-12-05 12:10:18.271100] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.331 [2024-12-05 12:10:18.271103] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f690) 00:26:44.331 [2024-12-05 12:10:18.271108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.331 [2024-12-05 12:10:18.271117] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1580, cid 3, qid 0 00:26:44.331 [2024-12-05 12:10:18.271178] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.331 [2024-12-05 12:10:18.271184] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:26:44.331 [2024-12-05 12:10:18.271187] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.331 [2024-12-05 12:10:18.271190] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1580) on tqpair=0x1c8f690 00:26:44.331 [2024-12-05 12:10:18.271198] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.331 [2024-12-05 12:10:18.271202] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.331 [2024-12-05 12:10:18.271205] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f690) 00:26:44.331 [2024-12-05 12:10:18.271210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.331 [2024-12-05 12:10:18.271220] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1580, cid 3, qid 0 00:26:44.331 [2024-12-05 12:10:18.271280] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.331 [2024-12-05 12:10:18.271285] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.331 [2024-12-05 12:10:18.271290] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.331 [2024-12-05 12:10:18.271294] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1580) on tqpair=0x1c8f690 00:26:44.331 [2024-12-05 12:10:18.271303] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.331 [2024-12-05 12:10:18.271306] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.331 [2024-12-05 12:10:18.271309] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f690) 00:26:44.331 [2024-12-05 12:10:18.271315] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.331 [2024-12-05 12:10:18.271324] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1cf1580, cid 3, qid 0 00:26:44.331 [2024-12-05 12:10:18.271391] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.331 [2024-12-05 12:10:18.271397] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.331 [2024-12-05 12:10:18.271400] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.331 [2024-12-05 12:10:18.271403] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1580) on tqpair=0x1c8f690 00:26:44.331 [2024-12-05 12:10:18.271411] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.331 [2024-12-05 12:10:18.271415] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.331 [2024-12-05 12:10:18.271418] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f690) 00:26:44.331 [2024-12-05 12:10:18.271424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.331 [2024-12-05 12:10:18.271434] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1580, cid 3, qid 0 00:26:44.331 [2024-12-05 12:10:18.271498] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.331 [2024-12-05 12:10:18.271504] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.331 [2024-12-05 12:10:18.271507] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.331 [2024-12-05 12:10:18.271510] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1580) on tqpair=0x1c8f690 00:26:44.331 [2024-12-05 12:10:18.271518] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.331 [2024-12-05 12:10:18.271522] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.331 [2024-12-05 12:10:18.271525] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f690) 00:26:44.331 [2024-12-05 12:10:18.271531] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.331 [2024-12-05 12:10:18.271540] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1580, cid 3, qid 0 00:26:44.331 [2024-12-05 12:10:18.271597] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.331 [2024-12-05 12:10:18.271603] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.331 [2024-12-05 12:10:18.271606] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.331 [2024-12-05 12:10:18.271609] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1580) on tqpair=0x1c8f690 00:26:44.331 [2024-12-05 12:10:18.271617] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.331 [2024-12-05 12:10:18.271621] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.331 [2024-12-05 12:10:18.271624] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f690) 00:26:44.331 [2024-12-05 12:10:18.271630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.331 [2024-12-05 12:10:18.271639] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1580, cid 3, qid 0 00:26:44.331 [2024-12-05 12:10:18.271698] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.331 [2024-12-05 12:10:18.271704] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.331 [2024-12-05 12:10:18.271707] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.331 [2024-12-05 12:10:18.271712] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1580) on tqpair=0x1c8f690 00:26:44.331 [2024-12-05 12:10:18.271721] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.331 [2024-12-05 12:10:18.271724] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.331 [2024-12-05 12:10:18.271727] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f690) 00:26:44.331 [2024-12-05 12:10:18.271733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.331 [2024-12-05 12:10:18.271743] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1580, cid 3, qid 0 00:26:44.331 [2024-12-05 12:10:18.271805] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.331 [2024-12-05 12:10:18.271811] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.331 [2024-12-05 12:10:18.271814] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.331 [2024-12-05 12:10:18.271817] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1580) on tqpair=0x1c8f690 00:26:44.331 [2024-12-05 12:10:18.271825] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.331 [2024-12-05 12:10:18.271829] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.331 [2024-12-05 12:10:18.271832] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f690) 00:26:44.331 [2024-12-05 12:10:18.271838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.331 [2024-12-05 12:10:18.271847] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1580, cid 3, qid 0 00:26:44.331 [2024-12-05 12:10:18.271904] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.331 [2024-12-05 12:10:18.271910] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.331 [2024-12-05 12:10:18.271913] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.331 [2024-12-05 12:10:18.271916] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1580) on tqpair=0x1c8f690 00:26:44.331 [2024-12-05 12:10:18.271924] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.331 [2024-12-05 12:10:18.271928] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.331 [2024-12-05 12:10:18.271931] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f690) 00:26:44.331 [2024-12-05 12:10:18.271936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.331 [2024-12-05 12:10:18.271947] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1580, cid 3, qid 0 00:26:44.332 [2024-12-05 12:10:18.272004] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.332 [2024-12-05 12:10:18.272010] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.332 [2024-12-05 12:10:18.272013] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.332 [2024-12-05 12:10:18.272016] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1580) on tqpair=0x1c8f690 00:26:44.332 [2024-12-05 12:10:18.272024] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.332 [2024-12-05 12:10:18.272028] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.332 [2024-12-05 12:10:18.272031] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f690) 00:26:44.332 [2024-12-05 12:10:18.272037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.332 [2024-12-05 12:10:18.272046] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1580, cid 3, qid 0 00:26:44.332 [2024-12-05 12:10:18.272103] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.332 [2024-12-05 
12:10:18.272109] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.332 [2024-12-05 12:10:18.272112] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.332 [2024-12-05 12:10:18.272115] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1580) on tqpair=0x1c8f690 00:26:44.332 [2024-12-05 12:10:18.272125] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.332 [2024-12-05 12:10:18.272129] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.332 [2024-12-05 12:10:18.272132] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f690) 00:26:44.332 [2024-12-05 12:10:18.272137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.332 [2024-12-05 12:10:18.272147] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1580, cid 3, qid 0 00:26:44.332 [2024-12-05 12:10:18.272216] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.332 [2024-12-05 12:10:18.272221] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.332 [2024-12-05 12:10:18.272225] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.332 [2024-12-05 12:10:18.272228] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1580) on tqpair=0x1c8f690 00:26:44.332 [2024-12-05 12:10:18.272236] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.332 [2024-12-05 12:10:18.272240] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.332 [2024-12-05 12:10:18.272243] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f690) 00:26:44.332 [2024-12-05 12:10:18.272249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.332 [2024-12-05 
12:10:18.272259] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1580, cid 3, qid 0 00:26:44.332 [2024-12-05 12:10:18.272319] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.332 [2024-12-05 12:10:18.272325] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.332 [2024-12-05 12:10:18.272328] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.332 [2024-12-05 12:10:18.272331] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1580) on tqpair=0x1c8f690 00:26:44.332 [2024-12-05 12:10:18.272339] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.332 [2024-12-05 12:10:18.272343] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.332 [2024-12-05 12:10:18.272346] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f690) 00:26:44.332 [2024-12-05 12:10:18.272352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.332 [2024-12-05 12:10:18.272361] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1580, cid 3, qid 0 00:26:44.332 [2024-12-05 12:10:18.272429] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.332 [2024-12-05 12:10:18.272435] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.332 [2024-12-05 12:10:18.272438] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.332 [2024-12-05 12:10:18.272442] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1580) on tqpair=0x1c8f690 00:26:44.332 [2024-12-05 12:10:18.272450] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.332 [2024-12-05 12:10:18.272454] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.332 [2024-12-05 12:10:18.272457] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x1c8f690) 00:26:44.332 [2024-12-05 12:10:18.272463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.332 [2024-12-05 12:10:18.272473] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1580, cid 3, qid 0 00:26:44.332 [2024-12-05 12:10:18.272535] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.332 [2024-12-05 12:10:18.272541] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.332 [2024-12-05 12:10:18.272544] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.332 [2024-12-05 12:10:18.272547] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1580) on tqpair=0x1c8f690 00:26:44.332 [2024-12-05 12:10:18.272555] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.332 [2024-12-05 12:10:18.272560] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.332 [2024-12-05 12:10:18.272564] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f690) 00:26:44.332 [2024-12-05 12:10:18.272569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.332 [2024-12-05 12:10:18.272578] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1580, cid 3, qid 0 00:26:44.332 [2024-12-05 12:10:18.272633] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.332 [2024-12-05 12:10:18.272639] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.332 [2024-12-05 12:10:18.272642] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.332 [2024-12-05 12:10:18.272646] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1580) on tqpair=0x1c8f690 00:26:44.332 [2024-12-05 12:10:18.272653] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.332 [2024-12-05 12:10:18.272657] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.332 [2024-12-05 12:10:18.272660] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f690) 00:26:44.332 [2024-12-05 12:10:18.272666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.332 [2024-12-05 12:10:18.272675] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1580, cid 3, qid 0 00:26:44.332 [2024-12-05 12:10:18.272741] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.332 [2024-12-05 12:10:18.272746] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.332 [2024-12-05 12:10:18.272749] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.332 [2024-12-05 12:10:18.272753] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1580) on tqpair=0x1c8f690 00:26:44.332 [2024-12-05 12:10:18.272761] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.332 [2024-12-05 12:10:18.272765] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.332 [2024-12-05 12:10:18.272768] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f690) 00:26:44.332 [2024-12-05 12:10:18.272773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.332 [2024-12-05 12:10:18.272783] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1580, cid 3, qid 0 00:26:44.332 [2024-12-05 12:10:18.272840] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.332 [2024-12-05 12:10:18.272846] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.332 [2024-12-05 12:10:18.272849] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.332 [2024-12-05 12:10:18.272853] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1580) on tqpair=0x1c8f690 00:26:44.332 [2024-12-05 12:10:18.272861] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.332 [2024-12-05 12:10:18.272864] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.332 [2024-12-05 12:10:18.272867] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f690) 00:26:44.332 [2024-12-05 12:10:18.272873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.332 [2024-12-05 12:10:18.272883] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1580, cid 3, qid 0 00:26:44.332 [2024-12-05 12:10:18.272949] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.332 [2024-12-05 12:10:18.272955] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.332 [2024-12-05 12:10:18.272958] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.332 [2024-12-05 12:10:18.272961] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1580) on tqpair=0x1c8f690 00:26:44.332 [2024-12-05 12:10:18.272970] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.332 [2024-12-05 12:10:18.272974] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.332 [2024-12-05 12:10:18.272979] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f690) 00:26:44.332 [2024-12-05 12:10:18.272985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.332 [2024-12-05 12:10:18.272996] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1580, cid 3, qid 0 00:26:44.332 [2024-12-05 
12:10:18.273059] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.332 [2024-12-05 12:10:18.273065] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.332 [2024-12-05 12:10:18.273068] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.332 [2024-12-05 12:10:18.273072] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1580) on tqpair=0x1c8f690 00:26:44.332 [2024-12-05 12:10:18.273080] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.332 [2024-12-05 12:10:18.273083] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.332 [2024-12-05 12:10:18.273086] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f690) 00:26:44.332 [2024-12-05 12:10:18.273092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.332 [2024-12-05 12:10:18.273101] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1580, cid 3, qid 0 00:26:44.332 [2024-12-05 12:10:18.273164] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.332 [2024-12-05 12:10:18.273169] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.332 [2024-12-05 12:10:18.273172] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.332 [2024-12-05 12:10:18.273176] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1580) on tqpair=0x1c8f690 00:26:44.332 [2024-12-05 12:10:18.273184] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.332 [2024-12-05 12:10:18.273187] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.333 [2024-12-05 12:10:18.273191] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f690) 00:26:44.333 [2024-12-05 12:10:18.273196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.333 [2024-12-05 12:10:18.273206] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1580, cid 3, qid 0 00:26:44.333 [2024-12-05 12:10:18.273272] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.333 [2024-12-05 12:10:18.273278] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.333 [2024-12-05 12:10:18.273281] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.333 [2024-12-05 12:10:18.273284] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1580) on tqpair=0x1c8f690 00:26:44.333 [2024-12-05 12:10:18.273293] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.333 [2024-12-05 12:10:18.273297] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.333 [2024-12-05 12:10:18.273300] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f690) 00:26:44.333 [2024-12-05 12:10:18.273306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.333 [2024-12-05 12:10:18.273316] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1580, cid 3, qid 0 00:26:44.333 [2024-12-05 12:10:18.273375] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.333 [2024-12-05 12:10:18.273381] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.333 [2024-12-05 12:10:18.273385] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.333 [2024-12-05 12:10:18.273388] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1580) on tqpair=0x1c8f690 00:26:44.333 [2024-12-05 12:10:18.273396] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.333 [2024-12-05 12:10:18.273399] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:26:44.333 [2024-12-05 12:10:18.273402] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f690) 00:26:44.333 [2024-12-05 12:10:18.273410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.333 [2024-12-05 12:10:18.273420] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1580, cid 3, qid 0 00:26:44.333 [2024-12-05 12:10:18.273483] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.333 [2024-12-05 12:10:18.273488] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.333 [2024-12-05 12:10:18.273491] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.333 [2024-12-05 12:10:18.273494] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1580) on tqpair=0x1c8f690 00:26:44.333 [2024-12-05 12:10:18.273503] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.333 [2024-12-05 12:10:18.273506] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.333 [2024-12-05 12:10:18.273510] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f690) 00:26:44.333 [2024-12-05 12:10:18.273515] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.333 [2024-12-05 12:10:18.273526] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1580, cid 3, qid 0 00:26:44.333 [2024-12-05 12:10:18.273589] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.333 [2024-12-05 12:10:18.273595] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.333 [2024-12-05 12:10:18.273598] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.333 [2024-12-05 12:10:18.273601] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1580) 
on tqpair=0x1c8f690 00:26:44.333 [2024-12-05 12:10:18.273609] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.333 [2024-12-05 12:10:18.273613] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.333 [2024-12-05 12:10:18.273616] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f690) 00:26:44.333 [2024-12-05 12:10:18.273621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.333 [2024-12-05 12:10:18.273631] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1580, cid 3, qid 0 00:26:44.333 [2024-12-05 12:10:18.273691] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.333 [2024-12-05 12:10:18.273696] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.333 [2024-12-05 12:10:18.273699] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.333 [2024-12-05 12:10:18.273703] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1580) on tqpair=0x1c8f690 00:26:44.333 [2024-12-05 12:10:18.273711] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.333 [2024-12-05 12:10:18.273714] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.333 [2024-12-05 12:10:18.273718] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f690) 00:26:44.333 [2024-12-05 12:10:18.273723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.333 [2024-12-05 12:10:18.273733] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1580, cid 3, qid 0 00:26:44.333 [2024-12-05 12:10:18.273795] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.333 [2024-12-05 12:10:18.273801] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:26:44.333 [2024-12-05 12:10:18.273804] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.333 [2024-12-05 12:10:18.273807] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1580) on tqpair=0x1c8f690 00:26:44.333 [2024-12-05 12:10:18.273815] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.333 [2024-12-05 12:10:18.273819] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.333 [2024-12-05 12:10:18.273822] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f690) 00:26:44.333 [2024-12-05 12:10:18.273828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.333 [2024-12-05 12:10:18.273839] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1580, cid 3, qid 0 00:26:44.333 [2024-12-05 12:10:18.273897] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.333 [2024-12-05 12:10:18.273902] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.333 [2024-12-05 12:10:18.273905] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.333 [2024-12-05 12:10:18.273909] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1580) on tqpair=0x1c8f690 00:26:44.333 [2024-12-05 12:10:18.273917] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.333 [2024-12-05 12:10:18.273920] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.333 [2024-12-05 12:10:18.273924] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f690) 00:26:44.333 [2024-12-05 12:10:18.273929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.333 [2024-12-05 12:10:18.273938] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1cf1580, cid 3, qid 0 00:26:44.333 [2024-12-05 12:10:18.274007] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.333 [2024-12-05 12:10:18.274012] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.333 [2024-12-05 12:10:18.274016] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.333 [2024-12-05 12:10:18.274019] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1580) on tqpair=0x1c8f690 00:26:44.333 [2024-12-05 12:10:18.274027] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.333 [2024-12-05 12:10:18.274031] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.333 [2024-12-05 12:10:18.274034] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f690) 00:26:44.333 [2024-12-05 12:10:18.274040] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.333 [2024-12-05 12:10:18.274050] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1580, cid 3, qid 0 00:26:44.333 [2024-12-05 12:10:18.274110] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.333 [2024-12-05 12:10:18.274115] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.333 [2024-12-05 12:10:18.274118] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.333 [2024-12-05 12:10:18.274122] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1580) on tqpair=0x1c8f690 00:26:44.333 [2024-12-05 12:10:18.274130] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.333 [2024-12-05 12:10:18.274133] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.333 [2024-12-05 12:10:18.274137] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f690) 00:26:44.333 [2024-12-05 12:10:18.274142] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.333 [2024-12-05 12:10:18.274152] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1580, cid 3, qid 0 00:26:44.333 [2024-12-05 12:10:18.274213] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.333 [2024-12-05 12:10:18.274218] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.333 [2024-12-05 12:10:18.274221] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.333 [2024-12-05 12:10:18.274225] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1580) on tqpair=0x1c8f690 00:26:44.333 [2024-12-05 12:10:18.274233] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.333 [2024-12-05 12:10:18.274236] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.333 [2024-12-05 12:10:18.274239] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f690) 00:26:44.333 [2024-12-05 12:10:18.274245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.333 [2024-12-05 12:10:18.274258] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1580, cid 3, qid 0 00:26:44.333 [2024-12-05 12:10:18.274323] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.333 [2024-12-05 12:10:18.274329] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.333 [2024-12-05 12:10:18.274332] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.333 [2024-12-05 12:10:18.274335] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1580) on tqpair=0x1c8f690 00:26:44.333 [2024-12-05 12:10:18.274343] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.333 [2024-12-05 12:10:18.274347] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.333 [2024-12-05 12:10:18.274350] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c8f690) 00:26:44.333 [2024-12-05 12:10:18.274356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.333 [2024-12-05 12:10:18.274365] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf1580, cid 3, qid 0 00:26:44.333 [2024-12-05 12:10:18.278382] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.333 [2024-12-05 12:10:18.278387] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.333 [2024-12-05 12:10:18.278391] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.333 [2024-12-05 12:10:18.278394] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf1580) on tqpair=0x1c8f690 00:26:44.333 [2024-12-05 12:10:18.278402] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:26:44.334 00:26:44.334 12:10:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:26:44.334 [2024-12-05 12:10:18.316769] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:26:44.334 [2024-12-05 12:10:18.316816] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid163685 ] 00:26:44.334 [2024-12-05 12:10:18.355641] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:26:44.334 [2024-12-05 12:10:18.355681] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:44.334 [2024-12-05 12:10:18.355686] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:44.334 [2024-12-05 12:10:18.355700] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:44.334 [2024-12-05 12:10:18.355708] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:26:44.334 [2024-12-05 12:10:18.359554] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:26:44.334 [2024-12-05 12:10:18.359580] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x82c690 0 00:26:44.334 [2024-12-05 12:10:18.367375] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:44.334 [2024-12-05 12:10:18.367389] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:44.334 [2024-12-05 12:10:18.367393] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:44.334 [2024-12-05 12:10:18.367396] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:44.334 [2024-12-05 12:10:18.367420] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.334 [2024-12-05 12:10:18.367425] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.334 [2024-12-05 12:10:18.367428] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x82c690) 00:26:44.334 [2024-12-05 12:10:18.367441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:44.334 [2024-12-05 12:10:18.367458] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88e100, cid 0, qid 0 00:26:44.334 [2024-12-05 12:10:18.375377] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.334 [2024-12-05 12:10:18.375386] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.334 [2024-12-05 12:10:18.375389] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.334 [2024-12-05 12:10:18.375393] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x88e100) on tqpair=0x82c690 00:26:44.334 [2024-12-05 12:10:18.375401] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:44.334 [2024-12-05 12:10:18.375407] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:26:44.334 [2024-12-05 12:10:18.375411] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:26:44.334 [2024-12-05 12:10:18.375423] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.334 [2024-12-05 12:10:18.375427] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.334 [2024-12-05 12:10:18.375430] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x82c690) 00:26:44.334 [2024-12-05 12:10:18.375436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.334 [2024-12-05 12:10:18.375448] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88e100, cid 0, qid 0 00:26:44.334 [2024-12-05 12:10:18.375516] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.334 [2024-12-05 12:10:18.375522] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.334 [2024-12-05 12:10:18.375525] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.334 [2024-12-05 12:10:18.375529] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x88e100) on tqpair=0x82c690 00:26:44.334 [2024-12-05 12:10:18.375535] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:26:44.334 [2024-12-05 12:10:18.375543] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:26:44.334 [2024-12-05 12:10:18.375549] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.334 [2024-12-05 12:10:18.375552] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.334 [2024-12-05 12:10:18.375556] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x82c690) 00:26:44.334 [2024-12-05 12:10:18.375561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.334 [2024-12-05 12:10:18.375571] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88e100, cid 0, qid 0 00:26:44.334 [2024-12-05 12:10:18.375674] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.334 [2024-12-05 12:10:18.375680] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.334 [2024-12-05 12:10:18.375682] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.334 [2024-12-05 12:10:18.375686] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x88e100) on tqpair=0x82c690 00:26:44.334 [2024-12-05 12:10:18.375690] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting 
state to check en (no timeout) 00:26:44.334 [2024-12-05 12:10:18.375697] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:26:44.334 [2024-12-05 12:10:18.375703] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.334 [2024-12-05 12:10:18.375706] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.334 [2024-12-05 12:10:18.375709] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x82c690) 00:26:44.334 [2024-12-05 12:10:18.375717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.334 [2024-12-05 12:10:18.375727] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88e100, cid 0, qid 0 00:26:44.334 [2024-12-05 12:10:18.375825] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.334 [2024-12-05 12:10:18.375831] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.334 [2024-12-05 12:10:18.375833] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.334 [2024-12-05 12:10:18.375837] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x88e100) on tqpair=0x82c690 00:26:44.334 [2024-12-05 12:10:18.375841] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:44.334 [2024-12-05 12:10:18.375849] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.334 [2024-12-05 12:10:18.375853] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.334 [2024-12-05 12:10:18.375856] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x82c690) 00:26:44.334 [2024-12-05 12:10:18.375862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.334 [2024-12-05 12:10:18.375871] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88e100, cid 0, qid 0 00:26:44.334 [2024-12-05 12:10:18.375977] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.334 [2024-12-05 12:10:18.375983] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.334 [2024-12-05 12:10:18.375986] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.334 [2024-12-05 12:10:18.375989] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x88e100) on tqpair=0x82c690 00:26:44.334 [2024-12-05 12:10:18.375993] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:26:44.334 [2024-12-05 12:10:18.375997] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:26:44.334 [2024-12-05 12:10:18.376004] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:44.334 [2024-12-05 12:10:18.376111] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:26:44.334 [2024-12-05 12:10:18.376115] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:44.334 [2024-12-05 12:10:18.376122] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.334 [2024-12-05 12:10:18.376125] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.334 [2024-12-05 12:10:18.376128] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x82c690) 00:26:44.334 [2024-12-05 12:10:18.376134] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.334 [2024-12-05 12:10:18.376143] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88e100, cid 0, qid 0 00:26:44.334 [2024-12-05 12:10:18.376201] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.334 [2024-12-05 12:10:18.376207] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.334 [2024-12-05 12:10:18.376210] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.334 [2024-12-05 12:10:18.376213] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x88e100) on tqpair=0x82c690 00:26:44.335 [2024-12-05 12:10:18.376217] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:44.335 [2024-12-05 12:10:18.376225] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.335 [2024-12-05 12:10:18.376229] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.335 [2024-12-05 12:10:18.376232] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x82c690) 00:26:44.335 [2024-12-05 12:10:18.376242] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.335 [2024-12-05 12:10:18.376252] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88e100, cid 0, qid 0 00:26:44.335 [2024-12-05 12:10:18.376359] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.335 [2024-12-05 12:10:18.376365] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.335 [2024-12-05 12:10:18.376373] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.335 [2024-12-05 12:10:18.376376] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x88e100) on tqpair=0x82c690 00:26:44.335 [2024-12-05 12:10:18.376379] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:44.335 [2024-12-05 12:10:18.376384] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:26:44.335 [2024-12-05 12:10:18.376390] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:26:44.335 [2024-12-05 12:10:18.376397] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:26:44.335 [2024-12-05 12:10:18.376405] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.335 [2024-12-05 12:10:18.376408] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x82c690) 00:26:44.335 [2024-12-05 12:10:18.376414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.335 [2024-12-05 12:10:18.376424] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88e100, cid 0, qid 0 00:26:44.335 [2024-12-05 12:10:18.376545] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:44.335 [2024-12-05 12:10:18.376551] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:44.335 [2024-12-05 12:10:18.376554] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:44.335 [2024-12-05 12:10:18.376557] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x82c690): datao=0, datal=4096, cccid=0 00:26:44.335 [2024-12-05 12:10:18.376561] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x88e100) on tqpair(0x82c690): expected_datao=0, payload_size=4096 00:26:44.335 [2024-12-05 12:10:18.376565] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.335 [2024-12-05 12:10:18.376571] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:44.335 [2024-12-05 12:10:18.376574] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:44.335 [2024-12-05 12:10:18.376612] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.335 [2024-12-05 12:10:18.376618] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.335 [2024-12-05 12:10:18.376621] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.335 [2024-12-05 12:10:18.376624] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x88e100) on tqpair=0x82c690 00:26:44.335 [2024-12-05 12:10:18.376630] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:26:44.335 [2024-12-05 12:10:18.376634] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:26:44.335 [2024-12-05 12:10:18.376638] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:26:44.335 [2024-12-05 12:10:18.376642] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:26:44.335 [2024-12-05 12:10:18.376645] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:26:44.335 [2024-12-05 12:10:18.376650] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:26:44.335 [2024-12-05 12:10:18.376660] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:26:44.335 [2024-12-05 12:10:18.376665] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.335 [2024-12-05 12:10:18.376669] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.335 [2024-12-05 12:10:18.376672] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x82c690) 00:26:44.335 [2024-12-05 12:10:18.376678] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:44.335 [2024-12-05 12:10:18.376688] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88e100, cid 0, qid 0 00:26:44.335 [2024-12-05 12:10:18.376763] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.335 [2024-12-05 12:10:18.376769] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.335 [2024-12-05 12:10:18.376773] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.335 [2024-12-05 12:10:18.376776] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x88e100) on tqpair=0x82c690 00:26:44.335 [2024-12-05 12:10:18.376781] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.335 [2024-12-05 12:10:18.376784] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.335 [2024-12-05 12:10:18.376787] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x82c690) 00:26:44.335 [2024-12-05 12:10:18.376793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.335 [2024-12-05 12:10:18.376798] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.335 [2024-12-05 12:10:18.376801] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.335 [2024-12-05 12:10:18.376804] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x82c690) 00:26:44.335 [2024-12-05 12:10:18.376809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:26:44.335 [2024-12-05 12:10:18.376814] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.335 [2024-12-05 12:10:18.376817] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.335 [2024-12-05 12:10:18.376820] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x82c690) 00:26:44.335 [2024-12-05 12:10:18.376825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.335 [2024-12-05 12:10:18.376830] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.335 [2024-12-05 12:10:18.376834] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.335 [2024-12-05 12:10:18.376836] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x82c690) 00:26:44.335 [2024-12-05 12:10:18.376841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.335 [2024-12-05 12:10:18.376845] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:26:44.335 [2024-12-05 12:10:18.376855] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:44.335 [2024-12-05 12:10:18.376861] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.335 [2024-12-05 12:10:18.376864] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x82c690) 00:26:44.335 [2024-12-05 12:10:18.376870] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.335 [2024-12-05 12:10:18.376880] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x88e100, cid 0, qid 0 00:26:44.335 [2024-12-05 12:10:18.376885] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88e280, cid 1, qid 0 00:26:44.335 [2024-12-05 12:10:18.376890] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88e400, cid 2, qid 0 00:26:44.335 [2024-12-05 12:10:18.376895] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88e580, cid 3, qid 0 00:26:44.335 [2024-12-05 12:10:18.376899] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88e700, cid 4, qid 0 00:26:44.335 [2024-12-05 12:10:18.377015] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.335 [2024-12-05 12:10:18.377021] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.335 [2024-12-05 12:10:18.377024] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.335 [2024-12-05 12:10:18.377027] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x88e700) on tqpair=0x82c690 00:26:44.335 [2024-12-05 12:10:18.377031] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:26:44.335 [2024-12-05 12:10:18.377035] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:26:44.335 [2024-12-05 12:10:18.377044] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:26:44.335 [2024-12-05 12:10:18.377051] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:26:44.335 [2024-12-05 12:10:18.377056] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.335 [2024-12-05 12:10:18.377059] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.335 [2024-12-05 
12:10:18.377062] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x82c690) 00:26:44.335 [2024-12-05 12:10:18.377067] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:44.335 [2024-12-05 12:10:18.377077] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88e700, cid 4, qid 0 00:26:44.335 [2024-12-05 12:10:18.377142] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.335 [2024-12-05 12:10:18.377148] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.335 [2024-12-05 12:10:18.377151] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.335 [2024-12-05 12:10:18.377154] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x88e700) on tqpair=0x82c690 00:26:44.335 [2024-12-05 12:10:18.377203] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:26:44.335 [2024-12-05 12:10:18.377213] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:26:44.335 [2024-12-05 12:10:18.377220] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.335 [2024-12-05 12:10:18.377223] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x82c690) 00:26:44.335 [2024-12-05 12:10:18.377229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.335 [2024-12-05 12:10:18.377238] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88e700, cid 4, qid 0 00:26:44.335 [2024-12-05 12:10:18.377325] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:44.335 [2024-12-05 12:10:18.377331] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:44.335 [2024-12-05 12:10:18.377334] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:44.335 [2024-12-05 12:10:18.377337] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x82c690): datao=0, datal=4096, cccid=4 00:26:44.336 [2024-12-05 12:10:18.377341] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x88e700) on tqpair(0x82c690): expected_datao=0, payload_size=4096 00:26:44.336 [2024-12-05 12:10:18.377345] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.336 [2024-12-05 12:10:18.377351] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:44.336 [2024-12-05 12:10:18.377355] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:44.336 [2024-12-05 12:10:18.377364] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.336 [2024-12-05 12:10:18.377375] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.336 [2024-12-05 12:10:18.377379] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.336 [2024-12-05 12:10:18.377382] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x88e700) on tqpair=0x82c690 00:26:44.336 [2024-12-05 12:10:18.377393] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:26:44.336 [2024-12-05 12:10:18.377405] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:26:44.336 [2024-12-05 12:10:18.377413] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:26:44.336 [2024-12-05 12:10:18.377420] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.336 [2024-12-05 12:10:18.377423] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 
on tqpair(0x82c690) 00:26:44.336 [2024-12-05 12:10:18.377429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.336 [2024-12-05 12:10:18.377439] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88e700, cid 4, qid 0 00:26:44.336 [2024-12-05 12:10:18.377524] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:44.336 [2024-12-05 12:10:18.377530] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:44.336 [2024-12-05 12:10:18.377533] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:44.336 [2024-12-05 12:10:18.377537] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x82c690): datao=0, datal=4096, cccid=4 00:26:44.336 [2024-12-05 12:10:18.377541] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x88e700) on tqpair(0x82c690): expected_datao=0, payload_size=4096 00:26:44.336 [2024-12-05 12:10:18.377544] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.336 [2024-12-05 12:10:18.377550] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:44.336 [2024-12-05 12:10:18.377553] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:44.336 [2024-12-05 12:10:18.377561] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.336 [2024-12-05 12:10:18.377567] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.336 [2024-12-05 12:10:18.377570] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.336 [2024-12-05 12:10:18.377573] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x88e700) on tqpair=0x82c690 00:26:44.336 [2024-12-05 12:10:18.377583] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:26:44.336 [2024-12-05 
12:10:18.377592] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:26:44.336 [2024-12-05 12:10:18.377599] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.336 [2024-12-05 12:10:18.377602] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x82c690) 00:26:44.336 [2024-12-05 12:10:18.377608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.336 [2024-12-05 12:10:18.377617] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88e700, cid 4, qid 0 00:26:44.336 [2024-12-05 12:10:18.377725] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:44.336 [2024-12-05 12:10:18.377731] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:44.336 [2024-12-05 12:10:18.377734] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:44.336 [2024-12-05 12:10:18.377737] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x82c690): datao=0, datal=4096, cccid=4 00:26:44.336 [2024-12-05 12:10:18.377741] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x88e700) on tqpair(0x82c690): expected_datao=0, payload_size=4096 00:26:44.336 [2024-12-05 12:10:18.377746] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.336 [2024-12-05 12:10:18.377752] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:44.336 [2024-12-05 12:10:18.377755] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:44.336 [2024-12-05 12:10:18.377763] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.336 [2024-12-05 12:10:18.377769] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.336 [2024-12-05 12:10:18.377772] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.336 [2024-12-05 12:10:18.377775] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x88e700) on tqpair=0x82c690 00:26:44.336 [2024-12-05 12:10:18.377783] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:26:44.336 [2024-12-05 12:10:18.377792] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:26:44.336 [2024-12-05 12:10:18.377799] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:26:44.336 [2024-12-05 12:10:18.377804] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:26:44.336 [2024-12-05 12:10:18.377809] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:26:44.336 [2024-12-05 12:10:18.377814] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:26:44.336 [2024-12-05 12:10:18.377818] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:26:44.336 [2024-12-05 12:10:18.377822] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:26:44.336 [2024-12-05 12:10:18.377827] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:26:44.336 [2024-12-05 12:10:18.377838] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.336 [2024-12-05 12:10:18.377842] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x82c690) 00:26:44.336 [2024-12-05 12:10:18.377847] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.336 [2024-12-05 12:10:18.377853] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.336 [2024-12-05 12:10:18.377856] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.336 [2024-12-05 12:10:18.377859] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x82c690) 00:26:44.336 [2024-12-05 12:10:18.377864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.336 [2024-12-05 12:10:18.377877] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88e700, cid 4, qid 0 00:26:44.336 [2024-12-05 12:10:18.377881] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88e880, cid 5, qid 0 00:26:44.336 [2024-12-05 12:10:18.377998] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.336 [2024-12-05 12:10:18.378004] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.336 [2024-12-05 12:10:18.378006] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.336 [2024-12-05 12:10:18.378010] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x88e700) on tqpair=0x82c690 00:26:44.336 [2024-12-05 12:10:18.378015] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.336 [2024-12-05 12:10:18.378020] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.336 [2024-12-05 12:10:18.378023] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.336 [2024-12-05 12:10:18.378028] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x88e880) on tqpair=0x82c690 00:26:44.336 [2024-12-05 
12:10:18.378036] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.336 [2024-12-05 12:10:18.378039] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x82c690) 00:26:44.336 [2024-12-05 12:10:18.378045] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.336 [2024-12-05 12:10:18.378055] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88e880, cid 5, qid 0 00:26:44.336 [2024-12-05 12:10:18.378148] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.336 [2024-12-05 12:10:18.378153] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.336 [2024-12-05 12:10:18.378157] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.336 [2024-12-05 12:10:18.378160] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x88e880) on tqpair=0x82c690 00:26:44.336 [2024-12-05 12:10:18.378167] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.336 [2024-12-05 12:10:18.378171] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x82c690) 00:26:44.336 [2024-12-05 12:10:18.378176] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.336 [2024-12-05 12:10:18.378185] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88e880, cid 5, qid 0 00:26:44.336 [2024-12-05 12:10:18.378248] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.336 [2024-12-05 12:10:18.378254] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.336 [2024-12-05 12:10:18.378257] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.336 [2024-12-05 12:10:18.378260] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x88e880) on tqpair=0x82c690 00:26:44.336 [2024-12-05 12:10:18.378268] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.336 [2024-12-05 12:10:18.378271] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x82c690) 00:26:44.336 [2024-12-05 12:10:18.378277] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.336 [2024-12-05 12:10:18.378286] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88e880, cid 5, qid 0 00:26:44.336 [2024-12-05 12:10:18.378357] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.336 [2024-12-05 12:10:18.378362] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.336 [2024-12-05 12:10:18.378365] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.336 [2024-12-05 12:10:18.378377] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x88e880) on tqpair=0x82c690 00:26:44.336 [2024-12-05 12:10:18.378391] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.336 [2024-12-05 12:10:18.378395] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x82c690) 00:26:44.336 [2024-12-05 12:10:18.378401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.336 [2024-12-05 12:10:18.378407] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.336 [2024-12-05 12:10:18.378410] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x82c690) 00:26:44.337 [2024-12-05 12:10:18.378416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.337 
[2024-12-05 12:10:18.378422] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.337 [2024-12-05 12:10:18.378425] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x82c690) 00:26:44.337 [2024-12-05 12:10:18.378430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.337 [2024-12-05 12:10:18.378438] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.337 [2024-12-05 12:10:18.378441] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x82c690) 00:26:44.337 [2024-12-05 12:10:18.378446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.337 [2024-12-05 12:10:18.378458] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88e880, cid 5, qid 0 00:26:44.337 [2024-12-05 12:10:18.378463] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88e700, cid 4, qid 0 00:26:44.337 [2024-12-05 12:10:18.378467] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88ea00, cid 6, qid 0 00:26:44.337 [2024-12-05 12:10:18.378471] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88eb80, cid 7, qid 0 00:26:44.337 [2024-12-05 12:10:18.378608] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:44.337 [2024-12-05 12:10:18.378614] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:44.337 [2024-12-05 12:10:18.378617] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:44.337 [2024-12-05 12:10:18.378620] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x82c690): datao=0, datal=8192, cccid=5 00:26:44.337 [2024-12-05 12:10:18.378623] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x88e880) on tqpair(0x82c690): expected_datao=0, payload_size=8192 00:26:44.337 [2024-12-05 12:10:18.378627] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.337 [2024-12-05 12:10:18.378660] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:44.337 [2024-12-05 12:10:18.378664] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:44.337 [2024-12-05 12:10:18.378668] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:44.337 [2024-12-05 12:10:18.378673] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:44.337 [2024-12-05 12:10:18.378676] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:44.337 [2024-12-05 12:10:18.378679] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x82c690): datao=0, datal=512, cccid=4 00:26:44.337 [2024-12-05 12:10:18.378683] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x88e700) on tqpair(0x82c690): expected_datao=0, payload_size=512 00:26:44.337 [2024-12-05 12:10:18.378687] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.337 [2024-12-05 12:10:18.378692] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:44.337 [2024-12-05 12:10:18.378695] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:44.337 [2024-12-05 12:10:18.378700] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:44.337 [2024-12-05 12:10:18.378704] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:44.337 [2024-12-05 12:10:18.378707] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:44.337 [2024-12-05 12:10:18.378710] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x82c690): datao=0, datal=512, cccid=6 00:26:44.337 [2024-12-05 12:10:18.378714] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x88ea00) on tqpair(0x82c690): expected_datao=0, 
payload_size=512 00:26:44.337 [2024-12-05 12:10:18.378718] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.337 [2024-12-05 12:10:18.378723] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:44.337 [2024-12-05 12:10:18.378726] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:44.337 [2024-12-05 12:10:18.378731] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:44.337 [2024-12-05 12:10:18.378736] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:44.337 [2024-12-05 12:10:18.378738] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:44.337 [2024-12-05 12:10:18.378741] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x82c690): datao=0, datal=4096, cccid=7 00:26:44.337 [2024-12-05 12:10:18.378746] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x88eb80) on tqpair(0x82c690): expected_datao=0, payload_size=4096 00:26:44.337 [2024-12-05 12:10:18.378752] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.337 [2024-12-05 12:10:18.378757] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:44.337 [2024-12-05 12:10:18.378761] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:44.337 [2024-12-05 12:10:18.378768] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.337 [2024-12-05 12:10:18.378772] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.337 [2024-12-05 12:10:18.378776] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.337 [2024-12-05 12:10:18.378779] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x88e880) on tqpair=0x82c690 00:26:44.337 [2024-12-05 12:10:18.378789] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.337 [2024-12-05 12:10:18.378794] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.337 [2024-12-05 
12:10:18.378797] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.337 [2024-12-05 12:10:18.378800] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x88e700) on tqpair=0x82c690 00:26:44.337 [2024-12-05 12:10:18.378808] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.337 [2024-12-05 12:10:18.378813] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.337 [2024-12-05 12:10:18.378816] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.337 [2024-12-05 12:10:18.378819] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x88ea00) on tqpair=0x82c690 00:26:44.337 [2024-12-05 12:10:18.378825] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.337 [2024-12-05 12:10:18.378830] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.337 [2024-12-05 12:10:18.378833] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.337 [2024-12-05 12:10:18.378836] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x88eb80) on tqpair=0x82c690 00:26:44.337 ===================================================== 00:26:44.337 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:44.337 ===================================================== 00:26:44.337 Controller Capabilities/Features 00:26:44.337 ================================ 00:26:44.337 Vendor ID: 8086 00:26:44.337 Subsystem Vendor ID: 8086 00:26:44.337 Serial Number: SPDK00000000000001 00:26:44.337 Model Number: SPDK bdev Controller 00:26:44.337 Firmware Version: 25.01 00:26:44.337 Recommended Arb Burst: 6 00:26:44.337 IEEE OUI Identifier: e4 d2 5c 00:26:44.337 Multi-path I/O 00:26:44.337 May have multiple subsystem ports: Yes 00:26:44.337 May have multiple controllers: Yes 00:26:44.337 Associated with SR-IOV VF: No 00:26:44.337 Max Data Transfer Size: 131072 00:26:44.337 Max Number of Namespaces: 32 00:26:44.337 
Max Number of I/O Queues: 127 00:26:44.337 NVMe Specification Version (VS): 1.3 00:26:44.337 NVMe Specification Version (Identify): 1.3 00:26:44.337 Maximum Queue Entries: 128 00:26:44.337 Contiguous Queues Required: Yes 00:26:44.337 Arbitration Mechanisms Supported 00:26:44.337 Weighted Round Robin: Not Supported 00:26:44.337 Vendor Specific: Not Supported 00:26:44.337 Reset Timeout: 15000 ms 00:26:44.337 Doorbell Stride: 4 bytes 00:26:44.337 NVM Subsystem Reset: Not Supported 00:26:44.337 Command Sets Supported 00:26:44.337 NVM Command Set: Supported 00:26:44.337 Boot Partition: Not Supported 00:26:44.337 Memory Page Size Minimum: 4096 bytes 00:26:44.337 Memory Page Size Maximum: 4096 bytes 00:26:44.337 Persistent Memory Region: Not Supported 00:26:44.337 Optional Asynchronous Events Supported 00:26:44.337 Namespace Attribute Notices: Supported 00:26:44.337 Firmware Activation Notices: Not Supported 00:26:44.337 ANA Change Notices: Not Supported 00:26:44.337 PLE Aggregate Log Change Notices: Not Supported 00:26:44.337 LBA Status Info Alert Notices: Not Supported 00:26:44.337 EGE Aggregate Log Change Notices: Not Supported 00:26:44.337 Normal NVM Subsystem Shutdown event: Not Supported 00:26:44.337 Zone Descriptor Change Notices: Not Supported 00:26:44.337 Discovery Log Change Notices: Not Supported 00:26:44.337 Controller Attributes 00:26:44.337 128-bit Host Identifier: Supported 00:26:44.337 Non-Operational Permissive Mode: Not Supported 00:26:44.337 NVM Sets: Not Supported 00:26:44.337 Read Recovery Levels: Not Supported 00:26:44.337 Endurance Groups: Not Supported 00:26:44.337 Predictable Latency Mode: Not Supported 00:26:44.337 Traffic Based Keep ALive: Not Supported 00:26:44.337 Namespace Granularity: Not Supported 00:26:44.337 SQ Associations: Not Supported 00:26:44.337 UUID List: Not Supported 00:26:44.337 Multi-Domain Subsystem: Not Supported 00:26:44.337 Fixed Capacity Management: Not Supported 00:26:44.337 Variable Capacity Management: Not Supported 
00:26:44.337 Delete Endurance Group: Not Supported 00:26:44.337 Delete NVM Set: Not Supported 00:26:44.337 Extended LBA Formats Supported: Not Supported 00:26:44.337 Flexible Data Placement Supported: Not Supported 00:26:44.337 00:26:44.337 Controller Memory Buffer Support 00:26:44.337 ================================ 00:26:44.337 Supported: No 00:26:44.337 00:26:44.337 Persistent Memory Region Support 00:26:44.337 ================================ 00:26:44.337 Supported: No 00:26:44.337 00:26:44.337 Admin Command Set Attributes 00:26:44.337 ============================ 00:26:44.337 Security Send/Receive: Not Supported 00:26:44.337 Format NVM: Not Supported 00:26:44.337 Firmware Activate/Download: Not Supported 00:26:44.337 Namespace Management: Not Supported 00:26:44.337 Device Self-Test: Not Supported 00:26:44.337 Directives: Not Supported 00:26:44.337 NVMe-MI: Not Supported 00:26:44.337 Virtualization Management: Not Supported 00:26:44.337 Doorbell Buffer Config: Not Supported 00:26:44.337 Get LBA Status Capability: Not Supported 00:26:44.337 Command & Feature Lockdown Capability: Not Supported 00:26:44.337 Abort Command Limit: 4 00:26:44.337 Async Event Request Limit: 4 00:26:44.337 Number of Firmware Slots: N/A 00:26:44.338 Firmware Slot 1 Read-Only: N/A 00:26:44.338 Firmware Activation Without Reset: N/A 00:26:44.338 Multiple Update Detection Support: N/A 00:26:44.338 Firmware Update Granularity: No Information Provided 00:26:44.338 Per-Namespace SMART Log: No 00:26:44.338 Asymmetric Namespace Access Log Page: Not Supported 00:26:44.338 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:26:44.338 Command Effects Log Page: Supported 00:26:44.338 Get Log Page Extended Data: Supported 00:26:44.338 Telemetry Log Pages: Not Supported 00:26:44.338 Persistent Event Log Pages: Not Supported 00:26:44.338 Supported Log Pages Log Page: May Support 00:26:44.338 Commands Supported & Effects Log Page: Not Supported 00:26:44.338 Feature Identifiers & Effects Log Page:May Support 
00:26:44.338 NVMe-MI Commands & Effects Log Page: May Support 00:26:44.338 Data Area 4 for Telemetry Log: Not Supported 00:26:44.338 Error Log Page Entries Supported: 128 00:26:44.338 Keep Alive: Supported 00:26:44.338 Keep Alive Granularity: 10000 ms 00:26:44.338 00:26:44.338 NVM Command Set Attributes 00:26:44.338 ========================== 00:26:44.338 Submission Queue Entry Size 00:26:44.338 Max: 64 00:26:44.338 Min: 64 00:26:44.338 Completion Queue Entry Size 00:26:44.338 Max: 16 00:26:44.338 Min: 16 00:26:44.338 Number of Namespaces: 32 00:26:44.338 Compare Command: Supported 00:26:44.338 Write Uncorrectable Command: Not Supported 00:26:44.338 Dataset Management Command: Supported 00:26:44.338 Write Zeroes Command: Supported 00:26:44.338 Set Features Save Field: Not Supported 00:26:44.338 Reservations: Supported 00:26:44.338 Timestamp: Not Supported 00:26:44.338 Copy: Supported 00:26:44.338 Volatile Write Cache: Present 00:26:44.338 Atomic Write Unit (Normal): 1 00:26:44.338 Atomic Write Unit (PFail): 1 00:26:44.338 Atomic Compare & Write Unit: 1 00:26:44.338 Fused Compare & Write: Supported 00:26:44.338 Scatter-Gather List 00:26:44.338 SGL Command Set: Supported 00:26:44.338 SGL Keyed: Supported 00:26:44.338 SGL Bit Bucket Descriptor: Not Supported 00:26:44.338 SGL Metadata Pointer: Not Supported 00:26:44.338 Oversized SGL: Not Supported 00:26:44.338 SGL Metadata Address: Not Supported 00:26:44.338 SGL Offset: Supported 00:26:44.338 Transport SGL Data Block: Not Supported 00:26:44.338 Replay Protected Memory Block: Not Supported 00:26:44.338 00:26:44.338 Firmware Slot Information 00:26:44.338 ========================= 00:26:44.338 Active slot: 1 00:26:44.338 Slot 1 Firmware Revision: 25.01 00:26:44.338 00:26:44.338 00:26:44.338 Commands Supported and Effects 00:26:44.338 ============================== 00:26:44.338 Admin Commands 00:26:44.338 -------------- 00:26:44.338 Get Log Page (02h): Supported 00:26:44.338 Identify (06h): Supported 00:26:44.338 Abort 
(08h): Supported 00:26:44.338 Set Features (09h): Supported 00:26:44.338 Get Features (0Ah): Supported 00:26:44.338 Asynchronous Event Request (0Ch): Supported 00:26:44.338 Keep Alive (18h): Supported 00:26:44.338 I/O Commands 00:26:44.338 ------------ 00:26:44.338 Flush (00h): Supported LBA-Change 00:26:44.338 Write (01h): Supported LBA-Change 00:26:44.338 Read (02h): Supported 00:26:44.338 Compare (05h): Supported 00:26:44.338 Write Zeroes (08h): Supported LBA-Change 00:26:44.338 Dataset Management (09h): Supported LBA-Change 00:26:44.338 Copy (19h): Supported LBA-Change 00:26:44.338 00:26:44.338 Error Log 00:26:44.338 ========= 00:26:44.338 00:26:44.338 Arbitration 00:26:44.338 =========== 00:26:44.338 Arbitration Burst: 1 00:26:44.338 00:26:44.338 Power Management 00:26:44.338 ================ 00:26:44.338 Number of Power States: 1 00:26:44.338 Current Power State: Power State #0 00:26:44.338 Power State #0: 00:26:44.338 Max Power: 0.00 W 00:26:44.338 Non-Operational State: Operational 00:26:44.338 Entry Latency: Not Reported 00:26:44.338 Exit Latency: Not Reported 00:26:44.338 Relative Read Throughput: 0 00:26:44.338 Relative Read Latency: 0 00:26:44.338 Relative Write Throughput: 0 00:26:44.338 Relative Write Latency: 0 00:26:44.338 Idle Power: Not Reported 00:26:44.338 Active Power: Not Reported 00:26:44.338 Non-Operational Permissive Mode: Not Supported 00:26:44.338 00:26:44.338 Health Information 00:26:44.338 ================== 00:26:44.338 Critical Warnings: 00:26:44.338 Available Spare Space: OK 00:26:44.338 Temperature: OK 00:26:44.338 Device Reliability: OK 00:26:44.338 Read Only: No 00:26:44.338 Volatile Memory Backup: OK 00:26:44.338 Current Temperature: 0 Kelvin (-273 Celsius) 00:26:44.338 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:26:44.338 Available Spare: 0% 00:26:44.338 Available Spare Threshold: 0% 00:26:44.338 Life Percentage Used:[2024-12-05 12:10:18.378914] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.338 
[2024-12-05 12:10:18.378918] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x82c690) 00:26:44.338 [2024-12-05 12:10:18.378924] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.338 [2024-12-05 12:10:18.378935] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88eb80, cid 7, qid 0 00:26:44.338 [2024-12-05 12:10:18.379008] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.338 [2024-12-05 12:10:18.379014] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.338 [2024-12-05 12:10:18.379017] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.338 [2024-12-05 12:10:18.379020] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x88eb80) on tqpair=0x82c690 00:26:44.338 [2024-12-05 12:10:18.379050] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:26:44.338 [2024-12-05 12:10:18.379059] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x88e100) on tqpair=0x82c690 00:26:44.338 [2024-12-05 12:10:18.379065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.338 [2024-12-05 12:10:18.379069] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x88e280) on tqpair=0x82c690 00:26:44.338 [2024-12-05 12:10:18.379074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.338 [2024-12-05 12:10:18.379078] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x88e400) on tqpair=0x82c690 00:26:44.338 [2024-12-05 12:10:18.379082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.338 
[2024-12-05 12:10:18.379086] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x88e580) on tqpair=0x82c690 00:26:44.338 [2024-12-05 12:10:18.379090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.338 [2024-12-05 12:10:18.379098] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.338 [2024-12-05 12:10:18.379102] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.338 [2024-12-05 12:10:18.379105] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x82c690) 00:26:44.338 [2024-12-05 12:10:18.379110] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.338 [2024-12-05 12:10:18.379122] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88e580, cid 3, qid 0 00:26:44.338 [2024-12-05 12:10:18.379208] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.338 [2024-12-05 12:10:18.379213] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.338 [2024-12-05 12:10:18.379216] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.338 [2024-12-05 12:10:18.379219] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x88e580) on tqpair=0x82c690 00:26:44.338 [2024-12-05 12:10:18.379225] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.338 [2024-12-05 12:10:18.379228] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.338 [2024-12-05 12:10:18.379231] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x82c690) 00:26:44.338 [2024-12-05 12:10:18.379237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.338 [2024-12-05 12:10:18.379250] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88e580, cid 3, qid 0 00:26:44.338 [2024-12-05 12:10:18.379353] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.338 [2024-12-05 12:10:18.379359] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.338 [2024-12-05 12:10:18.379362] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.338 [2024-12-05 12:10:18.379365] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x88e580) on tqpair=0x82c690 00:26:44.338 [2024-12-05 12:10:18.383377] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:26:44.339 [2024-12-05 12:10:18.383382] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:26:44.339 [2024-12-05 12:10:18.383391] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:44.339 [2024-12-05 12:10:18.383396] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:44.339 [2024-12-05 12:10:18.383399] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x82c690) 00:26:44.339 [2024-12-05 12:10:18.383405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.339 [2024-12-05 12:10:18.383416] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88e580, cid 3, qid 0 00:26:44.339 [2024-12-05 12:10:18.383500] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:44.339 [2024-12-05 12:10:18.383506] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:44.339 [2024-12-05 12:10:18.383509] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:44.339 [2024-12-05 12:10:18.383512] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x88e580) on tqpair=0x82c690 00:26:44.339 [2024-12-05 12:10:18.383519] 
nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 0 milliseconds 00:26:44.339 0% 00:26:44.339 Data Units Read: 0 00:26:44.339 Data Units Written: 0 00:26:44.339 Host Read Commands: 0 00:26:44.339 Host Write Commands: 0 00:26:44.339 Controller Busy Time: 0 minutes 00:26:44.339 Power Cycles: 0 00:26:44.339 Power On Hours: 0 hours 00:26:44.339 Unsafe Shutdowns: 0 00:26:44.339 Unrecoverable Media Errors: 0 00:26:44.339 Lifetime Error Log Entries: 0 00:26:44.339 Warning Temperature Time: 0 minutes 00:26:44.339 Critical Temperature Time: 0 minutes 00:26:44.339 00:26:44.339 Number of Queues 00:26:44.339 ================ 00:26:44.339 Number of I/O Submission Queues: 127 00:26:44.339 Number of I/O Completion Queues: 127 00:26:44.339 00:26:44.339 Active Namespaces 00:26:44.339 ================= 00:26:44.339 Namespace ID:1 00:26:44.339 Error Recovery Timeout: Unlimited 00:26:44.339 Command Set Identifier: NVM (00h) 00:26:44.339 Deallocate: Supported 00:26:44.339 Deallocated/Unwritten Error: Not Supported 00:26:44.339 Deallocated Read Value: Unknown 00:26:44.339 Deallocate in Write Zeroes: Not Supported 00:26:44.339 Deallocated Guard Field: 0xFFFF 00:26:44.339 Flush: Supported 00:26:44.339 Reservation: Supported 00:26:44.339 Namespace Sharing Capabilities: Multiple Controllers 00:26:44.339 Size (in LBAs): 131072 (0GiB) 00:26:44.339 Capacity (in LBAs): 131072 (0GiB) 00:26:44.339 Utilization (in LBAs): 131072 (0GiB) 00:26:44.339 NGUID: ABCDEF0123456789ABCDEF0123456789 00:26:44.339 EUI64: ABCDEF0123456789 00:26:44.339 UUID: a41b1ad0-7585-4d6a-905e-2b32aad7e124 00:26:44.339 Thin Provisioning: Not Supported 00:26:44.339 Per-NS Atomic Units: Yes 00:26:44.339 Atomic Boundary Size (Normal): 0 00:26:44.339 Atomic Boundary Size (PFail): 0 00:26:44.339 Atomic Boundary Offset: 0 00:26:44.339 Maximum Single Source Range Length: 65535 00:26:44.339 Maximum Copy Length: 65535 00:26:44.339 Maximum Source Range Count: 1 00:26:44.339 
NGUID/EUI64 Never Reused: No 00:26:44.339 Namespace Write Protected: No 00:26:44.339 Number of LBA Formats: 1 00:26:44.339 Current LBA Format: LBA Format #00 00:26:44.339 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:44.339 00:26:44.339 12:10:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:26:44.339 12:10:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:44.339 12:10:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.339 12:10:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:44.339 12:10:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.339 12:10:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:26:44.339 12:10:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:26:44.339 12:10:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # nvmfcleanup 00:26:44.339 12:10:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@99 -- # sync 00:26:44.339 12:10:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:26:44.339 12:10:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # set +e 00:26:44.339 12:10:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # for i in {1..20} 00:26:44.339 12:10:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:26:44.339 rmmod nvme_tcp 00:26:44.339 rmmod nvme_fabrics 00:26:44.339 rmmod nvme_keyring 00:26:44.339 12:10:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:26:44.339 12:10:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # set -e 00:26:44.339 12:10:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # return 0 00:26:44.339 12:10:18 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@336 -- # '[' -n 163495 ']' 00:26:44.339 12:10:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@337 -- # killprocess 163495 00:26:44.339 12:10:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 163495 ']' 00:26:44.339 12:10:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 163495 00:26:44.339 12:10:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:26:44.339 12:10:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:44.339 12:10:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 163495 00:26:44.598 12:10:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:44.598 12:10:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:44.598 12:10:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 163495' 00:26:44.598 killing process with pid 163495 00:26:44.598 12:10:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 163495 00:26:44.599 12:10:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 163495 00:26:44.599 12:10:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:26:44.599 12:10:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # nvmf_fini 00:26:44.599 12:10:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@264 -- # local dev 00:26:44.599 12:10:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@267 -- # remove_target_ns 00:26:44.599 12:10:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:44.599 12:10:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:44.599 12:10:18 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:47.134 12:10:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@268 -- # delete_main_bridge 00:26:47.134 12:10:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:26:47.134 12:10:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@130 -- # return 0 00:26:47.134 12:10:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:26:47.134 12:10:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:26:47.134 12:10:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:26:47.134 12:10:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:26:47.134 12:10:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:26:47.134 12:10:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:26:47.134 12:10:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:26:47.134 12:10:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:26:47.134 12:10:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:26:47.134 12:10:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:26:47.134 12:10:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:26:47.134 12:10:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:26:47.134 12:10:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:26:47.134 12:10:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:26:47.134 12:10:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:26:47.134 12:10:20 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:26:47.134 12:10:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:26:47.134 12:10:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@41 -- # _dev=0 00:26:47.134 12:10:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@41 -- # dev_map=() 00:26:47.134 12:10:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@284 -- # iptr 00:26:47.134 12:10:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@542 -- # iptables-save 00:26:47.134 12:10:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:26:47.134 12:10:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@542 -- # iptables-restore 00:26:47.134 00:26:47.134 real 0m10.071s 00:26:47.134 user 0m8.060s 00:26:47.134 sys 0m4.943s 00:26:47.134 12:10:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:47.134 12:10:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:47.134 ************************************ 00:26:47.134 END TEST nvmf_identify 00:26:47.134 ************************************ 00:26:47.134 12:10:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@21 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:47.134 12:10:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:47.134 12:10:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:47.134 12:10:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.134 ************************************ 00:26:47.134 START TEST nvmf_perf 00:26:47.134 ************************************ 00:26:47.134 12:10:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:47.134 * Looking for test storage... 
00:26:47.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:47.134 12:10:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:47.134 12:10:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:26:47.134 12:10:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:47.134 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:47.134 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:47.134 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:47.134 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:47.134 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:26:47.134 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:26:47.134 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:26:47.134 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:26:47.134 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:26:47.134 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:26:47.134 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:26:47.134 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:47.134 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:26:47.134 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:26:47.134 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:47.134 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:47.134 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:26:47.134 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:26:47.134 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:47.134 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:26:47.134 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:47.134 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:26:47.134 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:26:47.134 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:47.134 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:26:47.134 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:47.134 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:47.134 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:47.134 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:26:47.134 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:47.134 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:47.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:47.134 --rc genhtml_branch_coverage=1 00:26:47.134 --rc genhtml_function_coverage=1 00:26:47.134 --rc genhtml_legend=1 00:26:47.134 --rc geninfo_all_blocks=1 00:26:47.134 --rc geninfo_unexecuted_blocks=1 00:26:47.134 00:26:47.134 ' 00:26:47.134 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:47.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:26:47.134 --rc genhtml_branch_coverage=1 00:26:47.134 --rc genhtml_function_coverage=1 00:26:47.134 --rc genhtml_legend=1 00:26:47.134 --rc geninfo_all_blocks=1 00:26:47.134 --rc geninfo_unexecuted_blocks=1 00:26:47.134 00:26:47.134 ' 00:26:47.134 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:47.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:47.134 --rc genhtml_branch_coverage=1 00:26:47.134 --rc genhtml_function_coverage=1 00:26:47.134 --rc genhtml_legend=1 00:26:47.134 --rc geninfo_all_blocks=1 00:26:47.134 --rc geninfo_unexecuted_blocks=1 00:26:47.134 00:26:47.134 ' 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:47.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:47.135 --rc genhtml_branch_coverage=1 00:26:47.135 --rc genhtml_function_coverage=1 00:26:47.135 --rc genhtml_legend=1 00:26:47.135 --rc geninfo_all_blocks=1 00:26:47.135 --rc geninfo_unexecuted_blocks=1 00:26:47.135 00:26:47.135 ' 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@50 -- # : 0 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:26:47.135 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:26:47.135 12:10:21 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@54 -- # have_pci_nics=0 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # prepare_net_devs 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # local -g is_hw=no 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # remove_target_ns 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # xtrace_disable 00:26:47.135 12:10:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:53.703 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:53.703 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@131 -- # pci_devs=() 00:26:53.703 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@131 -- # local -a pci_devs 00:26:53.703 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@132 -- # pci_net_devs=() 00:26:53.703 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:26:53.703 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@133 -- # pci_drivers=() 00:26:53.703 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@133 -- # local -A pci_drivers 00:26:53.703 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@135 -- # net_devs=() 00:26:53.703 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@135 -- # local -ga net_devs 00:26:53.703 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@136 -- # e810=() 00:26:53.703 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@136 -- # local -ga e810 00:26:53.703 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@137 -- # x722=() 00:26:53.703 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@137 -- # local -ga x722 00:26:53.703 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@138 -- # mlx=() 00:26:53.703 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@138 -- # local -ga mlx 00:26:53.703 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:53.703 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:53.703 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:53.703 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:53.703 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:53.703 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:53.703 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:53.703 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:53.703 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:53.703 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:53.703 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:53.703 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:53.703 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:26:53.703 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:26:53.703 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:26:53.703 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:26:53.703 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:26:53.703 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:26:53.703 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:53.703 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:53.703 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:53.703 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:53.703 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:53.703 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:53.703 12:10:26 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:53.703 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:53.704 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:53.704 Found net devices under 0000:86:00.0: cvl_0_0 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:53.704 Found net devices under 0000:86:00.1: cvl_0_1 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # is_hw=yes 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@255 -- # local 
total_initiator_target_pairs=1 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@257 -- # create_target_ns 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@27 -- # local -gA dev_map 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@28 -- # local -g _dev 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev < max 
+ no )) 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@44 -- # ips=() 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@11 -- # local val=167772161 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:26:53.704 10.0.0.1 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@11 -- # local val=167772162 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip=10.0.0.2 
00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:26:53.704 10.0.0.2 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 
00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:26:53.704 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:53.705 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@38 -- # ping_ips 1 00:26:53.705 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:26:53.705 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:26:53.705 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:26:53.705 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:26:53.705 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:26:53.705 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:26:53.705 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:26:53.705 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:26:53.705 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:26:53.705 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/setup.sh@107 -- # local dev=initiator0 00:26:53.705 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:26:53.705 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:26:53.705 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:26:53.705 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:26:53.705 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:26:53.705 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:26:53.705 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:26:53.705 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:26:53.705 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:26:53.705 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:26:53.705 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:26:53.705 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:53.705 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:53.705 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:26:53.705 12:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:26:53.705 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:53.705 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.377 ms 00:26:53.705 00:26:53.705 --- 10.0.0.1 ping statistics --- 00:26:53.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:53.705 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # get_net_dev target0 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@107 -- # local dev=target0 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 
00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:26:53.705 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:53.705 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:26:53.705 00:26:53.705 --- 10.0.0.2 ping statistics --- 00:26:53.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:53.705 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # (( pair++ )) 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # return 0 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@183 -- # 
get_ip_address initiator0 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@107 -- # local dev=initiator0 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # 
get_net_dev initiator1 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@107 -- # local dev=initiator1 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # return 1 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # dev= 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@169 -- # return 0 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # get_net_dev target0 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@107 -- # local dev=target0 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # eval 'ip netns exec 
nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # get_net_dev target1 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@107 -- # local dev=target1 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:26:53.705 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # return 1 00:26:53.706 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # dev= 00:26:53.706 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@169 -- # return 0 00:26:53.706 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:26:53.706 
12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:53.706 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:26:53.706 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:26:53.706 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:53.706 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:26:53.706 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:26:53.706 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:26:53.706 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:26:53.706 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:53.706 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:53.706 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # nvmfpid=167295 00:26:53.706 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:53.706 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # waitforlisten 167295 00:26:53.706 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 167295 ']' 00:26:53.706 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:53.706 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:53.706 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:53.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:53.706 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:53.706 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:53.706 [2024-12-05 12:10:27.194150] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:26:53.706 [2024-12-05 12:10:27.194200] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:53.706 [2024-12-05 12:10:27.271762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:53.706 [2024-12-05 12:10:27.314649] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:53.706 [2024-12-05 12:10:27.314686] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:53.706 [2024-12-05 12:10:27.314693] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:53.706 [2024-12-05 12:10:27.314702] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:53.706 [2024-12-05 12:10:27.314707] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:53.706 [2024-12-05 12:10:27.316364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:53.706 [2024-12-05 12:10:27.316544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:53.706 [2024-12-05 12:10:27.316577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:53.706 [2024-12-05 12:10:27.316578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:53.706 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:53.706 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:26:53.706 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:26:53.706 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:53.706 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:53.706 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:53.706 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:53.706 12:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:26:56.995 12:10:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:26:56.995 12:10:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:26:56.995 12:10:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:26:56.995 12:10:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:56.995 12:10:30 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:26:56.995 12:10:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:26:56.995 12:10:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:26:56.995 12:10:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:26:56.995 12:10:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:56.995 [2024-12-05 12:10:31.095643] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:56.995 12:10:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:57.254 12:10:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:57.254 12:10:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:57.513 12:10:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:57.513 12:10:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:57.772 12:10:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:57.772 [2024-12-05 12:10:31.904027] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:57.772 12:10:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:26:58.031 12:10:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:26:58.031 12:10:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:26:58.031 12:10:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:26:58.031 12:10:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:26:59.409 Initializing NVMe Controllers 00:26:59.409 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:26:59.409 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:26:59.409 Initialization complete. Launching workers. 00:26:59.409 ======================================================== 00:26:59.409 Latency(us) 00:26:59.409 Device Information : IOPS MiB/s Average min max 00:26:59.409 PCIE (0000:5e:00.0) NSID 1 from core 0: 99714.71 389.51 320.55 20.21 4383.13 00:26:59.409 ======================================================== 00:26:59.409 Total : 99714.71 389.51 320.55 20.21 4383.13 00:26:59.409 00:26:59.409 12:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:00.788 Initializing NVMe Controllers 00:27:00.788 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:00.788 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:00.788 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:00.788 Initialization complete. Launching workers. 
00:27:00.788 ======================================================== 00:27:00.788 Latency(us) 00:27:00.788 Device Information : IOPS MiB/s Average min max 00:27:00.788 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 122.00 0.48 8431.78 111.18 45218.67 00:27:00.788 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 46.00 0.18 22709.61 7204.75 47897.94 00:27:00.788 ======================================================== 00:27:00.788 Total : 168.00 0.66 12341.19 111.18 47897.94 00:27:00.788 00:27:00.788 12:10:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:02.194 Initializing NVMe Controllers 00:27:02.194 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:02.194 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:02.194 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:02.194 Initialization complete. Launching workers. 
00:27:02.194 ======================================================== 00:27:02.194 Latency(us) 00:27:02.194 Device Information : IOPS MiB/s Average min max 00:27:02.194 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11246.15 43.93 2848.04 387.98 45182.84 00:27:02.194 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3859.96 15.08 8322.60 4636.14 16032.46 00:27:02.194 ======================================================== 00:27:02.194 Total : 15106.11 59.01 4246.92 387.98 45182.84 00:27:02.194 00:27:02.194 12:10:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:27:02.194 12:10:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:27:02.194 12:10:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:04.802 Initializing NVMe Controllers 00:27:04.802 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:04.802 Controller IO queue size 128, less than required. 00:27:04.802 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:04.802 Controller IO queue size 128, less than required. 00:27:04.802 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:04.802 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:04.802 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:04.802 Initialization complete. Launching workers. 
00:27:04.802 ======================================================== 00:27:04.802 Latency(us) 00:27:04.802 Device Information : IOPS MiB/s Average min max 00:27:04.802 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1773.05 443.26 73078.02 50684.64 116633.58 00:27:04.802 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 567.56 141.89 235168.31 79718.39 336383.56 00:27:04.802 ======================================================== 00:27:04.802 Total : 2340.60 585.15 112382.06 50684.64 336383.56 00:27:04.802 00:27:04.802 12:10:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:27:04.802 No valid NVMe controllers or AIO or URING devices found 00:27:04.802 Initializing NVMe Controllers 00:27:04.802 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:04.802 Controller IO queue size 128, less than required. 00:27:04.802 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:04.802 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:27:04.802 Controller IO queue size 128, less than required. 00:27:04.802 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:04.802 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:27:04.802 WARNING: Some requested NVMe devices were skipped 00:27:04.802 12:10:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:27:07.368 Initializing NVMe Controllers 00:27:07.368 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:07.368 Controller IO queue size 128, less than required. 00:27:07.368 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:07.368 Controller IO queue size 128, less than required. 00:27:07.368 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:07.368 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:07.368 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:07.368 Initialization complete. Launching workers. 
00:27:07.368
00:27:07.368 ====================
00:27:07.368 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:27:07.368 TCP transport:
00:27:07.368 polls: 11483
00:27:07.368 idle_polls: 8285
00:27:07.368 sock_completions: 3198
00:27:07.368 nvme_completions: 5975
00:27:07.368 submitted_requests: 8964
00:27:07.368 queued_requests: 1
00:27:07.368
00:27:07.368 ====================
00:27:07.368 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:27:07.368 TCP transport:
00:27:07.368 polls: 15600
00:27:07.368 idle_polls: 11752
00:27:07.368 sock_completions: 3848
00:27:07.368 nvme_completions: 6771
00:27:07.368 submitted_requests: 10248
00:27:07.368 queued_requests: 1
00:27:07.368 ========================================================
00:27:07.368 Latency(us)
00:27:07.368 Device Information : IOPS MiB/s Average min max
00:27:07.368 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1492.64 373.16 87085.93 56704.28 145051.66
00:27:07.368 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1691.52 422.88 76484.60 41022.62 124672.44
00:27:07.368 ========================================================
00:27:07.368 Total : 3184.16 796.04 81454.18 41022.62 145051.66
00:27:07.368
00:27:07.627 12:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:27:07.627 12:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:07.627 12:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:27:07.627 12:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:27:07.627 12:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:27:07.627 12:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # nvmfcleanup
00:27:07.627 12:10:41 nvmf_tcp.nvmf_host.nvmf_perf
-- nvmf/common.sh@99 -- # sync 00:27:07.627 12:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:27:07.627 12:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # set +e 00:27:07.627 12:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # for i in {1..20} 00:27:07.627 12:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:27:07.627 rmmod nvme_tcp 00:27:07.627 rmmod nvme_fabrics 00:27:07.946 rmmod nvme_keyring 00:27:07.946 12:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:27:07.946 12:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # set -e 00:27:07.946 12:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # return 0 00:27:07.946 12:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # '[' -n 167295 ']' 00:27:07.946 12:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@337 -- # killprocess 167295 00:27:07.946 12:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 167295 ']' 00:27:07.946 12:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 167295 00:27:07.946 12:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:27:07.946 12:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:07.946 12:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 167295 00:27:07.946 12:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:07.946 12:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:07.946 12:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 167295' 00:27:07.946 killing process with pid 167295 00:27:07.946 12:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # 
kill 167295 00:27:07.946 12:10:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 167295 00:27:09.847 12:10:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:27:09.847 12:10:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # nvmf_fini 00:27:09.847 12:10:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@264 -- # local dev 00:27:09.847 12:10:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@267 -- # remove_target_ns 00:27:09.847 12:10:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:09.847 12:10:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:09.847 12:10:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:12.381 12:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@268 -- # delete_main_bridge 00:27:12.381 12:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:27:12.381 12:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@130 -- # return 0 00:27:12.381 12:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:27:12.381 12:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:27:12.381 12:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:27:12.381 12:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:27:12.381 12:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:27:12.381 12:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:27:12.381 12:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:27:12.381 12:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:27:12.381 12:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:27:12.381 12:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:27:12.381 12:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:27:12.381 12:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:27:12.381 12:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:27:12.381 12:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:27:12.381 12:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:27:12.381 12:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:27:12.381 12:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:27:12.381 12:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@41 -- # _dev=0 00:27:12.381 12:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@41 -- # dev_map=() 00:27:12.381 12:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@284 -- # iptr 00:27:12.381 12:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@542 -- # iptables-save 00:27:12.381 12:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:27:12.381 12:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@542 -- # iptables-restore 00:27:12.381 00:27:12.381 real 0m25.215s 00:27:12.381 user 1m6.328s 00:27:12.381 sys 0m8.462s 00:27:12.381 12:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:12.381 12:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:12.381 ************************************ 00:27:12.381 END TEST nvmf_perf 00:27:12.381 ************************************ 00:27:12.381 12:10:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 
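Before the fio_host test starts: the nvmf_perf latency summaries earlier in this log can be sanity-checked, since the Total row is just the per-namespace rows summed. A sketch using the values copied from the second (`--transport-stat`) summary:

```shell
# Cross-check of the second latency table in this log: the Total row
# should equal the NSID 1 and NSID 2 rows added together.
awk 'BEGIN {
  printf "IOPS  total: %.2f\n", 1492.64 + 1691.52   # log reports 3184.16
  printf "MiB/s total: %.2f\n", 373.16  + 422.88    # log reports 796.04
}'
```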
00:27:12.381 12:10:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:12.381 12:10:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:12.381 12:10:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.381 ************************************ 00:27:12.381 START TEST nvmf_fio_host 00:27:12.381 ************************************ 00:27:12.381 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:27:12.381 * Looking for test storage... 00:27:12.381 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:12.381 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:12.381 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:27:12.381 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:12.381 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:12.381 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:12.381 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:12.381 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:12.382 12:10:46 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:27:12.382 12:10:46 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:12.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:12.382 --rc genhtml_branch_coverage=1 00:27:12.382 --rc genhtml_function_coverage=1 00:27:12.382 --rc genhtml_legend=1 00:27:12.382 --rc geninfo_all_blocks=1 00:27:12.382 --rc geninfo_unexecuted_blocks=1 00:27:12.382 00:27:12.382 ' 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:12.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:12.382 --rc genhtml_branch_coverage=1 00:27:12.382 --rc genhtml_function_coverage=1 00:27:12.382 --rc genhtml_legend=1 00:27:12.382 --rc geninfo_all_blocks=1 00:27:12.382 --rc geninfo_unexecuted_blocks=1 00:27:12.382 00:27:12.382 ' 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:12.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:12.382 --rc genhtml_branch_coverage=1 00:27:12.382 --rc genhtml_function_coverage=1 00:27:12.382 --rc genhtml_legend=1 00:27:12.382 --rc geninfo_all_blocks=1 00:27:12.382 --rc geninfo_unexecuted_blocks=1 00:27:12.382 00:27:12.382 ' 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:12.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:12.382 --rc genhtml_branch_coverage=1 00:27:12.382 --rc genhtml_function_coverage=1 00:27:12.382 --rc genhtml_legend=1 00:27:12.382 --rc geninfo_all_blocks=1 00:27:12.382 --rc geninfo_unexecuted_blocks=1 00:27:12.382 00:27:12.382 ' 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:12.382 12:10:46 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:27:12.382 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.383 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:27:12.383 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:27:12.383 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:12.383 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 
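Earlier in this trace, `scripts/common.sh` decided `lt 1.15 2` (is the installed lcov older than 2?) by splitting the dotted versions and comparing field by field. A simplified sketch of that comparison (not the exact cmp_versions implementation, which also handles `-` and `:` separators):

```shell
# Sketch of a dotted-version "less than" test, field by field.
# ver_lt 1.15 2 succeeds because the first fields compare 1 < 2.
ver_lt() {
  local -a a b
  IFS=. read -ra a <<< "$1"
  IFS=. read -ra b <<< "$2"
  local i x y
  for i in 0 1 2; do
    x=${a[i]:-0}; y=${b[i]:-0}   # missing fields count as 0
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1   # equal versions are not "less than"
}
ver_lt 1.15 2 && echo "1.15 < 2"
```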
00:27:12.383 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@50 -- # : 0 00:27:12.383 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:27:12.383 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:27:12.383 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:27:12.383 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:12.383 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:12.383 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:27:12.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:27:12.383 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:27:12.383 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:27:12.383 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@54 -- # have_pci_nics=0 00:27:12.383 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:12.383 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:27:12.383 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:27:12.383 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:12.383 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # prepare_net_devs 00:27:12.383 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # local -g is_hw=no 00:27:12.383 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # remove_target_ns 00:27:12.383 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd 
_remove_target_ns 00:27:12.383 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:12.383 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:12.383 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:27:12.383 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:27:12.383 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # xtrace_disable 00:27:12.383 12:10:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.956 12:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:18.956 12:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@131 -- # pci_devs=() 00:27:18.956 12:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@131 -- # local -a pci_devs 00:27:18.956 12:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@132 -- # pci_net_devs=() 00:27:18.956 12:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:27:18.956 12:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@133 -- # pci_drivers=() 00:27:18.956 12:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@133 -- # local -A pci_drivers 00:27:18.956 12:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@135 -- # net_devs=() 00:27:18.956 12:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@135 -- # local -ga net_devs 00:27:18.956 12:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@136 -- # e810=() 00:27:18.956 12:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@136 -- # local -ga e810 00:27:18.956 12:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@137 -- # x722=() 00:27:18.956 12:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@137 -- # local -ga x722 00:27:18.956 12:10:51 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@138 -- # mlx=() 00:27:18.956 12:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@138 -- # local -ga mlx 00:27:18.956 12:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:18.956 12:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:18.956 12:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:18.956 12:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:18.956 12:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:18.956 12:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:18.956 12:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:18.956 12:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:18.956 12:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:18.956 12:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:18.956 12:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:18.956 12:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:18.957 12:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:27:18.957 12:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:27:18.957 12:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:27:18.957 12:10:51 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:27:18.957 12:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:27:18.957 12:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:27:18.957 12:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:18.957 12:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:18.957 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:18.957 12:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:18.957 12:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:18.957 12:10:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:18.957 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # (( 0 > 0 
)) 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:18.957 Found net devices under 0000:86:00.0: cvl_0_0 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:18.957 Found net devices under 0000:86:00.1: cvl_0_1 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # is_hw=yes 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@257 -- # create_target_ns 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:18.957 12:10:52 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@27 -- # local -gA dev_map 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@28 -- # local -g _dev 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@44 -- # ips=() 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@51 -- # 
_ns=NVMF_TARGET_NS_CMD 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@11 -- # local val=167772161 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev 
cvl_0_0 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:27:18.957 10.0.0.1 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@11 -- # local val=167772162 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:27:18.957 10.0.0.2 
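The `set_ip` calls in the trace above derive dotted-quad addresses from a packed 32-bit pool value (167772161 becomes 10.0.0.1, 167772162 becomes 10.0.0.2) via the `val_to_ip` printf helper. A minimal sketch of that conversion, re-implemented here from the trace output alone (the octet-shift arithmetic is an assumption consistent with the printed `printf '%u.%u.%u.%u\n' 10 0 0 1`):

```shell
#!/usr/bin/env bash
# Hypothetical re-implementation of the val_to_ip helper traced from
# nvmf/setup.sh: unpack a 32-bit integer into four octets, most
# significant octet first.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1 (0x0A000001)
val_to_ip 167772162   # 10.0.0.2
```

This also explains the `ips=("$ip" $((++ip)))` line earlier in the trace: each initiator/target pair simply consumes two consecutive values from the `ip_pool` counter.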
00:27:18.957 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@85 -- # 
dev_map["$key_initiator"]=cvl_0_0 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@38 -- # ping_ips 1 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@107 -- # local dev=initiator0 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:27:18.958 
12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:27:18.958 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:18.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.461 ms 00:27:18.958 00:27:18.958 --- 10.0.0.1 ping statistics --- 00:27:18.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.958 rtt min/avg/max/mdev = 0.461/0.461/0.461/0.000 ms 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # get_net_dev target0 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@107 -- # local dev=target0 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:27:18.958 12:10:52 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:27:18.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:18.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:27:18.958 00:27:18.958 --- 10.0.0.2 ping statistics --- 00:27:18.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.958 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # (( pair++ )) 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # return 0 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:27:18.958 12:10:52 
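The repeated `get_net_dev`/`get_ip_address` lookups in this trace resolve logical names (`initiator0`, `target0`) to physical interfaces through the `dev_map` associative array populated by `setup_interface_pair`. A hedged sketch of that lookup, using values observed in this run (`cvl_0_0`/`cvl_0_1` are this host's E810 netdevs; the fallback behavior mirrors the `[[ -n '' ]] ... return 1` branch seen for the unconfigured `initiator1`/`target1`):

```shell
#!/usr/bin/env bash
# Sketch only: dev_map as populated in this particular run. Keys and
# values are assumptions drawn from the trace, not a fixed API.
declare -A dev_map=([initiator0]=cvl_0_0 [target0]=cvl_0_1)

# Mirrors the traced get_net_dev: print the physical device for a
# logical name, or return 1 when that pair was never configured.
get_net_dev() {
  local dev=$1
  [[ -n ${dev_map[$dev]:-} ]] || return 1
  echo "${dev_map[$dev]}"
}

get_net_dev initiator0                            # cvl_0_0
get_net_dev target1 || echo "no second pair"      # no second pair
```

This is why the trace below sets `NVMF_SECOND_INITIATOR_IP=` and `NVMF_SECOND_TARGET_IP=` to empty: only one interface pair exists, so the `initiator1`/`target1` lookups fail and the callers fall through with empty results.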
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@107 -- # local dev=initiator0 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:27:18.958 12:10:52 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@107 -- # local dev=initiator1 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # return 1 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # dev= 00:27:18.958 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@169 -- # return 0 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # get_net_dev target0 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@107 -- # local dev=target0 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:27:18.959 12:10:52 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # get_net_dev target1 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@107 -- # local dev=target1 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:27:18.959 12:10:52 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # return 1 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # dev= 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@169 -- # return 0 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=173440 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 
-- # waitforlisten 173440 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 173440 ']' 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:18.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.959 [2024-12-05 12:10:52.479313] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:27:18.959 [2024-12-05 12:10:52.479366] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:18.959 [2024-12-05 12:10:52.558344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:18.959 [2024-12-05 12:10:52.601098] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:18.959 [2024-12-05 12:10:52.601136] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:18.959 [2024-12-05 12:10:52.601143] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:18.959 [2024-12-05 12:10:52.601149] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:27:18.959 [2024-12-05 12:10:52.601154] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:18.959 [2024-12-05 12:10:52.602739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:18.959 [2024-12-05 12:10:52.602778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:18.959 [2024-12-05 12:10:52.602890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.959 [2024-12-05 12:10:52.602891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:18.959 [2024-12-05 12:10:52.878011] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.959 12:10:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:27:18.959 Malloc1 00:27:19.219 12:10:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:19.219 12:10:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 
00:27:19.478 12:10:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:19.737 [2024-12-05 12:10:53.724729] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:19.737 12:10:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:20.009 12:10:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:20.009 12:10:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:20.009 12:10:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:20.009 12:10:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:20.009 12:10:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:20.010 12:10:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:20.010 12:10:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:20.010 12:10:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:27:20.010 12:10:53 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:20.010 12:10:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:20.010 12:10:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:20.010 12:10:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:27:20.010 12:10:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:20.010 12:10:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:20.010 12:10:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:20.010 12:10:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:20.010 12:10:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:20.010 12:10:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:27:20.010 12:10:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:20.010 12:10:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:20.010 12:10:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:20.010 12:10:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:20.010 12:10:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 
00:27:20.271 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:20.271 fio-3.35 00:27:20.271 Starting 1 thread 00:27:22.802 00:27:22.802 test: (groupid=0, jobs=1): err= 0: pid=174027: Thu Dec 5 12:10:56 2024 00:27:22.802 read: IOPS=12.0k, BW=46.8MiB/s (49.0MB/s)(93.7MiB/2005msec) 00:27:22.802 slat (nsec): min=1552, max=254098, avg=1708.92, stdev=2250.53 00:27:22.802 clat (usec): min=3156, max=10393, avg=5910.47, stdev=470.37 00:27:22.802 lat (usec): min=3188, max=10394, avg=5912.18, stdev=470.34 00:27:22.802 clat percentiles (usec): 00:27:22.802 | 1.00th=[ 4817], 5.00th=[ 5145], 10.00th=[ 5342], 20.00th=[ 5538], 00:27:22.802 | 30.00th=[ 5669], 40.00th=[ 5800], 50.00th=[ 5932], 60.00th=[ 5997], 00:27:22.802 | 70.00th=[ 6128], 80.00th=[ 6259], 90.00th=[ 6456], 95.00th=[ 6587], 00:27:22.802 | 99.00th=[ 6915], 99.50th=[ 7177], 99.90th=[ 8717], 99.95th=[ 9765], 00:27:22.802 | 99.99th=[10421] 00:27:22.803 bw ( KiB/s): min=47240, max=48408, per=99.95%, avg=47856.00, stdev=527.31, samples=4 00:27:22.803 iops : min=11810, max=12102, avg=11964.00, stdev=131.83, samples=4 00:27:22.803 write: IOPS=11.9k, BW=46.5MiB/s (48.8MB/s)(93.3MiB/2005msec); 0 zone resets 00:27:22.803 slat (nsec): min=1572, max=227687, avg=1760.45, stdev=1645.48 00:27:22.803 clat (usec): min=2437, max=9156, avg=4771.29, stdev=377.26 00:27:22.803 lat (usec): min=2452, max=9158, avg=4773.05, stdev=377.36 00:27:22.803 clat percentiles (usec): 00:27:22.803 | 1.00th=[ 3916], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4490], 00:27:22.803 | 30.00th=[ 4621], 40.00th=[ 4686], 50.00th=[ 4752], 60.00th=[ 4883], 00:27:22.803 | 70.00th=[ 4948], 80.00th=[ 5080], 90.00th=[ 5211], 95.00th=[ 5342], 00:27:22.803 | 99.00th=[ 5604], 99.50th=[ 5800], 99.90th=[ 7177], 99.95th=[ 8160], 00:27:22.803 | 99.99th=[ 8717] 00:27:22.803 bw ( KiB/s): min=47320, max=47936, per=100.00%, avg=47672.00, stdev=286.44, samples=4 00:27:22.803 iops : min=11830, max=11984, avg=11918.00, 
stdev=71.61, samples=4 00:27:22.803 lat (msec) : 4=0.98%, 10=99.01%, 20=0.01% 00:27:22.803 cpu : usr=73.40%, sys=25.55%, ctx=89, majf=0, minf=2 00:27:22.803 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:27:22.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.803 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:22.803 issued rwts: total=23999,23892,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:22.803 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:22.803 00:27:22.803 Run status group 0 (all jobs): 00:27:22.803 READ: bw=46.8MiB/s (49.0MB/s), 46.8MiB/s-46.8MiB/s (49.0MB/s-49.0MB/s), io=93.7MiB (98.3MB), run=2005-2005msec 00:27:22.803 WRITE: bw=46.5MiB/s (48.8MB/s), 46.5MiB/s-46.5MiB/s (48.8MB/s-48.8MB/s), io=93.3MiB (97.9MB), run=2005-2005msec 00:27:22.803 12:10:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:22.803 12:10:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:22.803 12:10:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:22.803 12:10:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:22.803 12:10:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:22.803 12:10:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:22.803 12:10:56 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:27:22.803 12:10:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:22.803 12:10:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:22.803 12:10:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:22.803 12:10:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:27:22.803 12:10:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:22.803 12:10:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:22.803 12:10:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:22.803 12:10:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:22.803 12:10:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:22.803 12:10:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:27:22.803 12:10:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:22.803 12:10:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:22.803 12:10:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:22.803 12:10:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:22.803 12:10:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:22.803 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:27:22.803 fio-3.35 00:27:22.803 Starting 1 thread 00:27:25.338 00:27:25.338 test: (groupid=0, jobs=1): err= 0: pid=174591: Thu Dec 5 12:10:59 2024 00:27:25.338 read: IOPS=11.0k, BW=172MiB/s (180MB/s)(345MiB/2007msec) 00:27:25.338 slat (nsec): min=2463, max=86998, avg=2804.99, stdev=1247.15 00:27:25.338 clat (usec): min=1286, max=13999, avg=6634.93, stdev=1533.12 00:27:25.338 lat (usec): min=1288, max=14013, avg=6637.74, stdev=1533.27 00:27:25.338 clat percentiles (usec): 00:27:25.338 | 1.00th=[ 3490], 5.00th=[ 4228], 10.00th=[ 4686], 20.00th=[ 5276], 00:27:25.338 | 30.00th=[ 5735], 40.00th=[ 6128], 50.00th=[ 6587], 60.00th=[ 7111], 00:27:25.338 | 70.00th=[ 7504], 80.00th=[ 7963], 90.00th=[ 8455], 95.00th=[ 9110], 00:27:25.338 | 99.00th=[10290], 99.50th=[10945], 99.90th=[12649], 99.95th=[13566], 00:27:25.338 | 99.99th=[13960] 00:27:25.338 bw ( KiB/s): min=84640, max=96128, per=51.16%, avg=90128.00, stdev=5148.49, samples=4 00:27:25.338 iops : min= 5290, max= 6008, avg=5633.00, stdev=321.78, samples=4 00:27:25.338 write: IOPS=6578, BW=103MiB/s (108MB/s)(184MiB/1790msec); 0 zone resets 00:27:25.338 slat (usec): min=28, max=426, avg=31.28, stdev= 7.25 00:27:25.338 clat (usec): min=4812, max=15360, avg=8522.60, stdev=1470.09 00:27:25.338 lat (usec): min=4842, max=15476, avg=8553.88, stdev=1471.69 00:27:25.338 clat percentiles (usec): 00:27:25.338 | 1.00th=[ 5800], 5.00th=[ 6456], 10.00th=[ 6783], 20.00th=[ 7308], 00:27:25.338 | 30.00th=[ 7701], 40.00th=[ 8029], 50.00th=[ 8356], 60.00th=[ 8717], 00:27:25.338 | 70.00th=[ 9110], 80.00th=[ 9765], 90.00th=[10552], 95.00th=[11207], 00:27:25.338 | 99.00th=[12256], 99.50th=[12649], 99.90th=[15139], 99.95th=[15270], 00:27:25.338 | 99.99th=[15401] 
00:27:25.338 bw ( KiB/s): min=88992, max=99424, per=89.15%, avg=93840.00, stdev=4740.43, samples=4 00:27:25.338 iops : min= 5562, max= 6214, avg=5865.00, stdev=296.28, samples=4 00:27:25.338 lat (msec) : 2=0.09%, 4=2.05%, 10=90.90%, 20=6.97% 00:27:25.338 cpu : usr=84.50%, sys=14.71%, ctx=28, majf=0, minf=2 00:27:25.338 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:27:25.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:25.339 issued rwts: total=22099,11776,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.339 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:25.339 00:27:25.339 Run status group 0 (all jobs): 00:27:25.339 READ: bw=172MiB/s (180MB/s), 172MiB/s-172MiB/s (180MB/s-180MB/s), io=345MiB (362MB), run=2007-2007msec 00:27:25.339 WRITE: bw=103MiB/s (108MB/s), 103MiB/s-103MiB/s (108MB/s-108MB/s), io=184MiB (193MB), run=1790-1790msec 00:27:25.339 12:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:25.339 12:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:27:25.339 12:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:25.339 12:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:27:25.339 12:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:27:25.339 12:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # nvmfcleanup 00:27:25.339 12:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@99 -- # sync 00:27:25.339 12:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:27:25.339 12:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # set +e 00:27:25.339 12:10:59 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # for i in {1..20} 00:27:25.339 12:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:27:25.339 rmmod nvme_tcp 00:27:25.339 rmmod nvme_fabrics 00:27:25.339 rmmod nvme_keyring 00:27:25.598 12:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:27:25.598 12:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # set -e 00:27:25.598 12:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # return 0 00:27:25.599 12:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # '[' -n 173440 ']' 00:27:25.599 12:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@337 -- # killprocess 173440 00:27:25.599 12:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 173440 ']' 00:27:25.599 12:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 173440 00:27:25.599 12:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:27:25.599 12:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:25.599 12:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 173440 00:27:25.599 12:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:25.599 12:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:25.599 12:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 173440' 00:27:25.599 killing process with pid 173440 00:27:25.599 12:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 173440 00:27:25.599 12:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 173440 00:27:25.599 12:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:27:25.599 12:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # nvmf_fini 00:27:25.599 12:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@264 -- # local dev 00:27:25.599 12:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@267 -- # remove_target_ns 00:27:25.599 12:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:25.599 12:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:25.599 12:10:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:28.130 12:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@268 -- # delete_main_bridge 00:27:28.130 12:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:27:28.130 12:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@130 -- # return 0 00:27:28.130 12:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:27:28.130 12:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:27:28.130 12:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:27:28.130 12:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:27:28.130 12:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:27:28.130 12:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:27:28.130 12:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:27:28.130 12:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:27:28.130 12:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:27:28.130 12:11:01 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:27:28.130 12:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:27:28.130 12:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:27:28.130 12:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:27:28.130 12:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:27:28.130 12:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:27:28.130 12:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:27:28.130 12:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:27:28.130 12:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@41 -- # _dev=0 00:27:28.130 12:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@41 -- # dev_map=() 00:27:28.130 12:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@284 -- # iptr 00:27:28.130 12:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@542 -- # iptables-save 00:27:28.130 12:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:27:28.130 12:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@542 -- # iptables-restore 00:27:28.130 00:27:28.130 real 0m15.703s 00:27:28.130 user 0m46.049s 00:27:28.130 sys 0m6.508s 00:27:28.130 12:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:28.130 12:11:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.130 ************************************ 00:27:28.130 END TEST nvmf_fio_host 00:27:28.130 ************************************ 00:27:28.130 12:11:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 
00:27:28.130 12:11:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:28.130 12:11:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:28.130 12:11:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.130 ************************************ 00:27:28.130 START TEST nvmf_failover 00:27:28.130 ************************************ 00:27:28.130 12:11:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:28.130 * Looking for test storage... 00:27:28.130 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:28.130 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:28.130 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:27:28.130 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:28.130 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:28.130 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:28.130 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:28.130 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:28.130 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:27:28.130 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:27:28.130 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:27:28.130 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:27:28.130 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:27:28.130 12:11:02 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:27:28.130 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:27:28.130 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:28.130 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:27:28.130 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:27:28.130 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:28.130 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:28.130 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:27:28.130 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:27:28.130 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:28.130 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:27:28.130 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:27:28.130 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:27:28.130 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:27:28.130 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:28.130 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:27:28.130 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:27:28.130 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:28.130 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:28.130 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:27:28.130 12:11:02 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:28.130 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:28.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.130 --rc genhtml_branch_coverage=1 00:27:28.130 --rc genhtml_function_coverage=1 00:27:28.130 --rc genhtml_legend=1 00:27:28.130 --rc geninfo_all_blocks=1 00:27:28.130 --rc geninfo_unexecuted_blocks=1 00:27:28.130 00:27:28.130 ' 00:27:28.130 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:28.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.130 --rc genhtml_branch_coverage=1 00:27:28.130 --rc genhtml_function_coverage=1 00:27:28.130 --rc genhtml_legend=1 00:27:28.130 --rc geninfo_all_blocks=1 00:27:28.130 --rc geninfo_unexecuted_blocks=1 00:27:28.130 00:27:28.130 ' 00:27:28.130 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:28.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.131 --rc genhtml_branch_coverage=1 00:27:28.131 --rc genhtml_function_coverage=1 00:27:28.131 --rc genhtml_legend=1 00:27:28.131 --rc geninfo_all_blocks=1 00:27:28.131 --rc geninfo_unexecuted_blocks=1 00:27:28.131 00:27:28.131 ' 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:28.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.131 --rc genhtml_branch_coverage=1 00:27:28.131 --rc genhtml_function_coverage=1 00:27:28.131 --rc genhtml_legend=1 00:27:28.131 --rc geninfo_all_blocks=1 00:27:28.131 --rc geninfo_unexecuted_blocks=1 00:27:28.131 00:27:28.131 ' 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:28.131 12:11:02 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@50 -- # : 0 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:27:28.131 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@54 -- # have_pci_nics=0 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # prepare_net_devs 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # local -g is_hw=no 
00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # remove_target_ns 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # xtrace_disable 00:27:28.131 12:11:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@131 -- # pci_devs=() 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@131 -- # local -a pci_devs 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@132 -- # pci_net_devs=() 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@133 -- # pci_drivers=() 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@133 -- # local -A pci_drivers 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@135 -- # net_devs=() 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@135 -- # local -ga net_devs 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@136 -- # e810=() 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@136 -- # local -ga e810 00:27:34.700 12:11:07 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@137 -- # x722=() 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@137 -- # local -ga x722 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@138 -- # mlx=() 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@138 -- # local -ga mlx 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:27:34.700 12:11:07 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:34.700 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:34.700 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:34.700 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:34.701 Found net devices under 0000:86:00.0: cvl_0_0 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:34.701 12:11:07 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:34.701 Found net devices under 0000:86:00.1: cvl_0_1 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # is_hw=yes 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@257 -- # create_target_ns 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:27:34.701 12:11:07 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@27 -- # local -gA dev_map 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@28 -- # local -g _dev 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@44 -- # ips=() 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:27:34.701 12:11:07 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@11 -- # local val=167772161 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 
-- # ip=10.0.0.1 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:27:34.701 10.0.0.1 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@11 -- # local val=167772162 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:27:34.701 
12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:27:34.701 10.0.0.2 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@541 -- # 
iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@38 -- # ping_ips 1 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:27:34.701 12:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:27:34.701 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:27:34.701 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:27:34.701 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:27:34.701 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:27:34.701 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:27:34.701 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@107 -- # local dev=initiator0 00:27:34.701 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:27:34.701 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:27:34.701 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@110 -- # echo cvl_0_0 
00:27:34.701 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:27:34.702 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:34.702 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.398 ms 00:27:34.702 00:27:34.702 --- 10.0.0.1 ping statistics --- 00:27:34.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:34.702 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # get_net_dev target0 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@107 -- # local dev=target0 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:27:34.702 12:11:08 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:27:34.702 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:34.702 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:27:34.702 00:27:34.702 --- 10.0.0.2 ping statistics --- 00:27:34.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:34.702 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # (( pair++ )) 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # return 0 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:27:34.702 12:11:08 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@107 -- # local dev=initiator0 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:27:34.702 12:11:08 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@107 -- # local dev=initiator1 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # return 1 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # dev= 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@169 -- # return 0 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # get_net_dev target0 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@107 -- # local dev=target0 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:27:34.702 12:11:08 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # get_net_dev target1 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@107 -- # local dev=target1 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:27:34.702 12:11:08 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # return 1 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # dev= 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@169 -- # return 0 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:27:34.702 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:27:34.703 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:34.703 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:34.703 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # nvmfpid=178378 00:27:34.703 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # waitforlisten 178378 00:27:34.703 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:34.703 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 178378 
']' 00:27:34.703 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:34.703 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:34.703 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:34.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:34.703 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:34.703 12:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:34.703 [2024-12-05 12:11:08.200318] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:27:34.703 [2024-12-05 12:11:08.200363] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:34.703 [2024-12-05 12:11:08.280560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:34.703 [2024-12-05 12:11:08.322326] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:34.703 [2024-12-05 12:11:08.322362] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:34.703 [2024-12-05 12:11:08.322375] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:34.703 [2024-12-05 12:11:08.322381] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:34.703 [2024-12-05 12:11:08.322386] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:34.703 [2024-12-05 12:11:08.323771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:34.703 [2024-12-05 12:11:08.323882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:34.703 [2024-12-05 12:11:08.323883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:34.962 12:11:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:34.962 12:11:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:27:34.962 12:11:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:27:34.962 12:11:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:34.962 12:11:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:34.962 12:11:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:34.962 12:11:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:35.221 [2024-12-05 12:11:09.243906] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:35.221 12:11:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:35.480 Malloc0 00:27:35.480 12:11:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:35.480 12:11:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:35.739 12:11:09 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:35.999 [2024-12-05 12:11:10.056070] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:35.999 12:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:36.258 [2024-12-05 12:11:10.252662] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:36.258 12:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:36.258 [2024-12-05 12:11:10.453262] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:27:36.517 12:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:27:36.517 12:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=178854 00:27:36.517 12:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:36.517 12:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 178854 /var/tmp/bdevperf.sock 00:27:36.517 12:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 178854 ']' 00:27:36.517 12:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:36.517 12:11:10 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:36.517 12:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:36.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:36.517 12:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:36.517 12:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:36.775 12:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:36.775 12:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:27:36.775 12:11:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:27:37.034 NVMe0n1 00:27:37.034 12:11:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:27:37.293 00:27:37.293 12:11:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=179063 00:27:37.293 12:11:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:37.293 12:11:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:27:38.230 12:11:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:38.489 [2024-12-05 12:11:12.592212] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x543120 is same with the state(6) to be set 00:27:38.489 12:11:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:27:41.777 12:11:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:27:41.777 00:27:41.777 12:11:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:42.036 [2024-12-05 12:11:16.149749] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x543dd0 is same with the state(6) to be set 00:27:42.037 12:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:27:45.325 12:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:45.325 [2024-12-05 12:11:19.357193] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:45.325 12:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:27:46.261 12:11:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:46.521 [2024-12-05 12:11:20.574337] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690340 is same with the state(6) to be set 00:27:46.522 12:11:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 179063 00:27:53.091 { 00:27:53.091 "results": [ 00:27:53.091 { 00:27:53.091 "job": "NVMe0n1", 00:27:53.091 "core_mask": "0x1", 00:27:53.091 "workload": "verify", 00:27:53.091 "status": "finished", 00:27:53.091 "verify_range": { 00:27:53.091 "start": 0, 00:27:53.091 "length": 16384 00:27:53.091 }, 00:27:53.091 "queue_depth": 128, 00:27:53.091 "io_size": 4096, 00:27:53.091 "runtime": 15.010571, 00:27:53.091 "iops": 11310.495783271668, 00:27:53.091 "mibps": 44.181624153404954, 00:27:53.091 "io_failed": 9917, 00:27:53.091 "io_timeout": 0, 00:27:53.091 "avg_latency_us": 10670.83334638197, 00:27:53.091 "min_latency_us": 421.30285714285714, 00:27:53.091 "max_latency_us": 12732.708571428571 00:27:53.091 } 00:27:53.091 ], 00:27:53.091 "core_count": 1 00:27:53.091 } 00:27:53.091 12:11:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- 
killprocess 178854
00:27:53.091 12:11:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 178854 ']'
00:27:53.091 12:11:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 178854
00:27:53.091 12:11:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:27:53.091 12:11:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:53.091 12:11:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 178854
00:27:53.091 12:11:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:27:53.091 12:11:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:27:53.091 12:11:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 178854'
killing process with pid 178854
00:27:53.091 12:11:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 178854
00:27:53.091 12:11:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 178854
00:27:53.091 12:11:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:27:53.091 [2024-12-05 12:11:10.512054] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization...
00:27:53.091 [2024-12-05 12:11:10.512104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178854 ] 00:27:53.091 [2024-12-05 12:11:10.585233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.091 [2024-12-05 12:11:10.626402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:53.091 Running I/O for 15 seconds... 00:27:53.091 11341.00 IOPS, 44.30 MiB/s [2024-12-05T11:11:27.287Z] [2024-12-05 12:11:12.593043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:99920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.091 [2024-12-05 12:11:12.593081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.091 [2024-12-05 12:11:12.593097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:99928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.091 [2024-12-05 12:11:12.593105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.091 [2024-12-05 12:11:12.593115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:99936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.091 [2024-12-05 12:11:12.593122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.091 [2024-12-05 12:11:12.593130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.091 [2024-12-05 12:11:12.593137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:27:53.091 [2024-12-05 12:11:12.593146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.091 [2024-12-05 12:11:12.593153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.091 [2024-12-05 12:11:12.593162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:99960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.091 [2024-12-05 12:11:12.593170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.091 [2024-12-05 12:11:12.593179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:99968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.091 [2024-12-05 12:11:12.593185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.091 [2024-12-05 12:11:12.593194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:99976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.091 [2024-12-05 12:11:12.593201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.091 [2024-12-05 12:11:12.593209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:99984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.091 [2024-12-05 12:11:12.593216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.091 [2024-12-05 12:11:12.593224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:100000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.091 [2024-12-05 12:11:12.593231] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.091 [2024-12-05 12:11:12.593239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:100008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.091 [2024-12-05 12:11:12.593246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.091 [2024-12-05 12:11:12.593258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:100016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.091 [2024-12-05 12:11:12.593266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.091 [2024-12-05 12:11:12.593274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.091 [2024-12-05 12:11:12.593281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.091 [2024-12-05 12:11:12.593289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:100032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.091 [2024-12-05 12:11:12.593296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.091 [2024-12-05 12:11:12.593305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:100040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.091 [2024-12-05 12:11:12.593312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.091 [2024-12-05 12:11:12.593320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:51 nsid:1 lba:100048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.091 [2024-12-05 12:11:12.593327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.091 [2024-12-05 12:11:12.593335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:99992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.091 [2024-12-05 12:11:12.593341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.091 [2024-12-05 12:11:12.593350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:100056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.091 [2024-12-05 12:11:12.593357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.091 [2024-12-05 12:11:12.593364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:100064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.091 [2024-12-05 12:11:12.593377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.091 [2024-12-05 12:11:12.593385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:100072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.091 [2024-12-05 12:11:12.593391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.091 [2024-12-05 12:11:12.593400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.091 [2024-12-05 12:11:12.593407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:53.091 [2024-12-05 12:11:12.593418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:100088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.091 [2024-12-05 12:11:12.593425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.092 [2024-12-05 12:11:12.593433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:100096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.092 [2024-12-05 12:11:12.593440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.092 [2024-12-05 12:11:12.593448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:100104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.092 [2024-12-05 12:11:12.593457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.092 [2024-12-05 12:11:12.593465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.092 [2024-12-05 12:11:12.593472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.092 [2024-12-05 12:11:12.593481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.092 [2024-12-05 12:11:12.593487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.092 [2024-12-05 12:11:12.593495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.092 [2024-12-05 12:11:12.593502] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.092 [2024-12-05 12:11:12.593510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.092 [2024-12-05 12:11:12.593516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.092 [2024-12-05 12:11:12.593524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.092 [2024-12-05 12:11:12.593531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.092 [2024-12-05 12:11:12.593539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.092 [2024-12-05 12:11:12.593545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.092 [2024-12-05 12:11:12.593553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.092 [2024-12-05 12:11:12.593560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.092 [2024-12-05 12:11:12.593568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.092 [2024-12-05 12:11:12.593574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.092 [2024-12-05 12:11:12.593582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 
lba:100176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.092 [2024-12-05 12:11:12.593589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.092 [2024-12-05 12:11:12.593597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.092 [2024-12-05 12:11:12.593603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.092 [2024-12-05 12:11:12.593611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.092 [2024-12-05 12:11:12.593617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.092 [2024-12-05 12:11:12.593625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.092 [2024-12-05 12:11:12.593631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.092 [2024-12-05 12:11:12.593641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.092 [2024-12-05 12:11:12.593648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.092 [2024-12-05 12:11:12.593656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:100216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.092 [2024-12-05 12:11:12.593663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.092 
[2024-12-05 12:11:12.593671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:100224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.092 [2024-12-05 12:11:12.593678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.092 [2024-12-05 12:11:12.593686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:100232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.092 [2024-12-05 12:11:12.593692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.092 [2024-12-05 12:11:12.593700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.092 [2024-12-05 12:11:12.593706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.092 [2024-12-05 12:11:12.593714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.092 [2024-12-05 12:11:12.593721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.092 [2024-12-05 12:11:12.593729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.092 [2024-12-05 12:11:12.593736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.092 [2024-12-05 12:11:12.593744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:100264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.092 [2024-12-05 12:11:12.593750] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.092 [2024-12-05 12:11:12.593758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.092 [2024-12-05 12:11:12.593764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.092 [2024-12-05 12:11:12.593772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:100280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.092 [2024-12-05 12:11:12.593780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.092 [2024-12-05 12:11:12.593788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.092 [2024-12-05 12:11:12.593794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.092 [2024-12-05 12:11:12.593802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:100296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.092 [2024-12-05 12:11:12.593808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.092 [2024-12-05 12:11:12.593816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.092 [2024-12-05 12:11:12.593825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.092 [2024-12-05 12:11:12.593833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 
lba:100312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.092 [2024-12-05 12:11:12.593839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.092 [2024-12-05 12:11:12.593847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:100320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.092 [2024-12-05 12:11:12.593854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.092 [2024-12-05 12:11:12.593861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:100328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.092 [2024-12-05 12:11:12.593868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.092 [2024-12-05 12:11:12.593876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:100336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.092 [2024-12-05 12:11:12.593883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.092 [2024-12-05 12:11:12.593891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:100344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.092 [2024-12-05 12:11:12.593898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.092 [2024-12-05 12:11:12.593905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:100352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.092 [2024-12-05 12:11:12.593912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.092 
[2024-12-05 12:11:12.593920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:100360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.092 [2024-12-05 12:11:12.593926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.092 [2024-12-05 12:11:12.593935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:100368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.092 [2024-12-05 12:11:12.593941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.092 [2024-12-05 12:11:12.593949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:100376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.092 [2024-12-05 12:11:12.593955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.092 [2024-12-05 12:11:12.593963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:100384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.092 [2024-12-05 12:11:12.593969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.092 [2024-12-05 12:11:12.593977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:100392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.092 [2024-12-05 12:11:12.593984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.092 [2024-12-05 12:11:12.593992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:100400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.092 [2024-12-05 12:11:12.593999] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.092 [2024-12-05 12:11:12.594007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:100408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.093 [2024-12-05 12:11:12.594019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.093 [2024-12-05 12:11:12.594028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:100416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.093 [2024-12-05 12:11:12.594035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.093 [2024-12-05 12:11:12.594043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:100424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.093 [2024-12-05 12:11:12.594049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.093 [2024-12-05 12:11:12.594058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:100432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.093 [2024-12-05 12:11:12.594064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.093 [2024-12-05 12:11:12.594072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:100440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.093 [2024-12-05 12:11:12.594079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.093 [2024-12-05 12:11:12.594087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 
lba:100448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.093 [2024-12-05 12:11:12.594093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.093 [2024-12-05 12:11:12.594101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:100456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.093 [2024-12-05 12:11:12.594108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.093 [2024-12-05 12:11:12.594115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.093 [2024-12-05 12:11:12.594122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.093 [2024-12-05 12:11:12.594130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:100472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.093 [2024-12-05 12:11:12.594137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.093 [2024-12-05 12:11:12.594145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:100480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.093 [2024-12-05 12:11:12.594151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.093 [2024-12-05 12:11:12.594159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.093 [2024-12-05 12:11:12.594165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.093 
00:27:53.093 [2024-12-05 12:11:12.594173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:53.093 [2024-12-05 12:11:12.594180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.093 [... 24 further identical WRITE / "ABORTED - SQ DELETION" pairs for lba:100504 through lba:100688 (len:8 each) elided ...]
00:27:53.093 [2024-12-05 12:11:12.594562] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:53.093 [2024-12-05 12:11:12.594569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100696 len:8 PRP1 0x0 PRP2 0x0
00:27:53.093 [2024-12-05 12:11:12.594578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.093 [2024-12-05 12:11:12.594587] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:53.094 [... the same "Command completed manually" / WRITE / "ABORTED - SQ DELETION" / "aborting queued i/o" sequence repeats for lba:100704 through lba:100936 (30 queued requests) elided ...]
00:27:53.095 [2024-12-05 12:11:12.595338] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:27:53.095 [2024-12-05 12:11:12.595360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:53.095 [2024-12-05 12:11:12.595372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.095 [2024-12-05 12:11:12.595380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:53.095 [2024-12-05 12:11:12.595389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.095 [2024-12-05 12:11:12.595396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:53.095 [2024-12-05 12:11:12.595402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.095 [2024-12-05 12:11:12.595411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:53.095 [2024-12-05 12:11:12.595418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.095 [2024-12-05 12:11:12.595425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:27:53.095 [2024-12-05 12:11:12.595461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230d370 (9): Bad file descriptor
00:27:53.095 [2024-12-05 12:11:12.598215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:27:53.095 [2024-12-05 12:11:12.701906] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:27:53.095 10850.50 IOPS, 42.38 MiB/s [2024-12-05T11:11:27.291Z] 11060.67 IOPS, 43.21 MiB/s [2024-12-05T11:11:27.291Z] 11193.50 IOPS, 43.72 MiB/s [2024-12-05T11:11:27.291Z]
00:27:53.095 [2024-12-05 12:11:16.151401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:64656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.095 [2024-12-05 12:11:16.151435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.095 [... 26 further identical READ / "ABORTED - SQ DELETION" pairs for lba:64664 through lba:64864 (len:8 each) elided ...]
00:27:53.096 [2024-12-05 12:11:16.151846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:64872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.096 [2024-12-05 12:11:16.151853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.096 [2024-12-05 12:11:16.151861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:65200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.096 [2024-12-05 12:11:16.151867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.096 [2024-12-05 12:11:16.151875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.096 [2024-12-05 12:11:16.151882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.096 [2024-12-05 12:11:16.151890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:65216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.096 [2024-12-05 12:11:16.151896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.096 [2024-12-05 12:11:16.151904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.096 [2024-12-05 12:11:16.151910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.096 [2024-12-05 12:11:16.151918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:65232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.096 [2024-12-05 12:11:16.151925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.096 [2024-12-05 12:11:16.151935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.096 
[2024-12-05 12:11:16.151943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.096 [2024-12-05 12:11:16.151951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.096 [2024-12-05 12:11:16.151957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.096 [2024-12-05 12:11:16.151966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.096 [2024-12-05 12:11:16.151972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.096 [2024-12-05 12:11:16.151980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.096 [2024-12-05 12:11:16.151987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.096 [2024-12-05 12:11:16.151995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.096 [2024-12-05 12:11:16.152001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.096 [2024-12-05 12:11:16.152009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.096 [2024-12-05 12:11:16.152016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.096 [2024-12-05 12:11:16.152025] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.096 [2024-12-05 12:11:16.152031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.096 [2024-12-05 12:11:16.152040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.096 [2024-12-05 12:11:16.152046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.096 [2024-12-05 12:11:16.152054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.096 [2024-12-05 12:11:16.152061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.096 [2024-12-05 12:11:16.152068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.096 [2024-12-05 12:11:16.152075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.096 [2024-12-05 12:11:16.152083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.096 [2024-12-05 12:11:16.152089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.096 [2024-12-05 12:11:16.152097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.096 [2024-12-05 12:11:16.152104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:53.096 [2024-12-05 12:11:16.152111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.096 [2024-12-05 12:11:16.152118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.096 [2024-12-05 12:11:16.152127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.096 [2024-12-05 12:11:16.152134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.096 [2024-12-05 12:11:16.152141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.096 [2024-12-05 12:11:16.152148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.096 [2024-12-05 12:11:16.152155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.096 [2024-12-05 12:11:16.152162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.096 [2024-12-05 12:11:16.152171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.096 [2024-12-05 12:11:16.152178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.096 [2024-12-05 12:11:16.152185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.096 [2024-12-05 12:11:16.152192] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.096 [2024-12-05 12:11:16.152200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.096 [2024-12-05 12:11:16.152206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.096 [2024-12-05 12:11:16.152214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.096 [2024-12-05 12:11:16.152220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.096 [2024-12-05 12:11:16.152230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.096 [2024-12-05 12:11:16.152237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.096 [2024-12-05 12:11:16.152245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.096 [2024-12-05 12:11:16.152251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.096 [2024-12-05 12:11:16.152259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.096 [2024-12-05 12:11:16.152265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.096 [2024-12-05 12:11:16.152273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.096 [2024-12-05 12:11:16.152279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.096 [2024-12-05 12:11:16.152287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.096 [2024-12-05 12:11:16.152293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.096 [2024-12-05 12:11:16.152301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.097 [2024-12-05 12:11:16.152309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.097 [2024-12-05 12:11:16.152317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.097 [2024-12-05 12:11:16.152323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.097 [2024-12-05 12:11:16.152331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.097 [2024-12-05 12:11:16.152337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.097 [2024-12-05 12:11:16.152345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:64880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.097 [2024-12-05 12:11:16.152352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.097 
[2024-12-05 12:11:16.152360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:64888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.097 [2024-12-05 12:11:16.152371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.097 [2024-12-05 12:11:16.152380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.097 [2024-12-05 12:11:16.152387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.097 [2024-12-05 12:11:16.152394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:64904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.097 [2024-12-05 12:11:16.152401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.097 [2024-12-05 12:11:16.152411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:64912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.097 [2024-12-05 12:11:16.152417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.097 [2024-12-05 12:11:16.152425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:64920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.097 [2024-12-05 12:11:16.152432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.097 [2024-12-05 12:11:16.152440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:64928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.097 [2024-12-05 12:11:16.152446] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.097 [2024-12-05 12:11:16.152454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:64936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.097 [2024-12-05 12:11:16.152461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.097 [2024-12-05 12:11:16.152468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:64944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.097 [2024-12-05 12:11:16.152475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.097 [2024-12-05 12:11:16.152483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:64952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.097 [2024-12-05 12:11:16.152489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.097 [2024-12-05 12:11:16.152497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:64960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.097 [2024-12-05 12:11:16.152505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.097 [2024-12-05 12:11:16.152513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:64968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.097 [2024-12-05 12:11:16.152520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.097 [2024-12-05 12:11:16.152528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 
lba:64976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.097 [2024-12-05 12:11:16.152534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.097 [2024-12-05 12:11:16.152542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.097 [2024-12-05 12:11:16.152548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.097 [2024-12-05 12:11:16.152556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:64992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.097 [2024-12-05 12:11:16.152563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.097 [2024-12-05 12:11:16.152571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:65000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.097 [2024-12-05 12:11:16.152577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.097 [2024-12-05 12:11:16.152585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:65008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.097 [2024-12-05 12:11:16.152592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.097 [2024-12-05 12:11:16.152599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:65016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.097 [2024-12-05 12:11:16.152606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.097 
[2024-12-05 12:11:16.152613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:65024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.097 [2024-12-05 12:11:16.152620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.097 [2024-12-05 12:11:16.152628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:65032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.097 [2024-12-05 12:11:16.152634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.097 [2024-12-05 12:11:16.152643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:65040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.097 [2024-12-05 12:11:16.152649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.097 [2024-12-05 12:11:16.152657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:65048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.097 [2024-12-05 12:11:16.152663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.097 [2024-12-05 12:11:16.152671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:65056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.097 [2024-12-05 12:11:16.152678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.097 [2024-12-05 12:11:16.152687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:65064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.097 [2024-12-05 12:11:16.152694] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.097 [2024-12-05 12:11:16.152702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.097 [2024-12-05 12:11:16.152708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.097 [2024-12-05 12:11:16.152716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.097 [2024-12-05 12:11:16.152722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.097 [2024-12-05 12:11:16.152730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.097 [2024-12-05 12:11:16.152736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.097 [2024-12-05 12:11:16.152743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.097 [2024-12-05 12:11:16.152750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.097 [2024-12-05 12:11:16.152758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.097 [2024-12-05 12:11:16.152765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.097 [2024-12-05 12:11:16.152773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 
lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.097 [2024-12-05 12:11:16.152779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.097 [2024-12-05 12:11:16.152787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.097 [2024-12-05 12:11:16.152793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.097 [2024-12-05 12:11:16.152801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.097 [2024-12-05 12:11:16.152807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.097 [2024-12-05 12:11:16.152815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:65528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.097 [2024-12-05 12:11:16.152821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.097 [2024-12-05 12:11:16.152829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.097 [2024-12-05 12:11:16.152835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.097 [2024-12-05 12:11:16.152843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:65544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.097 [2024-12-05 12:11:16.152849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.097 [2024-12-05 
12:11:16.152857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:65552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.097 [2024-12-05 12:11:16.152865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.097 [2024-12-05 12:11:16.152873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:65560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.097 [2024-12-05 12:11:16.152880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.097 [2024-12-05 12:11:16.152888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:65568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.097 [2024-12-05 12:11:16.152894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.098 [2024-12-05 12:11:16.152902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:65576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.098 [2024-12-05 12:11:16.152909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.098 [2024-12-05 12:11:16.152916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.098 [2024-12-05 12:11:16.152923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.098 [2024-12-05 12:11:16.152931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:65592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.098 [2024-12-05 12:11:16.152937] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.098 [2024-12-05 12:11:16.152944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:65600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.098 [2024-12-05 12:11:16.152951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.098 [2024-12-05 12:11:16.152959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:65608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.098 [2024-12-05 12:11:16.152966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.098 [2024-12-05 12:11:16.152973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.098 [2024-12-05 12:11:16.152980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.098 [2024-12-05 12:11:16.152988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.098 [2024-12-05 12:11:16.152995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.098 [2024-12-05 12:11:16.153003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.098 [2024-12-05 12:11:16.153010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.098 [2024-12-05 12:11:16.153017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:65640 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:53.098 [2024-12-05 12:11:16.153024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.098 [2024-12-05 12:11:16.153031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.098 [2024-12-05 12:11:16.153039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.098 [2024-12-05 12:11:16.153048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:65656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.098 [2024-12-05 12:11:16.153055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.098 [2024-12-05 12:11:16.153063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:65664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.098 [2024-12-05 12:11:16.153070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.098 [2024-12-05 12:11:16.153078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:65672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.098 [2024-12-05 12:11:16.153084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.098 [2024-12-05 12:11:16.153092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:65072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.098 [2024-12-05 12:11:16.153098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.098 [2024-12-05 12:11:16.153106] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:65080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.098 [2024-12-05 12:11:16.153113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.098 [2024-12-05 12:11:16.153121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:65088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.098 [2024-12-05 12:11:16.153127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.098 [2024-12-05 12:11:16.153135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:65096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.098 [2024-12-05 12:11:16.153142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.098 [2024-12-05 12:11:16.153149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:65104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.098 [2024-12-05 12:11:16.153156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.098 [2024-12-05 12:11:16.153164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:65112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.098 [2024-12-05 12:11:16.153170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.098 [2024-12-05 12:11:16.153178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:65120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.098 [2024-12-05 12:11:16.153185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.098 [2024-12-05 12:11:16.153193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:65128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.098 [2024-12-05 12:11:16.153199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.098 [2024-12-05 12:11:16.153207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:65136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.098 [2024-12-05 12:11:16.153213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.098 [2024-12-05 12:11:16.153231] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:53.098 [2024-12-05 12:11:16.153239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65144 len:8 PRP1 0x0 PRP2 0x0 00:27:53.098 [2024-12-05 12:11:16.153247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.098 [2024-12-05 12:11:16.153256] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:53.098 [2024-12-05 12:11:16.153262] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:53.098 [2024-12-05 12:11:16.153268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65152 len:8 PRP1 0x0 PRP2 0x0 00:27:53.098 [2024-12-05 12:11:16.153274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.098 [2024-12-05 12:11:16.153281] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:53.098 [2024-12-05 12:11:16.153286] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:53.098 [2024-12-05 12:11:16.153291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65160 len:8 PRP1 0x0 PRP2 0x0 00:27:53.098 [2024-12-05 12:11:16.153297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.098 [2024-12-05 12:11:16.153304] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:53.098 [2024-12-05 12:11:16.153308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:53.098 [2024-12-05 12:11:16.153314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65168 len:8 PRP1 0x0 PRP2 0x0 00:27:53.098 [2024-12-05 12:11:16.153320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.098 [2024-12-05 12:11:16.153326] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:53.098 [2024-12-05 12:11:16.153331] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:53.098 [2024-12-05 12:11:16.153337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65176 len:8 PRP1 0x0 PRP2 0x0 00:27:53.098 [2024-12-05 12:11:16.153343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.098 [2024-12-05 12:11:16.153350] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:53.098 [2024-12-05 12:11:16.153354] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:53.098 [2024-12-05 12:11:16.153360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65184 len:8 PRP1 0x0 PRP2 0x0 00:27:53.098 
[2024-12-05 12:11:16.153370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.127 [2024-12-05 12:11:16.153377] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:53.127 [2024-12-05 12:11:16.153382] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:53.127 [2024-12-05 12:11:16.153387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65192 len:8 PRP1 0x0 PRP2 0x0 00:27:53.127 [2024-12-05 12:11:16.153393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.127 [2024-12-05 12:11:16.153436] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:27:53.127 [2024-12-05 12:11:16.153457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:53.127 [2024-12-05 12:11:16.153465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.127 [2024-12-05 12:11:16.153473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:53.127 [2024-12-05 12:11:16.153479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.127 [2024-12-05 12:11:16.153488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:53.127 [2024-12-05 12:11:16.153495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.127 [2024-12-05 12:11:16.153502] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:53.127 [2024-12-05 12:11:16.153509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.127 [2024-12-05 12:11:16.153515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:27:53.127 [2024-12-05 12:11:16.153546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230d370 (9): Bad file descriptor 00:27:53.127 [2024-12-05 12:11:16.156319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:27:53.127 [2024-12-05 12:11:16.178672] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:27:53.127 11157.40 IOPS, 43.58 MiB/s [2024-12-05T11:11:27.323Z] 11201.33 IOPS, 43.76 MiB/s [2024-12-05T11:11:27.323Z] 11241.43 IOPS, 43.91 MiB/s [2024-12-05T11:11:27.323Z] 11295.00 IOPS, 44.12 MiB/s [2024-12-05T11:11:27.323Z] 11331.67 IOPS, 44.26 MiB/s [2024-12-05T11:11:27.323Z] [2024-12-05 12:11:20.576265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:85408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.127 [2024-12-05 12:11:20.576297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.127 [2024-12-05 12:11:20.576312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:85416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.127 [2024-12-05 12:11:20.576320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.127 [2024-12-05 12:11:20.576329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:88 nsid:1 lba:85424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.127 [2024-12-05 12:11:20.576336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.127 [2024-12-05 12:11:20.576344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:85432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.127 [2024-12-05 12:11:20.576351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.127 [2024-12-05 12:11:20.576359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:85440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.127 [2024-12-05 12:11:20.576370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.127 [2024-12-05 12:11:20.576379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:85448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.127 [2024-12-05 12:11:20.576386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.127 [2024-12-05 12:11:20.576394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.127 [2024-12-05 12:11:20.576401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.127 [2024-12-05 12:11:20.576409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:85464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.127 [2024-12-05 12:11:20.576416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:53.127 [2024-12-05 12:11:20.576427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:85472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.127 [2024-12-05 12:11:20.576434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.127 [2024-12-05 12:11:20.576442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:85480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.127 [2024-12-05 12:11:20.576449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.127 [2024-12-05 12:11:20.576457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.127 [2024-12-05 12:11:20.576464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.127 [2024-12-05 12:11:20.576472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:85496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.127 [2024-12-05 12:11:20.576479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.127 [2024-12-05 12:11:20.576487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:85504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.127 [2024-12-05 12:11:20.576493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.127 [2024-12-05 12:11:20.576501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:85512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.127 [2024-12-05 12:11:20.576509] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.127 [2024-12-05 12:11:20.576517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:85520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.127 [2024-12-05 12:11:20.576523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.127 [2024-12-05 12:11:20.576531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:85528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.127 [2024-12-05 12:11:20.576538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.127 [2024-12-05 12:11:20.576545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:85536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.127 [2024-12-05 12:11:20.576553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.127 [2024-12-05 12:11:20.576561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:85544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.127 [2024-12-05 12:11:20.576567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.127 [2024-12-05 12:11:20.576575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:85552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.127 [2024-12-05 12:11:20.576581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.127 [2024-12-05 12:11:20.576589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 
lba:85560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.127 [2024-12-05 12:11:20.576595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.127 [2024-12-05 12:11:20.576603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:85568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.128 [2024-12-05 12:11:20.576615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.128 [2024-12-05 12:11:20.576623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:85576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.128 [2024-12-05 12:11:20.576629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.128 [2024-12-05 12:11:20.576637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:85584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.128 [2024-12-05 12:11:20.576644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.128 [2024-12-05 12:11:20.576652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:85592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.128 [2024-12-05 12:11:20.576658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.128 [2024-12-05 12:11:20.576666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:85600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.128 [2024-12-05 12:11:20.576673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.128 
[2024-12-05 12:11:20.576681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:85608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.128 [2024-12-05 12:11:20.576687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.128 [2024-12-05 12:11:20.576695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.128 [2024-12-05 12:11:20.576702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.128 [2024-12-05 12:11:20.576710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:85624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.128 [2024-12-05 12:11:20.576717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.128 [2024-12-05 12:11:20.576724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:85632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.128 [2024-12-05 12:11:20.576730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.128 [2024-12-05 12:11:20.576738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:85640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.128 [2024-12-05 12:11:20.576746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.128 [2024-12-05 12:11:20.576754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:85648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.128 [2024-12-05 12:11:20.576760] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.128 [2024-12-05 12:11:20.576768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:85656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.128 [2024-12-05 12:11:20.576775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.128 [2024-12-05 12:11:20.576782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:85664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.128 [2024-12-05 12:11:20.576789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.128 [2024-12-05 12:11:20.576797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:85672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.128 [2024-12-05 12:11:20.576805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.128 [2024-12-05 12:11:20.576813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.128 [2024-12-05 12:11:20.576820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.128 [2024-12-05 12:11:20.576828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:85688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.128 [2024-12-05 12:11:20.576834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.128 [2024-12-05 12:11:20.576842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 
lba:85696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.128 [2024-12-05 12:11:20.576848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.128 [2024-12-05 12:11:20.576856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:85704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.128 [2024-12-05 12:11:20.576863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.128 [2024-12-05 12:11:20.576871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:85712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.128 [2024-12-05 12:11:20.576878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.128 [2024-12-05 12:11:20.576886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:85720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.128 [2024-12-05 12:11:20.576892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.128 [2024-12-05 12:11:20.576900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.128 [2024-12-05 12:11:20.576907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.128 [2024-12-05 12:11:20.576915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.128 [2024-12-05 12:11:20.576921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.128 
[2024-12-05 12:11:20.576930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.128 [2024-12-05 12:11:20.576936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.128 [2024-12-05 12:11:20.576944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:85752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.128 [2024-12-05 12:11:20.576951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.128 [2024-12-05 12:11:20.576958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:85760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.128 [2024-12-05 12:11:20.576965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.128 [2024-12-05 12:11:20.576973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:85768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.128 [2024-12-05 12:11:20.576979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.128 [2024-12-05 12:11:20.576989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:85776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.128 [2024-12-05 12:11:20.576995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.128 [2024-12-05 12:11:20.577003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:85784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.128 [2024-12-05 12:11:20.577009] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.128 [2024-12-05 12:11:20.577018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:85800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.128 [2024-12-05 12:11:20.577024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.128 [2024-12-05 12:11:20.577032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:85808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.128 [2024-12-05 12:11:20.577039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.128 [2024-12-05 12:11:20.577047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:85816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.128 [2024-12-05 12:11:20.577053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.128 [2024-12-05 12:11:20.577061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:85824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.128 [2024-12-05 12:11:20.577068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.129 [2024-12-05 12:11:20.577075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:85832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.129 [2024-12-05 12:11:20.577082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.129 [2024-12-05 12:11:20.577090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 
lba:85840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.129 [2024-12-05 12:11:20.577096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.129 [2024-12-05 12:11:20.577104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:85848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.129 [2024-12-05 12:11:20.577110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.129 [2024-12-05 12:11:20.577118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.129 [2024-12-05 12:11:20.577125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.129 [2024-12-05 12:11:20.577133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:85864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.129 [2024-12-05 12:11:20.577139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.129 [2024-12-05 12:11:20.577147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:85872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.129 [2024-12-05 12:11:20.577153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.129 [2024-12-05 12:11:20.577161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:85880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.129 [2024-12-05 12:11:20.577169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.129 [2024-12-05 
12:11:20.577176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:85888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.129 [2024-12-05 12:11:20.577183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.129 [2024-12-05 12:11:20.577191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:85896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.129 [2024-12-05 12:11:20.577197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.129 [2024-12-05 12:11:20.577205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:85904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.129 [2024-12-05 12:11:20.577212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.129 [2024-12-05 12:11:20.577219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:85912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.129 [2024-12-05 12:11:20.577225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.129 [2024-12-05 12:11:20.577234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:85792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.129 [2024-12-05 12:11:20.577241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.129 [2024-12-05 12:11:20.577249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:85920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.129 [2024-12-05 12:11:20.577255] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.129 [... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion pairs: WRITE sqid:1 nsid:1 lba:85928 through lba:86296 (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:27:53.130 [... repeated nvme_qpair_abort_queued_reqs "aborting queued i/o" / nvme_qpair_manual_complete_request "Command completed manually" sequences: WRITE sqid:1 cid:0 nsid:1 lba:86304 through lba:86424 (len:8, PRP1 0x0 PRP2 0x0), each completed ABORTED - SQ DELETION (00/08) ...]
00:27:53.131 [2024-12-05 12:11:20.578383] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:27:53.131
[2024-12-05 12:11:20.578406] [... four ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 admin commands, each completed ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:27:53.131 [2024-12-05 12:11:20.578461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:27:53.131 [2024-12-05 12:11:20.578492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230d370 (9): Bad file descriptor
00:27:53.131 [2024-12-05 12:11:20.581251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:27:53.131 [2024-12-05 12:11:20.651200] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:27:53.131 11247.50 IOPS, 43.94 MiB/s [2024-12-05T11:11:27.327Z] 11261.82 IOPS, 43.99 MiB/s [2024-12-05T11:11:27.327Z] 11282.75 IOPS, 44.07 MiB/s [2024-12-05T11:11:27.327Z] 11296.85 IOPS, 44.13 MiB/s [2024-12-05T11:11:27.327Z] 11299.79 IOPS, 44.14 MiB/s
00:27:53.131 Latency(us)
00:27:53.131 [2024-12-05T11:11:27.327Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:53.131 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:53.131 Verification LBA range: start 0x0 length 0x4000
00:27:53.131 NVMe0n1 : 15.01 11310.50 44.18 660.67 0.00 10670.83 421.30 12732.71
00:27:53.131 [2024-12-05T11:11:27.327Z] ===================================================================================================================
00:27:53.131 [2024-12-05T11:11:27.327Z] Total : 11310.50 44.18 660.67 0.00 10670.83 421.30 12732.71
00:27:53.131 Received shutdown signal, test time was about 15.000000 seconds
00:27:53.131
00:27:53.131 Latency(us)
00:27:53.131 [2024-12-05T11:11:27.327Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:53.131 [2024-12-05T11:11:27.327Z] ===================================================================================================================
00:27:53.131 [2024-12-05T11:11:27.327Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:53.131 12:11:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:27:53.131 12:11:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:27:53.131 12:11:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:27:53.131 12:11:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=181455
00:27:53.131 12:11:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:27:53.131
12:11:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 181455 /var/tmp/bdevperf.sock
00:27:53.131 12:11:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 181455 ']'
00:27:53.131 12:11:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:27:53.131 12:11:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:53.131 12:11:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:27:53.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:27:53.131 12:11:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:53.131 12:11:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:27:53.131 12:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:53.131 12:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:27:53.131 12:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:27:53.131 [2024-12-05 12:11:27.223470] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:27:53.131 12:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:27:53.389 [2024-12-05 12:11:27.424025] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:27:53.389 12:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:27:53.646 NVMe0n1
00:27:53.646 12:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:27:53.903
00:27:53.903 12:11:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:27:54.161
00:27:54.161 12:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:27:54.161 12:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:27:54.419 12:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:27:54.678 12:11:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:27:57.969 12:11:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:27:57.969 12:11:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:27:57.969 12:11:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=182311
00:27:57.969 12:11:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:27:57.969 12:11:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 182311
00:27:58.905 {
00:27:58.905   "results": [
00:27:58.905     {
00:27:58.905       "job": "NVMe0n1",
00:27:58.905       "core_mask": "0x1",
00:27:58.905       "workload": "verify",
00:27:58.905       "status": "finished",
00:27:58.905       "verify_range": {
00:27:58.905         "start": 0,
00:27:58.905         "length": 16384
00:27:58.905       },
00:27:58.905       "queue_depth": 128,
00:27:58.905       "io_size": 4096,
00:27:58.905       "runtime": 1.013863,
00:27:58.905       "iops": 11516.34885581188,
00:27:58.905       "mibps": 44.98573771801516,
00:27:58.905       "io_failed": 0,
00:27:58.905       "io_timeout": 0,
00:27:58.905       "avg_latency_us": 11074.898693942805,
00:27:58.905       "min_latency_us": 2340.5714285714284,
00:27:58.905       "max_latency_us": 10111.26857142857
00:27:58.905     }
00:27:58.905   ],
00:27:58.905   "core_count": 1
00:27:58.905 }
00:27:58.905 12:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:27:58.905 [2024-12-05 12:11:26.836693] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization...
00:27:58.905 [2024-12-05 12:11:26.836746] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid181455 ]
00:27:58.905 [2024-12-05 12:11:26.915030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:58.905 [2024-12-05 12:11:26.952515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:27:58.905 [2024-12-05 12:11:28.588877] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:27:58.905 [... four ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 admin commands, each completed ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:27:58.905 [2024-12-05 12:11:28.588985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state.
00:27:58.905 [2024-12-05 12:11:28.589010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
00:27:58.905 [2024-12-05 12:11:28.589023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x222c370 (9): Bad file descriptor
00:27:58.905 [2024-12-05 12:11:28.641531] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
00:27:58.905 Running I/O for 1 seconds...
00:27:58.905 11455.00 IOPS, 44.75 MiB/s
00:27:58.905 Latency(us)
00:27:58.905 [2024-12-05T11:11:33.101Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:58.905 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:58.905 Verification LBA range: start 0x0 length 0x4000
00:27:58.905 NVMe0n1 : 1.01 11516.35 44.99 0.00 0.00 11074.90 2340.57 10111.27
00:27:58.905 [2024-12-05T11:11:33.101Z] ===================================================================================================================
00:27:58.905 [2024-12-05T11:11:33.101Z] Total : 11516.35 44.99 0.00 0.00 11074.90 2340.57 10111.27
00:27:58.905 12:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
12:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:27:59.164 12:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:27:59.164 12:11:33
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:59.164 12:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:27:59.422 12:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:59.681 12:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:28:02.997 12:11:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:02.997 12:11:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:28:02.997 12:11:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 181455 00:28:02.997 12:11:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 181455 ']' 00:28:02.997 12:11:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 181455 00:28:02.997 12:11:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:28:02.997 12:11:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:02.997 12:11:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 181455 00:28:02.997 12:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:02.997 12:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:02.997 12:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 181455' 00:28:02.997 killing process 
with pid 181455 00:28:02.997 12:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 181455 00:28:02.997 12:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 181455 00:28:02.997 12:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:28:02.997 12:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:03.255 12:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:28:03.255 12:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:03.255 12:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:28:03.255 12:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # nvmfcleanup 00:28:03.255 12:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@99 -- # sync 00:28:03.255 12:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:28:03.255 12:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # set +e 00:28:03.255 12:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # for i in {1..20} 00:28:03.255 12:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:28:03.255 rmmod nvme_tcp 00:28:03.255 rmmod nvme_fabrics 00:28:03.255 rmmod nvme_keyring 00:28:03.255 12:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:28:03.255 12:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # set -e 00:28:03.255 12:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # return 0 00:28:03.255 12:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # '[' -n 178378 ']' 00:28:03.255 12:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@337 -- # killprocess 178378 00:28:03.255 12:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 178378 ']' 00:28:03.255 12:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 178378 00:28:03.255 12:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:28:03.255 12:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:03.255 12:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 178378 00:28:03.514 12:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:03.514 12:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:03.514 12:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 178378' 00:28:03.514 killing process with pid 178378 00:28:03.515 12:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 178378 00:28:03.515 12:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 178378 00:28:03.515 12:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:28:03.515 12:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # nvmf_fini 00:28:03.515 12:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@264 -- # local dev 00:28:03.515 12:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@267 -- # remove_target_ns 00:28:03.515 12:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:03.515 12:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:03.515 12:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:06.054 12:11:39 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@268 -- # delete_main_bridge 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@130 -- # return 0 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:28:06.054 12:11:39 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@41 -- # _dev=0 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@41 -- # dev_map=() 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@284 -- # iptr 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@542 -- # iptables-save 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@542 -- # iptables-restore 00:28:06.054 00:28:06.054 real 0m37.803s 00:28:06.054 user 1m59.224s 00:28:06.054 sys 0m7.997s 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:06.054 ************************************ 00:28:06.054 END TEST nvmf_failover 00:28:06.054 ************************************ 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.054 ************************************ 00:28:06.054 START TEST nvmf_host_multipath_status 00:28:06.054 ************************************ 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:28:06.054 * Looking for test storage... 
00:28:06.054 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:28:06.054 12:11:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:28:06.054 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:06.054 12:11:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:06.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.054 --rc genhtml_branch_coverage=1 00:28:06.054 --rc genhtml_function_coverage=1 00:28:06.054 --rc genhtml_legend=1 00:28:06.054 --rc geninfo_all_blocks=1 00:28:06.054 --rc geninfo_unexecuted_blocks=1 00:28:06.055 00:28:06.055 ' 00:28:06.055 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:06.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.055 --rc genhtml_branch_coverage=1 00:28:06.055 --rc genhtml_function_coverage=1 00:28:06.055 --rc genhtml_legend=1 00:28:06.055 --rc geninfo_all_blocks=1 00:28:06.055 --rc geninfo_unexecuted_blocks=1 00:28:06.055 00:28:06.055 ' 00:28:06.055 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:06.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.055 --rc genhtml_branch_coverage=1 00:28:06.055 --rc genhtml_function_coverage=1 00:28:06.055 --rc genhtml_legend=1 00:28:06.055 --rc geninfo_all_blocks=1 00:28:06.055 --rc geninfo_unexecuted_blocks=1 00:28:06.055 00:28:06.055 ' 00:28:06.055 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:06.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.055 --rc genhtml_branch_coverage=1 00:28:06.055 --rc genhtml_function_coverage=1 00:28:06.055 --rc genhtml_legend=1 00:28:06.055 --rc geninfo_all_blocks=1 00:28:06.055 --rc geninfo_unexecuted_blocks=1 00:28:06.055 00:28:06.055 ' 00:28:06.055 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:06.055 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:28:06.055 
12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:06.055 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:06.055 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:06.055 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:06.055 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:06.055 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:28:06.055 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:06.055 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:28:06.055 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:06.055 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:06.055 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:06.055 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:28:06.055 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:28:06.055 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:06.055 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:06.055 12:11:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
scripts/common.sh@15 -- # shopt -s extglob 00:28:06.055 12:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:06.055 12:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:06.055 12:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:06.055 12:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.055 12:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.055 12:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.055 12:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:28:06.055 12:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.055 12:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:28:06.055 12:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:28:06.055 12:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:06.055 12:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:28:06.055 12:11:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@50 -- # : 0 00:28:06.055 12:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:28:06.055 12:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:28:06.055 12:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:28:06.055 12:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:06.055 12:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:06.055 12:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:28:06.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:28:06.055 12:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:28:06.055 12:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:28:06.055 12:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # have_pci_nics=0 00:28:06.055 12:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:06.055 12:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:06.055 12:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:06.055 12:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:28:06.055 12:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
00:28:06.055 12:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:28:06.055 12:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:28:06.055 12:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:28:06.056 12:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:06.056 12:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # prepare_net_devs 00:28:06.056 12:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # local -g is_hw=no 00:28:06.056 12:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # remove_target_ns 00:28:06.056 12:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:06.056 12:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:06.056 12:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:06.056 12:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:28:06.056 12:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:28:06.056 12:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # xtrace_disable 00:28:06.056 12:11:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@131 -- # pci_devs=() 00:28:12.623 12:11:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@131 -- # local -a pci_devs 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@132 -- # pci_net_devs=() 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@133 -- # pci_drivers=() 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@133 -- # local -A pci_drivers 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@135 -- # net_devs=() 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@135 -- # local -ga net_devs 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@136 -- # e810=() 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@136 -- # local -ga e810 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@137 -- # x722=() 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@137 -- # local -ga x722 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@138 -- # mlx=() 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@138 -- # local -ga mlx 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:12.623 12:11:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # 
echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:12.623 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:12.623 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # [[ up == up ]] 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:12.623 Found net devices under 0000:86:00.0: cvl_0_0 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # [[ up == up ]] 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:12.623 Found net devices under 0000:86:00.1: cvl_0_1 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # is_hw=yes 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@257 -- # create_target_ns 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:12.623 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # local dev=lo 
in_ns=NVMF_TARGET_NS_CMD 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@27 -- # local -gA dev_map 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@28 -- # local -g _dev 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@44 -- # ips=() 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:28:12.624 12:11:45 
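The `create_target_ns` / `set_up lo` sequence traced above reduces to a few ip(8) commands. A minimal sketch, assuming root privileges and the same `nvmf_ns_spdk` namespace name the test uses (not runnable without root, so treat it as illustration only):

```shell
# Sketch of create_target_ns (nvmf/setup.sh@142-148) as traced above.
# Create the network namespace that will host the NVMe-oF target side:
ip netns add nvmf_ns_spdk
# Target-side commands are prefixed so they run inside the namespace:
NVMF_TARGET_NS_CMD=(ip netns exec nvmf_ns_spdk)
# Bring up loopback inside the namespace (the set_up lo step):
"${NVMF_TARGET_NS_CMD[@]}" ip link set lo up
```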
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:12.624 12:11:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@11 -- # local val=167772161 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:28:12.624 10.0.0.1 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@11 -- # local val=167772162 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 
2 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:28:12.624 10.0.0.2 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:12.624 
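The `set_ip` calls above first turn an integer drawn from `ip_pool` into dotted-quad form (`val_to_ip`, setup.sh@11-13, whose `printf '%u.%u.%u.%u'` is visible in the trace). A sketch of that conversion; the shift-and-mask byte extraction is an assumption, since the trace only shows the already-split operands:

```shell
# Hypothetical reconstruction of val_to_ip: split a 32-bit integer into
# four bytes and print them dotted, as the trace does for 167772161.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}
val_to_ip 167772161   # 0x0a000001 -> 10.0.0.1
val_to_ip 167772162   # 0x0a000002 -> 10.0.0.2
```

The resulting address is then applied with `ip addr add 10.0.0.x/24 dev ...` and mirrored into `/sys/class/net/<dev>/ifalias`, which later steps read back with `cat`.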
12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@38 -- # ping_ips 1 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
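The `ipts` wrapper traced above (nvmf/common.sh@541) opens the NVMe/TCP port on the initiator-side interface and tags the rule with a comment so the suite can find and remove its own rules at teardown. Roughly, and again root-only:

```shell
# Allow NVMe/TCP traffic (port 4420) in on the initiator interface.
# The SPDK_NVMF comment encodes the original arguments for later cleanup.
iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT'
```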
nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@107 -- # local dev=initiator0 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:28:12.624 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:28:12.624 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:12.624 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.475 ms 00:28:12.624 00:28:12.624 --- 10.0.0.1 ping statistics --- 00:28:12.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.624 rtt min/avg/max/mdev = 0.475/0.475/0.475/0.000 ms 00:28:12.625 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:28:12.625 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:28:12.625 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:12.625 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:12.625 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:12.625 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:12.625 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # get_net_dev target0 00:28:12.625 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@107 -- # local dev=target0 00:28:12.625 12:11:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:28:12.625 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:28:12.625 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:28:12.625 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:28:12.625 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:28:12.625 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:28:12.625 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:28:12.625 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:28:12.625 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:28:12.625 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:28:12.625 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:28:12.625 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:28:12.625 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:28:12.625 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:28:12.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:12.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:28:12.625 00:28:12.625 --- 10.0.0.2 ping statistics --- 00:28:12.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.625 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:28:12.625 12:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # (( pair++ )) 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # return 0 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
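`ping_ips` validates each interface pair in both directions, as the two ping transcripts above show: the initiator address is pinged from inside the target namespace, then the target address from the host side. Condensed:

```shell
# Target namespace -> initiator (setup.sh@92, first transcript above):
ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1
# Host (initiator side) -> target in the namespace (second transcript):
ping -c 1 10.0.0.2
```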
nvmf/setup.sh@107 -- # local dev=initiator0 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/setup.sh@107 -- # local dev=initiator1 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # return 1 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # dev= 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@169 -- # return 0 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # get_net_dev target0 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@107 -- # local dev=target0 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/setup.sh@110 -- # echo cvl_0_1 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # get_net_dev target1 00:28:12.625 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@107 -- # local dev=target1 00:28:12.626 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:28:12.626 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:28:12.626 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # return 1 00:28:12.626 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # dev= 00:28:12.626 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@169 -- # return 0 00:28:12.626 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:28:12.626 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:12.626 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:28:12.626 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:28:12.626 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:12.626 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:28:12.626 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:28:12.626 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:28:12.626 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:28:12.626 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:12.626 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:12.626 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # nvmfpid=186784 00:28:12.626 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # waitforlisten 186784 00:28:12.626 
12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:28:12.626 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 186784 ']' 00:28:12.626 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:12.626 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:12.626 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:12.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:12.626 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:12.626 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:12.626 [2024-12-05 12:11:46.160231] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:28:12.626 [2024-12-05 12:11:46.160281] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:12.626 [2024-12-05 12:11:46.239658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:12.626 [2024-12-05 12:11:46.280431] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:12.626 [2024-12-05 12:11:46.280466] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:12.626 [2024-12-05 12:11:46.280473] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:12.626 [2024-12-05 12:11:46.280479] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:12.626 [2024-12-05 12:11:46.280484] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:12.626 [2024-12-05 12:11:46.281677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:12.626 [2024-12-05 12:11:46.281680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:12.626 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:12.626 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:28:12.626 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:28:12.626 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:12.626 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:12.626 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:12.626 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=186784 00:28:12.626 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:12.626 [2024-12-05 12:11:46.579610] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:12.626 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:28:12.888 Malloc0 00:28:12.888 12:11:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:28:12.888 12:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:13.186 12:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:13.477 [2024-12-05 12:11:47.431149] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:13.477 12:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:13.477 [2024-12-05 12:11:47.619597] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:13.477 12:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:28:13.477 12:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=187040 00:28:13.477 12:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:13.477 12:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 187040 /var/tmp/bdevperf.sock 00:28:13.477 12:11:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 187040 ']' 00:28:13.477 12:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:13.477 12:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:13.477 12:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:13.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:13.477 12:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:13.477 12:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:13.774 12:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:13.774 12:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:28:13.774 12:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:28:14.032 12:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:28:14.599 Nvme0n1 00:28:14.599 12:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:28:14.858 Nvme0n1 00:28:14.858 12:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:28:14.858 12:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:28:16.762 12:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:28:16.762 12:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:28:17.021 12:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:17.280 12:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:28:18.217 12:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:28:18.217 12:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:18.217 12:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:18.217 12:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:18.476 12:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:18.476 12:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:18.476 12:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:18.476 12:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:18.734 12:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:18.734 12:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:18.734 12:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:18.734 12:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:18.991 12:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:18.991 12:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:18.991 12:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:18.991 12:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:18.991 12:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:18.991 12:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:18.991 12:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:18.991 12:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:19.249 12:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:19.249 12:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:19.249 12:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:19.249 12:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:19.507 12:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:19.507 12:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:28:19.507 12:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:19.766 12:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:20.024 12:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:28:20.960 12:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:28:20.960 12:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:20.960 12:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:20.960 12:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:21.219 12:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:21.219 12:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:21.219 12:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:21.219 12:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:21.478 12:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:21.478 12:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:21.478 12:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:21.478 12:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:21.478 12:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:21.478 12:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:21.478 12:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:21.478 12:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:21.736 12:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:21.736 12:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:21.736 12:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:21.736 12:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:21.994 12:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:21.994 12:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:21.994 12:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:21.994 12:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:22.252 12:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:22.252 12:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:28:22.252 12:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:22.509 12:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:28:22.509 12:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:28:23.884 12:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:28:23.884 12:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:23.884 12:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:23.884 12:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:23.884 12:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:23.884 12:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:23.884 12:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:23.884 12:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:24.143 12:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:24.143 12:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:24.143 12:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:24.143 12:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:24.143 12:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:24.143 12:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:24.143 12:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:24.143 12:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:24.403 12:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:24.403 12:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:24.403 12:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:24.403 12:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:24.662 12:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:24.662 12:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:24.662 12:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:24.662 12:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:24.922 12:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:24.922 12:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:28:24.922 12:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:25.181 12:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:25.439 12:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:28:26.375 12:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:28:26.375 12:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:26.375 12:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:26.375 12:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:26.634 12:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:26.634 12:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:26.634 12:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:26.634 12:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:26.634 12:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:26.634 12:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:26.634 12:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:28:26.634 12:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:26.893 12:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:26.893 12:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:26.893 12:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:26.893 12:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:27.151 12:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:27.151 12:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:27.151 12:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:27.151 12:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:27.410 12:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:27.410 12:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:27.410 12:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:27.410 12:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:27.669 12:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:27.669 12:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:28:27.669 12:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:28:27.928 12:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:27.928 12:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:28:29.302 12:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:28:29.302 12:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:29.302 12:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:29.302 12:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:29.302 12:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:29.302 12:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:29.302 12:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:29.302 12:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:29.560 12:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:29.560 12:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:29.560 12:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:29.560 12:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:29.560 12:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:29.560 12:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:29.560 12:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:29.560 12:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:29.819 12:12:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:29.819 12:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:28:29.819 12:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:29.819 12:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:30.077 12:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:30.077 12:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:30.077 12:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:30.077 12:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:30.335 12:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:30.335 12:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:28:30.335 12:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:28:30.335 12:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:30.593 12:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:28:31.526 12:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:28:31.526 12:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:31.526 12:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:31.526 12:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:31.784 12:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:31.784 12:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:31.784 12:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:31.784 12:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:32.042 12:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:32.042 12:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:32.042 12:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:32.042 12:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:32.301 12:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:32.301 12:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:32.301 12:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:32.301 12:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:32.560 12:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:32.560 12:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:28:32.560 12:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:32.560 12:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:32.819 12:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:32.819 12:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:32.819 12:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:32.819 12:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:32.819 12:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:32.819 12:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:28:33.078 12:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:28:33.078 12:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:28:33.336 12:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:33.595 12:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:28:34.530 12:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:28:34.530 12:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:34.530 12:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:28:34.530 12:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:34.789 12:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:34.789 12:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:34.789 12:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:34.789 12:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:35.047 12:12:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:35.047 12:12:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:35.047 12:12:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:35.047 12:12:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:35.306 12:12:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:35.306 12:12:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:35.306 12:12:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:28:35.306 12:12:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:35.565 12:12:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:35.565 12:12:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:35.565 12:12:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:35.565 12:12:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:35.565 12:12:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:35.565 12:12:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:35.565 12:12:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:35.565 12:12:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:35.824 12:12:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:35.824 12:12:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:28:35.824 12:12:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:36.082 12:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:36.341 12:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:28:37.277 12:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:28:37.277 12:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:37.277 12:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:37.277 12:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:37.537 12:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:37.537 12:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:37.537 12:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:37.537 12:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:37.537 12:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:37.537 12:12:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:37.537 12:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:37.537 12:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:37.796 12:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:37.796 12:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:37.796 12:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:37.796 12:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:38.055 12:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:38.055 12:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:38.055 12:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:38.055 12:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:38.314 12:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:38.314 
12:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:38.314 12:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:38.314 12:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:38.573 12:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:38.573 12:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:28:38.573 12:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:38.832 12:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:28:38.832 12:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:28:40.228 12:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:28:40.228 12:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:40.228 12:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:40.228 12:12:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:40.228 12:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:40.228 12:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:40.228 12:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:40.228 12:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:40.487 12:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:40.487 12:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:40.487 12:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:40.487 12:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:40.487 12:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:40.487 12:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:40.487 12:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:40.487 12:12:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:40.746 12:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:40.746 12:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:40.746 12:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:40.746 12:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:41.005 12:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:41.005 12:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:41.005 12:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:41.005 12:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:41.263 12:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:41.263 12:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:28:41.263 12:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:41.521 12:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:41.521 12:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:28:42.910 12:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:28:42.910 12:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:42.910 12:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:42.910 12:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:42.910 12:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:42.910 12:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:42.910 12:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:42.910 12:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:43.169 12:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:43.169 
12:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:43.169 12:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:43.169 12:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:43.169 12:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:43.169 12:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:43.169 12:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:43.169 12:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:43.429 12:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:43.429 12:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:43.429 12:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:43.429 12:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:43.688 12:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
00:28:43.688 12:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:43.688 12:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:43.688 12:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:43.947 12:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:43.947 12:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 187040 00:28:43.947 12:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 187040 ']' 00:28:43.947 12:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 187040 00:28:43.947 12:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:28:43.947 12:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:43.947 12:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 187040 00:28:43.947 12:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:28:43.947 12:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:28:43.947 12:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 187040' 00:28:43.947 killing process with pid 187040 00:28:43.947 12:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 187040 
00:28:43.947 12:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 187040 00:28:43.947 { 00:28:43.947 "results": [ 00:28:43.947 { 00:28:43.947 "job": "Nvme0n1", 00:28:43.947 "core_mask": "0x4", 00:28:43.947 "workload": "verify", 00:28:43.947 "status": "terminated", 00:28:43.947 "verify_range": { 00:28:43.947 "start": 0, 00:28:43.947 "length": 16384 00:28:43.947 }, 00:28:43.947 "queue_depth": 128, 00:28:43.947 "io_size": 4096, 00:28:43.947 "runtime": 29.03755, 00:28:43.947 "iops": 10689.297134227922, 00:28:43.947 "mibps": 41.75506693057782, 00:28:43.947 "io_failed": 0, 00:28:43.947 "io_timeout": 0, 00:28:43.947 "avg_latency_us": 11954.266905885679, 00:28:43.947 "min_latency_us": 173.59238095238095, 00:28:43.947 "max_latency_us": 3019898.88 00:28:43.947 } 00:28:43.947 ], 00:28:43.947 "core_count": 1 00:28:43.947 } 00:28:44.211 12:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 187040 00:28:44.211 12:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:44.211 [2024-12-05 12:11:47.696866] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:28:44.211 [2024-12-05 12:11:47.696920] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid187040 ] 00:28:44.211 [2024-12-05 12:11:47.768943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.211 [2024-12-05 12:11:47.808945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:44.211 Running I/O for 90 seconds... 
00:28:44.211 11452.00 IOPS, 44.73 MiB/s [2024-12-05T11:12:18.407Z] 11488.00 IOPS, 44.88 MiB/s [2024-12-05T11:12:18.407Z] 11530.67 IOPS, 45.04 MiB/s [2024-12-05T11:12:18.407Z] 11532.50 IOPS, 45.05 MiB/s [2024-12-05T11:12:18.407Z] 11517.20 IOPS, 44.99 MiB/s [2024-12-05T11:12:18.407Z] 11524.67 IOPS, 45.02 MiB/s [2024-12-05T11:12:18.407Z] 11504.43 IOPS, 44.94 MiB/s [2024-12-05T11:12:18.407Z] 11511.88 IOPS, 44.97 MiB/s [2024-12-05T11:12:18.407Z] 11520.89 IOPS, 45.00 MiB/s [2024-12-05T11:12:18.407Z] 11520.60 IOPS, 45.00 MiB/s [2024-12-05T11:12:18.407Z] 11518.18 IOPS, 44.99 MiB/s [2024-12-05T11:12:18.407Z] 11501.58 IOPS, 44.93 MiB/s [2024-12-05T11:12:18.407Z] [2024-12-05 12:12:01.888406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:11416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.211 [2024-12-05 12:12:01.888445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:44.211 [2024-12-05 12:12:01.888489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.211 [2024-12-05 12:12:01.888501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:44.211 [2024-12-05 12:12:01.888518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.211 [2024-12-05 12:12:01.888529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:44.211 [2024-12-05 12:12:01.888545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.211 [2024-12-05 12:12:01.888555] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:44.211 [2024-12-05 12:12:01.888572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:11448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.211 [2024-12-05 12:12:01.888582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:44.211 [2024-12-05 12:12:01.888598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:11456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.211 [2024-12-05 12:12:01.888608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:44.211 [2024-12-05 12:12:01.888624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.211 [2024-12-05 12:12:01.888634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:44.211 [2024-12-05 12:12:01.888650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.211 [2024-12-05 12:12:01.888659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:44.211 [2024-12-05 12:12:01.888676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:11480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.211 [2024-12-05 12:12:01.888686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:44.211 [2024-12-05 12:12:01.888701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:91 nsid:1 lba:11488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.211 [2024-12-05 12:12:01.888716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:44.211 [2024-12-05 12:12:01.888731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.211 [2024-12-05 12:12:01.888740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:44.211 [2024-12-05 12:12:01.888756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.211 [2024-12-05 12:12:01.888767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:44.211 [2024-12-05 12:12:01.888783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.211 [2024-12-05 12:12:01.888793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:44.211 [2024-12-05 12:12:01.888810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:11520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.211 [2024-12-05 12:12:01.888821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:44.211 [2024-12-05 12:12:01.888838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.211 [2024-12-05 12:12:01.888848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:44.211 [2024-12-05 12:12:01.888865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.211 [2024-12-05 12:12:01.888875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
[... repeated nvme_qpair.c NOTICE pairs elided (timestamps 12:12:01.888892 through 12:12:01.892898): WRITE commands (lba 11544-12272) and READ commands (lba 11256-11408) on sqid:1, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0 ...]
11451.00 IOPS, 44.73 MiB/s [2024-12-05T11:12:18.410Z] 10633.07 IOPS, 41.54 MiB/s [2024-12-05T11:12:18.410Z] 9924.20 IOPS, 38.77 MiB/s [2024-12-05T11:12:18.410Z] 9336.44 IOPS, 36.47 MiB/s [2024-12-05T11:12:18.410Z] 9457.94 IOPS, 36.95 MiB/s [2024-12-05T11:12:18.410Z] 9560.83 IOPS, 37.35 MiB/s [2024-12-05T11:12:18.410Z] 9711.32 IOPS, 37.93 MiB/s [2024-12-05T11:12:18.410Z] 9915.80 IOPS, 38.73 MiB/s [2024-12-05T11:12:18.410Z] 10098.67 IOPS, 39.45 MiB/s [2024-12-05T11:12:18.410Z] 10182.86 IOPS, 39.78 MiB/s [2024-12-05T11:12:18.410Z] 10244.35 IOPS, 40.02 MiB/s [2024-12-05T11:12:18.410Z] 10295.29 IOPS, 40.22 MiB/s [2024-12-05T11:12:18.410Z] 10419.36 IOPS, 40.70 MiB/s [2024-12-05T11:12:18.410Z] 10539.38 IOPS, 41.17 MiB/s [2024-12-05T11:12:18.410Z] [2024-12-05 12:12:15.672971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:39808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.214 [2024-12-05 12:12:15.673010]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:44.214 [2024-12-05 12:12:15.673057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:39824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.214 [2024-12-05 12:12:15.673070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:44.214 [2024-12-05 12:12:15.673087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:39840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.214 [2024-12-05 12:12:15.673102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:44.214 [2024-12-05 12:12:15.673118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.214 [2024-12-05 12:12:15.673128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:44.214 [2024-12-05 12:12:15.673144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:39872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.214 [2024-12-05 12:12:15.673154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:44.214 [2024-12-05 12:12:15.673170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.215 [2024-12-05 12:12:15.673181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:44.215 [2024-12-05 12:12:15.673197] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.215 [2024-12-05 12:12:15.673207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:44.215 [2024-12-05 12:12:15.673223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.215 [2024-12-05 12:12:15.673232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:44.215 [2024-12-05 12:12:15.673250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:39936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.215 [2024-12-05 12:12:15.673260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:44.215 [2024-12-05 12:12:15.673277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:39952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.215 [2024-12-05 12:12:15.673287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:44.215 [2024-12-05 12:12:15.673303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.215 [2024-12-05 12:12:15.673313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:44.215 [2024-12-05 12:12:15.673328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:39984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.215 [2024-12-05 12:12:15.673337] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:44.215 [2024-12-05 12:12:15.673353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.215 [2024-12-05 12:12:15.673363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:44.215 [2024-12-05 12:12:15.673387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.215 [2024-12-05 12:12:15.673397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:44.215 [2024-12-05 12:12:15.673411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:40032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.215 [2024-12-05 12:12:15.673422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:44.215 [2024-12-05 12:12:15.673442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.215 [2024-12-05 12:12:15.673452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:44.215 [2024-12-05 12:12:15.673469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:40064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.215 [2024-12-05 12:12:15.673479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:44.215 [2024-12-05 12:12:15.673496] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:40080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.215 [2024-12-05 12:12:15.673506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:44.215 [2024-12-05 12:12:15.673522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:40096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.215 [2024-12-05 12:12:15.673534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:44.215 [2024-12-05 12:12:15.673551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:40112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.215 [2024-12-05 12:12:15.673562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:44.215 [2024-12-05 12:12:15.673579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:40128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.215 [2024-12-05 12:12:15.673590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:44.215 [2024-12-05 12:12:15.673607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.215 [2024-12-05 12:12:15.673618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:44.215 [2024-12-05 12:12:15.673635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:40160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.215 [2024-12-05 12:12:15.673645] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:44.215 [2024-12-05 12:12:15.673662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:40176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.215 [2024-12-05 12:12:15.673672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:44.215 [2024-12-05 12:12:15.673690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:40192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.215 [2024-12-05 12:12:15.673700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:44.215 [2024-12-05 12:12:15.673717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.215 [2024-12-05 12:12:15.673728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:44.215 [2024-12-05 12:12:15.673745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.215 [2024-12-05 12:12:15.673756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:44.215 [2024-12-05 12:12:15.673775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:40240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.215 [2024-12-05 12:12:15.673786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:44.215 [2024-12-05 12:12:15.673803] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:40256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.215 [2024-12-05 12:12:15.673815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:44.215 [2024-12-05 12:12:15.673832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.215 [2024-12-05 12:12:15.673843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:44.215 [2024-12-05 12:12:15.673859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:40288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.215 [2024-12-05 12:12:15.673871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.215 [2024-12-05 12:12:15.673887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:40304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.215 [2024-12-05 12:12:15.673898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:44.215 [2024-12-05 12:12:15.673916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:40320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.215 [2024-12-05 12:12:15.673927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:44.215 [2024-12-05 12:12:15.673944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:40336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.215 [2024-12-05 12:12:15.673955] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:44.215 [2024-12-05 12:12:15.673972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:40352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.215 [2024-12-05 12:12:15.673983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:44.215 [2024-12-05 12:12:15.674001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.215 [2024-12-05 12:12:15.674012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:44.215 [2024-12-05 12:12:15.674030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:40384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.215 [2024-12-05 12:12:15.674041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:44.215 [2024-12-05 12:12:15.674058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:40400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.215 [2024-12-05 12:12:15.674070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:44.215 [2024-12-05 12:12:15.674087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:40416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.216 [2024-12-05 12:12:15.674098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:44.216 [2024-12-05 12:12:15.674115] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:39648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.216 [2024-12-05 12:12:15.674128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:44.216 [2024-12-05 12:12:15.674146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:40440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.216 [2024-12-05 12:12:15.674157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:44.216 [2024-12-05 12:12:15.674175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:39664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.216 [2024-12-05 12:12:15.674186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:44.216 [2024-12-05 12:12:15.674203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:39696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.216 [2024-12-05 12:12:15.674216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:44.216 [2024-12-05 12:12:15.674234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:39728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.216 [2024-12-05 12:12:15.674245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:44.216 [2024-12-05 12:12:15.674262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:39760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.216 [2024-12-05 12:12:15.674273] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:44.216 [2024-12-05 12:12:15.674291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.216 [2024-12-05 12:12:15.674302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:44.216 [2024-12-05 12:12:15.674785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:40464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.216 [2024-12-05 12:12:15.674807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:44.216 [2024-12-05 12:12:15.674829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:40480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.216 [2024-12-05 12:12:15.674840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:44.216 [2024-12-05 12:12:15.674858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:40496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.216 [2024-12-05 12:12:15.674869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:44.216 [2024-12-05 12:12:15.674886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:40512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.216 [2024-12-05 12:12:15.674897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:44.216 [2024-12-05 12:12:15.674914] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:40528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.216 [2024-12-05 12:12:15.674925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:44.216 [2024-12-05 12:12:15.674944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.216 [2024-12-05 12:12:15.674959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:44.216 [2024-12-05 12:12:15.674976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:40560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.216 [2024-12-05 12:12:15.674987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:44.216 [2024-12-05 12:12:15.675004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:40576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.216 [2024-12-05 12:12:15.675014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:44.216 [2024-12-05 12:12:15.675031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:40592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.216 [2024-12-05 12:12:15.675042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:44.216 [2024-12-05 12:12:15.675058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:40608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.216 [2024-12-05 12:12:15.675069] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:44.216 [2024-12-05 12:12:15.675085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:40624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.216 [2024-12-05 12:12:15.675096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:44.216 [2024-12-05 12:12:15.675112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:40640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.216 [2024-12-05 12:12:15.675124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:44.216 [2024-12-05 12:12:15.675140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:40656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.216 [2024-12-05 12:12:15.675151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:44.216 [2024-12-05 12:12:15.675168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:39656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.216 [2024-12-05 12:12:15.675179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:44.216 [2024-12-05 12:12:15.675196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:39688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.216 [2024-12-05 12:12:15.675208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:44.216 [2024-12-05 12:12:15.675224] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:39720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.216 [2024-12-05 12:12:15.675235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:44.216 [2024-12-05 12:12:15.675253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.216 [2024-12-05 12:12:15.675263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.216 [2024-12-05 12:12:15.675280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:39784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.216 [2024-12-05 12:12:15.675291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:44.216 10628.07 IOPS, 41.52 MiB/s [2024-12-05T11:12:18.412Z] 10658.25 IOPS, 41.63 MiB/s [2024-12-05T11:12:18.412Z] 10689.38 IOPS, 41.76 MiB/s [2024-12-05T11:12:18.412Z] Received shutdown signal, test time was about 29.038200 seconds
00:28:44.216
00:28:44.216 Latency(us)
00:28:44.216 [2024-12-05T11:12:18.412Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:44.216 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:28:44.216 Verification LBA range: start 0x0 length 0x4000
00:28:44.216 Nvme0n1 : 29.04 10689.30 41.76 0.00 0.00 11954.27 173.59 3019898.88
00:28:44.216 [2024-12-05T11:12:18.412Z] ===================================================================================================================
00:28:44.216 [2024-12-05T11:12:18.412Z] Total : 10689.30 41.76 0.00 0.00 11954.27 173.59 3019898.88
00:28:44.216 12:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:44.216 12:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:28:44.216 12:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:44.216 12:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:28:44.216 12:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # nvmfcleanup 00:28:44.216 12:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@99 -- # sync 00:28:44.216 12:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:28:44.216 12:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # set +e 00:28:44.216 12:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # for i in {1..20} 00:28:44.216 12:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:28:44.216 rmmod nvme_tcp 00:28:44.216 rmmod nvme_fabrics 00:28:44.476 rmmod nvme_keyring 00:28:44.476 12:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:28:44.476 12:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # set -e 00:28:44.476 12:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # return 0 00:28:44.476 12:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # '[' -n 186784 ']' 00:28:44.476 12:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@337 -- # killprocess 186784 00:28:44.476 12:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 186784 ']' 00:28:44.476 12:12:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 186784 00:28:44.476 12:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:28:44.476 12:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:44.476 12:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 186784 00:28:44.476 12:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:44.476 12:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:44.476 12:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 186784' 00:28:44.476 killing process with pid 186784 00:28:44.476 12:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 186784 00:28:44.476 12:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 186784 00:28:44.476 12:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:28:44.476 12:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # nvmf_fini 00:28:44.476 12:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@264 -- # local dev 00:28:44.476 12:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@267 -- # remove_target_ns 00:28:44.476 12:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:44.476 12:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:44.476 12:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_target_ns 
00:28:47.012 12:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@268 -- # delete_main_bridge 00:28:47.012 12:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:28:47.012 12:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@130 -- # return 0 00:28:47.012 12:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:28:47.012 12:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:28:47.012 12:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:28:47.012 12:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:28:47.012 12:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:28:47.012 12:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:28:47.012 12:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:28:47.012 12:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:28:47.012 12:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:28:47.012 12:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:28:47.012 12:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:28:47.012 12:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:28:47.012 12:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:28:47.012 12:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@222 -- # [[ -n '' ]] 
00:28:47.012 12:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:28:47.012 12:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:28:47.012 12:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:28:47.012 12:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@41 -- # _dev=0 00:28:47.012 12:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@41 -- # dev_map=() 00:28:47.012 12:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@284 -- # iptr 00:28:47.012 12:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@542 -- # iptables-save 00:28:47.012 12:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:28:47.012 12:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@542 -- # iptables-restore 00:28:47.012 00:28:47.012 real 0m40.948s 00:28:47.012 user 1m50.734s 00:28:47.012 sys 0m11.773s 00:28:47.012 12:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:47.012 12:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:47.012 ************************************ 00:28:47.012 END TEST nvmf_host_multipath_status 00:28:47.012 ************************************ 00:28:47.012 12:12:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:47.012 12:12:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:47.012 12:12:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:47.012 12:12:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.012 
************************************ 00:28:47.012 START TEST nvmf_identify_kernel_target 00:28:47.012 ************************************ 00:28:47.012 12:12:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:47.012 * Looking for test storage... 00:28:47.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:47.012 12:12:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:47.012 12:12:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:28:47.012 12:12:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:47.013 12:12:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:47.013 12:12:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:47.013 12:12:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:47.013 12:12:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:47.013 12:12:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:28:47.013 12:12:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:28:47.013 12:12:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:28:47.013 12:12:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:28:47.013 12:12:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:28:47.013 12:12:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # 
ver1_l=2 00:28:47.013 12:12:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:28:47.013 12:12:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:47.013 12:12:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:28:47.013 12:12:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:28:47.013 12:12:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:47.013 12:12:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:47.013 12:12:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:28:47.013 12:12:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:28:47.013 12:12:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:47.013 12:12:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:47.013 12:12:21 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:47.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:47.013 --rc genhtml_branch_coverage=1 00:28:47.013 --rc genhtml_function_coverage=1 00:28:47.013 --rc genhtml_legend=1 00:28:47.013 --rc geninfo_all_blocks=1 00:28:47.013 --rc geninfo_unexecuted_blocks=1 00:28:47.013 00:28:47.013 ' 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:47.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:47.013 --rc genhtml_branch_coverage=1 00:28:47.013 --rc genhtml_function_coverage=1 00:28:47.013 --rc genhtml_legend=1 00:28:47.013 --rc geninfo_all_blocks=1 00:28:47.013 --rc geninfo_unexecuted_blocks=1 00:28:47.013 00:28:47.013 ' 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:47.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:47.013 --rc genhtml_branch_coverage=1 00:28:47.013 --rc genhtml_function_coverage=1 00:28:47.013 --rc genhtml_legend=1 00:28:47.013 --rc geninfo_all_blocks=1 00:28:47.013 --rc geninfo_unexecuted_blocks=1 00:28:47.013 00:28:47.013 ' 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:47.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:47.013 --rc genhtml_branch_coverage=1 00:28:47.013 --rc genhtml_function_coverage=1 00:28:47.013 --rc 
genhtml_legend=1 00:28:47.013 --rc geninfo_all_blocks=1 00:28:47.013 --rc geninfo_unexecuted_blocks=1 00:28:47.013 00:28:47.013 ' 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:28:47.013 12:12:21 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@50 -- # : 0 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:28:47.013 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:28:47.013 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:28:47.014 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:47.014 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:28:47.014 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:28:47.014 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # remove_target_ns 00:28:47.014 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:47.014 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:47.014 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:47.014 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:28:47.014 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:28:47.014 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # xtrace_disable 00:28:47.014 12:12:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@10 -- # set +x 00:28:53.583 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:53.583 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@131 -- # pci_devs=() 00:28:53.583 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@131 -- # local -a pci_devs 00:28:53.583 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@132 -- # pci_net_devs=() 00:28:53.583 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:28:53.583 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@133 -- # pci_drivers=() 00:28:53.583 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@133 -- # local -A pci_drivers 00:28:53.583 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@135 -- # net_devs=() 00:28:53.583 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@135 -- # local -ga net_devs 00:28:53.583 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@136 -- # e810=() 00:28:53.583 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@136 -- # local -ga e810 00:28:53.583 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@137 -- # x722=() 00:28:53.583 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@137 -- # local -ga x722 00:28:53.583 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@138 -- # mlx=() 00:28:53.583 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@138 -- # local -ga mlx 00:28:53.583 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:53.583 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:53.583 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:53.583 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:53.583 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:53.583 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:53.583 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:53.583 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:53.583 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:53.583 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:53.583 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:53.583 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:53.583 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:28:53.583 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:28:53.583 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:28:53.583 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:28:53.583 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- 
# pci_devs=("${e810[@]}") 00:28:53.583 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:28:53.583 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:53.583 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:53.583 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:53.583 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:28:53.583 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:53.583 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:53.583 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:53.583 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:53.583 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:53.584 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:53.584 12:12:26 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:53.584 Found net devices under 0000:86:00.0: cvl_0_0 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:53.584 Found net devices under 0000:86:00.1: cvl_0_1 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # is_hw=yes 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@257 -- # create_target_ns 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@145 -- # ip netns add 
nvmf_ns_spdk 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@28 -- # local -g _dev 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:28:53.584 12:12:26 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@44 -- # ips=() 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns 
nvmf_ns_spdk 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@11 -- # local val=167772161 00:28:53.584 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:28:53.585 10.0.0.1 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@11 -- # local val=167772162 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:28:53.585 10.0.0.2 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@76 -- # set_up cvl_0_1 
NVMF_TARGET_NS_CMD 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
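[Editor's note] The `ipts` expansion visible above (common.sh@541) tags every rule with an `-m comment` of `SPDK_NVMF:<original args>`, so teardown can later list and delete exactly the rules this run inserted. A dry-run sketch of that wrapper, printing the invocation instead of executing it (applying firewall rules needs root); the real helper runs iptables directly.

```shell
# Dry-run sketch of the ipts wrapper from nvmf/common.sh: append a
# comment carrying the original arguments so cleanup can grep for
# 'SPDK_NVMF:' later. Body is inferred from the expanded command above.
ipts() {
    echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

rule=$(ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT)
echo "$rule"
```

The comment module is standard iptables (`-m comment --comment`); the payload convention (`SPDK_NVMF:` prefix) is what makes the later bulk delete safe against unrelated rules.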
nvmf/setup.sh@38 -- # ping_ips 1 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@107 -- # local dev=initiator0 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/setup.sh@172 -- # ip=10.0.0.1 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:28:53.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:53.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.335 ms 00:28:53.585 00:28:53.585 --- 10.0.0.1 ping statistics --- 00:28:53.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:53.585 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:53.585 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:53.586 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:53.586 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:28:53.586 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@107 -- # local dev=target0 00:28:53.586 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:28:53.586 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:28:53.586 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:28:53.586 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:28:53.586 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:28:53.586 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # ip 
netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:28:53.586 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:28:53.586 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:28:53.586 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:28:53.586 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:28:53.586 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:28:53.586 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:28:53.586 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:28:53.586 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:28:53.586 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:53.586 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:28:53.586 00:28:53.586 --- 10.0.0.2 ping statistics --- 00:28:53.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:53.586 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:28:53.586 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # (( pair++ )) 00:28:53.586 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:28:53.586 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:53.586 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # return 0 00:28:53.586 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:28:53.586 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:28:53.586 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:28:53.586 12:12:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
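[Editor's note] Nearly every helper in this trace (get_ip_address, ping_ip, set_up, set_ip) takes an optional *variable name* such as NVMF_TARGET_NS_CMD and binds it with a bash nameref, so the same code runs a command either on the host or prefixed with `ip netns exec nvmf_ns_spdk`. A dry-run sketch of that dispatch; only the array contents come from the expanded commands in the log, the function body is an illustration.

```shell
# Dry-run sketch of the in_ns/nameref pattern (requires bash 4.3+).
# Array value taken from the eval'd commands in the trace.
NVMF_TARGET_NS_CMD=(ip netns exec nvmf_ns_spdk)

build_cmd() {
    local in_ns=${1:-}
    shift
    if [[ -n $in_ns ]]; then
        local -n ns=$in_ns        # nameref: resolve the caller's array by name
        echo "${ns[*]} $*"
    else
        echo "$*"
    fi
}

build_cmd "" cat /sys/class/net/cvl_0_0/ifalias
# cat /sys/class/net/cvl_0_0/ifalias
build_cmd NVMF_TARGET_NS_CMD cat /sys/class/net/cvl_0_1/ifalias
# ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias
```

This matches the two `eval` forms seen above: the host-side `' cat /sys/class/net/cvl_0_0/ifalias'` and the namespaced `'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias'`.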
nvmf/setup.sh@107 -- # local dev=initiator0 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/setup.sh@107 -- # local dev=initiator1 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # return 1 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # dev= 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@169 -- # return 0 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@107 -- # local dev=target0 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:28:53.586 12:12:27 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:28:53.586 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:28:53.587 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:28:53.587 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:28:53.587 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:53.587 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:53.587 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # get_net_dev target1 00:28:53.587 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@107 -- # local dev=target1 00:28:53.587 
12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:28:53.587 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:28:53.587 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # return 1 00:28:53.587 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # dev= 00:28:53.587 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@169 -- # return 0 00:28:53.587 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:28:53.587 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:53.587 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:28:53.587 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:28:53.587 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:53.587 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:28:53.587 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:28:53.587 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:28:53.587 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:28:53.587 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:53.587 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@434 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:53.587 12:12:27 
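[Editor's note] The trap installed at identify_kernel_nvmf.sh@13 above is `trap 'nvmftestfini || :; clean_kernel_target' EXIT`. The `|| :` matters under `set -e`: if nvmftestfini fails, the `:` no-op absorbs the failure so clean_kernel_target still runs and the trap's exit status stays clean. A small demonstration with stand-in bodies; only the two names and the trap shape come from the log.

```shell
# Demonstrate the '|| :' cleanup-trap pattern with fake functions.
nvmftestfini() { return 1; }                     # pretend teardown step 1 fails
clean_kernel_target() { echo "target cleaned"; } # step 2 must still run

# Run in a subshell so the EXIT trap fires immediately and we can
# capture what the trap printed.
out=$(set -e; trap 'nvmftestfini || :; clean_kernel_target' EXIT)
echo "$out"   # target cleaned
```

Without the `|| :`, errexit would abort the trap list at the failing first command and the kernel target would be left configured.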
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # nvmet=/sys/kernel/config/nvmet 00:28:53.587 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@437 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:53.587 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:53.587 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@439 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:53.587 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # local block nvme 00:28:53.587 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ ! -e /sys/module/nvmet ]] 00:28:53.587 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # modprobe nvmet 00:28:53.587 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:53.587 12:12:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@449 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:56.123 Waiting for block devices as requested 00:28:56.123 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:28:56.123 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:56.123 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:56.123 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:56.123 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:56.123 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:56.383 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:56.383 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:56.383 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:56.642 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:56.642 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:56.642 0000:80:04.5 (8086 2021): 
vfio-pci -> ioatdma 00:28:56.642 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:56.901 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:56.901 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:56.901 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:57.200 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:57.200 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:28:57.200 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:57.200 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # is_block_zoned nvme0n1 00:28:57.200 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:57.200 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:57.200 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:57.200 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # block_in_use nvme0n1 00:28:57.200 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:57.200 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:57.200 No valid GPT data, bailing 00:28:57.200 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:57.200 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:28:57.200 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:28:57.200 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # 
nvme=/dev/nvme0n1 00:28:57.200 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # [[ -b /dev/nvme0n1 ]] 00:28:57.200 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:57.201 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:57.201 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@462 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:57.201 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:57.201 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # echo 1 00:28:57.201 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@470 -- # echo /dev/nvme0n1 00:28:57.201 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@471 -- # echo 1 00:28:57.201 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@473 -- # echo 10.0.0.1 00:28:57.201 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # echo tcp 00:28:57.201 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@475 -- # echo 4420 00:28:57.201 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # echo ipv4 00:28:57.201 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@479 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:57.201 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:28:57.478 00:28:57.478 
Discovery Log Number of Records 2, Generation counter 2
00:28:57.478 =====Discovery Log Entry 0======
00:28:57.478 trtype: tcp
00:28:57.478 adrfam: ipv4
00:28:57.478 subtype: current discovery subsystem
00:28:57.478 treq: not specified, sq flow control disable supported
00:28:57.478 portid: 1
00:28:57.478 trsvcid: 4420
00:28:57.478 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:28:57.478 traddr: 10.0.0.1
00:28:57.478 eflags: none
00:28:57.478 sectype: none
00:28:57.478 =====Discovery Log Entry 1======
00:28:57.478 trtype: tcp
00:28:57.478 adrfam: ipv4
00:28:57.478 subtype: nvme subsystem
00:28:57.478 treq: not specified, sq flow control disable supported
00:28:57.478 portid: 1
00:28:57.478 trsvcid: 4420
00:28:57.478 subnqn: nqn.2016-06.io.spdk:testnqn
00:28:57.478 traddr: 10.0.0.1
00:28:57.478 eflags: none
00:28:57.478 sectype: none
00:28:57.478 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1
00:28:57.478 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'
00:28:57.478 =====================================================
00:28:57.478 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery
00:28:57.478 =====================================================
00:28:57.478 Controller Capabilities/Features
00:28:57.478 ================================
00:28:57.478 Vendor ID: 0000
00:28:57.478 Subsystem Vendor ID: 0000
00:28:57.478 Serial Number: 14ddaaae8768665f8982
00:28:57.478 Model Number: Linux
00:28:57.478 Firmware Version: 6.8.9-20
00:28:57.478 Recommended Arb Burst: 0
00:28:57.478 IEEE OUI Identifier: 00 00 00
00:28:57.478 Multi-path I/O
00:28:57.478 May have multiple subsystem ports: No
00:28:57.478 May have multiple controllers: No
00:28:57.478 Associated with SR-IOV VF: No
00:28:57.478 Max Data Transfer Size: Unlimited
00:28:57.478 Max Number of Namespaces: 0
00:28:57.478 Max Number of I/O Queues: 1024 00:28:57.478 NVMe Specification Version (VS): 1.3 00:28:57.478 NVMe Specification Version (Identify): 1.3 00:28:57.478 Maximum Queue Entries: 1024 00:28:57.478 Contiguous Queues Required: No 00:28:57.478 Arbitration Mechanisms Supported 00:28:57.478 Weighted Round Robin: Not Supported 00:28:57.478 Vendor Specific: Not Supported 00:28:57.478 Reset Timeout: 7500 ms 00:28:57.478 Doorbell Stride: 4 bytes 00:28:57.478 NVM Subsystem Reset: Not Supported 00:28:57.478 Command Sets Supported 00:28:57.478 NVM Command Set: Supported 00:28:57.478 Boot Partition: Not Supported 00:28:57.478 Memory Page Size Minimum: 4096 bytes 00:28:57.478 Memory Page Size Maximum: 4096 bytes 00:28:57.478 Persistent Memory Region: Not Supported 00:28:57.478 Optional Asynchronous Events Supported 00:28:57.478 Namespace Attribute Notices: Not Supported 00:28:57.478 Firmware Activation Notices: Not Supported 00:28:57.478 ANA Change Notices: Not Supported 00:28:57.478 PLE Aggregate Log Change Notices: Not Supported 00:28:57.478 LBA Status Info Alert Notices: Not Supported 00:28:57.478 EGE Aggregate Log Change Notices: Not Supported 00:28:57.478 Normal NVM Subsystem Shutdown event: Not Supported 00:28:57.478 Zone Descriptor Change Notices: Not Supported 00:28:57.478 Discovery Log Change Notices: Supported 00:28:57.478 Controller Attributes 00:28:57.478 128-bit Host Identifier: Not Supported 00:28:57.478 Non-Operational Permissive Mode: Not Supported 00:28:57.478 NVM Sets: Not Supported 00:28:57.478 Read Recovery Levels: Not Supported 00:28:57.478 Endurance Groups: Not Supported 00:28:57.478 Predictable Latency Mode: Not Supported 00:28:57.478 Traffic Based Keep ALive: Not Supported 00:28:57.478 Namespace Granularity: Not Supported 00:28:57.478 SQ Associations: Not Supported 00:28:57.478 UUID List: Not Supported 00:28:57.478 Multi-Domain Subsystem: Not Supported 00:28:57.478 Fixed Capacity Management: Not Supported 00:28:57.478 Variable Capacity Management: 
Not Supported 00:28:57.478 Delete Endurance Group: Not Supported 00:28:57.478 Delete NVM Set: Not Supported 00:28:57.478 Extended LBA Formats Supported: Not Supported 00:28:57.478 Flexible Data Placement Supported: Not Supported 00:28:57.478 00:28:57.478 Controller Memory Buffer Support 00:28:57.479 ================================ 00:28:57.479 Supported: No 00:28:57.479 00:28:57.479 Persistent Memory Region Support 00:28:57.479 ================================ 00:28:57.479 Supported: No 00:28:57.479 00:28:57.479 Admin Command Set Attributes 00:28:57.479 ============================ 00:28:57.479 Security Send/Receive: Not Supported 00:28:57.479 Format NVM: Not Supported 00:28:57.479 Firmware Activate/Download: Not Supported 00:28:57.479 Namespace Management: Not Supported 00:28:57.479 Device Self-Test: Not Supported 00:28:57.479 Directives: Not Supported 00:28:57.479 NVMe-MI: Not Supported 00:28:57.479 Virtualization Management: Not Supported 00:28:57.479 Doorbell Buffer Config: Not Supported 00:28:57.479 Get LBA Status Capability: Not Supported 00:28:57.479 Command & Feature Lockdown Capability: Not Supported 00:28:57.479 Abort Command Limit: 1 00:28:57.479 Async Event Request Limit: 1 00:28:57.479 Number of Firmware Slots: N/A 00:28:57.479 Firmware Slot 1 Read-Only: N/A 00:28:57.479 Firmware Activation Without Reset: N/A 00:28:57.479 Multiple Update Detection Support: N/A 00:28:57.479 Firmware Update Granularity: No Information Provided 00:28:57.479 Per-Namespace SMART Log: No 00:28:57.479 Asymmetric Namespace Access Log Page: Not Supported 00:28:57.479 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:57.479 Command Effects Log Page: Not Supported 00:28:57.479 Get Log Page Extended Data: Supported 00:28:57.479 Telemetry Log Pages: Not Supported 00:28:57.479 Persistent Event Log Pages: Not Supported 00:28:57.479 Supported Log Pages Log Page: May Support 00:28:57.479 Commands Supported & Effects Log Page: Not Supported 00:28:57.479 Feature Identifiers & 
Effects Log Page:May Support 00:28:57.479 NVMe-MI Commands & Effects Log Page: May Support 00:28:57.479 Data Area 4 for Telemetry Log: Not Supported 00:28:57.479 Error Log Page Entries Supported: 1 00:28:57.479 Keep Alive: Not Supported 00:28:57.479 00:28:57.479 NVM Command Set Attributes 00:28:57.479 ========================== 00:28:57.479 Submission Queue Entry Size 00:28:57.479 Max: 1 00:28:57.479 Min: 1 00:28:57.479 Completion Queue Entry Size 00:28:57.479 Max: 1 00:28:57.479 Min: 1 00:28:57.479 Number of Namespaces: 0 00:28:57.479 Compare Command: Not Supported 00:28:57.479 Write Uncorrectable Command: Not Supported 00:28:57.479 Dataset Management Command: Not Supported 00:28:57.479 Write Zeroes Command: Not Supported 00:28:57.479 Set Features Save Field: Not Supported 00:28:57.479 Reservations: Not Supported 00:28:57.479 Timestamp: Not Supported 00:28:57.479 Copy: Not Supported 00:28:57.479 Volatile Write Cache: Not Present 00:28:57.479 Atomic Write Unit (Normal): 1 00:28:57.479 Atomic Write Unit (PFail): 1 00:28:57.479 Atomic Compare & Write Unit: 1 00:28:57.479 Fused Compare & Write: Not Supported 00:28:57.479 Scatter-Gather List 00:28:57.479 SGL Command Set: Supported 00:28:57.479 SGL Keyed: Not Supported 00:28:57.479 SGL Bit Bucket Descriptor: Not Supported 00:28:57.479 SGL Metadata Pointer: Not Supported 00:28:57.479 Oversized SGL: Not Supported 00:28:57.479 SGL Metadata Address: Not Supported 00:28:57.479 SGL Offset: Supported 00:28:57.479 Transport SGL Data Block: Not Supported 00:28:57.479 Replay Protected Memory Block: Not Supported 00:28:57.479 00:28:57.479 Firmware Slot Information 00:28:57.479 ========================= 00:28:57.479 Active slot: 0 00:28:57.479 00:28:57.479 00:28:57.479 Error Log 00:28:57.479 ========= 00:28:57.479 00:28:57.479 Active Namespaces 00:28:57.479 ================= 00:28:57.479 Discovery Log Page 00:28:57.479 ================== 00:28:57.479 Generation Counter: 2 00:28:57.479 Number of Records: 2 00:28:57.479 Record 
Format: 0 00:28:57.479 00:28:57.479 Discovery Log Entry 0 00:28:57.479 ---------------------- 00:28:57.479 Transport Type: 3 (TCP) 00:28:57.479 Address Family: 1 (IPv4) 00:28:57.479 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:57.479 Entry Flags: 00:28:57.479 Duplicate Returned Information: 0 00:28:57.479 Explicit Persistent Connection Support for Discovery: 0 00:28:57.479 Transport Requirements: 00:28:57.479 Secure Channel: Not Specified 00:28:57.479 Port ID: 1 (0x0001) 00:28:57.479 Controller ID: 65535 (0xffff) 00:28:57.479 Admin Max SQ Size: 32 00:28:57.479 Transport Service Identifier: 4420 00:28:57.479 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:57.479 Transport Address: 10.0.0.1 00:28:57.479 Discovery Log Entry 1 00:28:57.479 ---------------------- 00:28:57.479 Transport Type: 3 (TCP) 00:28:57.479 Address Family: 1 (IPv4) 00:28:57.479 Subsystem Type: 2 (NVM Subsystem) 00:28:57.479 Entry Flags: 00:28:57.479 Duplicate Returned Information: 0 00:28:57.479 Explicit Persistent Connection Support for Discovery: 0 00:28:57.479 Transport Requirements: 00:28:57.479 Secure Channel: Not Specified 00:28:57.479 Port ID: 1 (0x0001) 00:28:57.479 Controller ID: 65535 (0xffff) 00:28:57.479 Admin Max SQ Size: 32 00:28:57.479 Transport Service Identifier: 4420 00:28:57.479 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:28:57.479 Transport Address: 10.0.0.1 00:28:57.479 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:57.479 get_feature(0x01) failed 00:28:57.479 get_feature(0x02) failed 00:28:57.479 get_feature(0x04) failed 00:28:57.479 ===================================================== 00:28:57.480 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:57.480 
===================================================== 00:28:57.480 Controller Capabilities/Features 00:28:57.480 ================================ 00:28:57.480 Vendor ID: 0000 00:28:57.480 Subsystem Vendor ID: 0000 00:28:57.480 Serial Number: 6e8ee0de2c1857e164f0 00:28:57.480 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:28:57.480 Firmware Version: 6.8.9-20 00:28:57.480 Recommended Arb Burst: 6 00:28:57.480 IEEE OUI Identifier: 00 00 00 00:28:57.480 Multi-path I/O 00:28:57.480 May have multiple subsystem ports: Yes 00:28:57.480 May have multiple controllers: Yes 00:28:57.480 Associated with SR-IOV VF: No 00:28:57.480 Max Data Transfer Size: Unlimited 00:28:57.480 Max Number of Namespaces: 1024 00:28:57.480 Max Number of I/O Queues: 128 00:28:57.480 NVMe Specification Version (VS): 1.3 00:28:57.480 NVMe Specification Version (Identify): 1.3 00:28:57.480 Maximum Queue Entries: 1024 00:28:57.480 Contiguous Queues Required: No 00:28:57.480 Arbitration Mechanisms Supported 00:28:57.480 Weighted Round Robin: Not Supported 00:28:57.480 Vendor Specific: Not Supported 00:28:57.480 Reset Timeout: 7500 ms 00:28:57.480 Doorbell Stride: 4 bytes 00:28:57.480 NVM Subsystem Reset: Not Supported 00:28:57.480 Command Sets Supported 00:28:57.480 NVM Command Set: Supported 00:28:57.480 Boot Partition: Not Supported 00:28:57.480 Memory Page Size Minimum: 4096 bytes 00:28:57.480 Memory Page Size Maximum: 4096 bytes 00:28:57.480 Persistent Memory Region: Not Supported 00:28:57.480 Optional Asynchronous Events Supported 00:28:57.480 Namespace Attribute Notices: Supported 00:28:57.480 Firmware Activation Notices: Not Supported 00:28:57.480 ANA Change Notices: Supported 00:28:57.480 PLE Aggregate Log Change Notices: Not Supported 00:28:57.480 LBA Status Info Alert Notices: Not Supported 00:28:57.480 EGE Aggregate Log Change Notices: Not Supported 00:28:57.480 Normal NVM Subsystem Shutdown event: Not Supported 00:28:57.480 Zone Descriptor Change Notices: Not Supported 00:28:57.480 
Discovery Log Change Notices: Not Supported 00:28:57.480 Controller Attributes 00:28:57.480 128-bit Host Identifier: Supported 00:28:57.480 Non-Operational Permissive Mode: Not Supported 00:28:57.480 NVM Sets: Not Supported 00:28:57.480 Read Recovery Levels: Not Supported 00:28:57.480 Endurance Groups: Not Supported 00:28:57.480 Predictable Latency Mode: Not Supported 00:28:57.480 Traffic Based Keep ALive: Supported 00:28:57.480 Namespace Granularity: Not Supported 00:28:57.480 SQ Associations: Not Supported 00:28:57.480 UUID List: Not Supported 00:28:57.480 Multi-Domain Subsystem: Not Supported 00:28:57.480 Fixed Capacity Management: Not Supported 00:28:57.480 Variable Capacity Management: Not Supported 00:28:57.480 Delete Endurance Group: Not Supported 00:28:57.480 Delete NVM Set: Not Supported 00:28:57.480 Extended LBA Formats Supported: Not Supported 00:28:57.480 Flexible Data Placement Supported: Not Supported 00:28:57.480 00:28:57.480 Controller Memory Buffer Support 00:28:57.480 ================================ 00:28:57.480 Supported: No 00:28:57.480 00:28:57.480 Persistent Memory Region Support 00:28:57.480 ================================ 00:28:57.480 Supported: No 00:28:57.480 00:28:57.480 Admin Command Set Attributes 00:28:57.480 ============================ 00:28:57.480 Security Send/Receive: Not Supported 00:28:57.480 Format NVM: Not Supported 00:28:57.480 Firmware Activate/Download: Not Supported 00:28:57.480 Namespace Management: Not Supported 00:28:57.480 Device Self-Test: Not Supported 00:28:57.480 Directives: Not Supported 00:28:57.480 NVMe-MI: Not Supported 00:28:57.480 Virtualization Management: Not Supported 00:28:57.480 Doorbell Buffer Config: Not Supported 00:28:57.480 Get LBA Status Capability: Not Supported 00:28:57.480 Command & Feature Lockdown Capability: Not Supported 00:28:57.480 Abort Command Limit: 4 00:28:57.480 Async Event Request Limit: 4 00:28:57.480 Number of Firmware Slots: N/A 00:28:57.480 Firmware Slot 1 Read-Only: N/A 
00:28:57.480 Firmware Activation Without Reset: N/A 00:28:57.480 Multiple Update Detection Support: N/A 00:28:57.480 Firmware Update Granularity: No Information Provided 00:28:57.480 Per-Namespace SMART Log: Yes 00:28:57.480 Asymmetric Namespace Access Log Page: Supported 00:28:57.480 ANA Transition Time : 10 sec 00:28:57.480 00:28:57.480 Asymmetric Namespace Access Capabilities 00:28:57.480 ANA Optimized State : Supported 00:28:57.480 ANA Non-Optimized State : Supported 00:28:57.480 ANA Inaccessible State : Supported 00:28:57.480 ANA Persistent Loss State : Supported 00:28:57.480 ANA Change State : Supported 00:28:57.480 ANAGRPID is not changed : No 00:28:57.480 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:28:57.480 00:28:57.480 ANA Group Identifier Maximum : 128 00:28:57.480 Number of ANA Group Identifiers : 128 00:28:57.480 Max Number of Allowed Namespaces : 1024 00:28:57.480 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:28:57.480 Command Effects Log Page: Supported 00:28:57.480 Get Log Page Extended Data: Supported 00:28:57.480 Telemetry Log Pages: Not Supported 00:28:57.480 Persistent Event Log Pages: Not Supported 00:28:57.480 Supported Log Pages Log Page: May Support 00:28:57.480 Commands Supported & Effects Log Page: Not Supported 00:28:57.480 Feature Identifiers & Effects Log Page:May Support 00:28:57.480 NVMe-MI Commands & Effects Log Page: May Support 00:28:57.480 Data Area 4 for Telemetry Log: Not Supported 00:28:57.480 Error Log Page Entries Supported: 128 00:28:57.480 Keep Alive: Supported 00:28:57.481 Keep Alive Granularity: 1000 ms 00:28:57.481 00:28:57.481 NVM Command Set Attributes 00:28:57.481 ========================== 00:28:57.481 Submission Queue Entry Size 00:28:57.481 Max: 64 00:28:57.481 Min: 64 00:28:57.481 Completion Queue Entry Size 00:28:57.481 Max: 16 00:28:57.481 Min: 16 00:28:57.481 Number of Namespaces: 1024 00:28:57.481 Compare Command: Not Supported 00:28:57.481 Write Uncorrectable Command: Not Supported 00:28:57.481 
Dataset Management Command: Supported 00:28:57.481 Write Zeroes Command: Supported 00:28:57.481 Set Features Save Field: Not Supported 00:28:57.481 Reservations: Not Supported 00:28:57.481 Timestamp: Not Supported 00:28:57.481 Copy: Not Supported 00:28:57.481 Volatile Write Cache: Present 00:28:57.481 Atomic Write Unit (Normal): 1 00:28:57.481 Atomic Write Unit (PFail): 1 00:28:57.481 Atomic Compare & Write Unit: 1 00:28:57.481 Fused Compare & Write: Not Supported 00:28:57.481 Scatter-Gather List 00:28:57.481 SGL Command Set: Supported 00:28:57.481 SGL Keyed: Not Supported 00:28:57.481 SGL Bit Bucket Descriptor: Not Supported 00:28:57.481 SGL Metadata Pointer: Not Supported 00:28:57.481 Oversized SGL: Not Supported 00:28:57.481 SGL Metadata Address: Not Supported 00:28:57.481 SGL Offset: Supported 00:28:57.481 Transport SGL Data Block: Not Supported 00:28:57.481 Replay Protected Memory Block: Not Supported 00:28:57.481 00:28:57.481 Firmware Slot Information 00:28:57.481 ========================= 00:28:57.481 Active slot: 0 00:28:57.481 00:28:57.481 Asymmetric Namespace Access 00:28:57.481 =========================== 00:28:57.481 Change Count : 0 00:28:57.481 Number of ANA Group Descriptors : 1 00:28:57.481 ANA Group Descriptor : 0 00:28:57.481 ANA Group ID : 1 00:28:57.481 Number of NSID Values : 1 00:28:57.481 Change Count : 0 00:28:57.481 ANA State : 1 00:28:57.481 Namespace Identifier : 1 00:28:57.481 00:28:57.481 Commands Supported and Effects 00:28:57.481 ============================== 00:28:57.481 Admin Commands 00:28:57.481 -------------- 00:28:57.481 Get Log Page (02h): Supported 00:28:57.481 Identify (06h): Supported 00:28:57.481 Abort (08h): Supported 00:28:57.481 Set Features (09h): Supported 00:28:57.481 Get Features (0Ah): Supported 00:28:57.481 Asynchronous Event Request (0Ch): Supported 00:28:57.481 Keep Alive (18h): Supported 00:28:57.481 I/O Commands 00:28:57.481 ------------ 00:28:57.481 Flush (00h): Supported 00:28:57.481 Write (01h): Supported 
LBA-Change 00:28:57.481 Read (02h): Supported 00:28:57.481 Write Zeroes (08h): Supported LBA-Change 00:28:57.481 Dataset Management (09h): Supported 00:28:57.481 00:28:57.481 Error Log 00:28:57.481 ========= 00:28:57.481 Entry: 0 00:28:57.481 Error Count: 0x3 00:28:57.481 Submission Queue Id: 0x0 00:28:57.481 Command Id: 0x5 00:28:57.481 Phase Bit: 0 00:28:57.481 Status Code: 0x2 00:28:57.481 Status Code Type: 0x0 00:28:57.481 Do Not Retry: 1 00:28:57.481 Error Location: 0x28 00:28:57.481 LBA: 0x0 00:28:57.481 Namespace: 0x0 00:28:57.481 Vendor Log Page: 0x0 00:28:57.481 ----------- 00:28:57.481 Entry: 1 00:28:57.481 Error Count: 0x2 00:28:57.481 Submission Queue Id: 0x0 00:28:57.481 Command Id: 0x5 00:28:57.481 Phase Bit: 0 00:28:57.481 Status Code: 0x2 00:28:57.481 Status Code Type: 0x0 00:28:57.481 Do Not Retry: 1 00:28:57.481 Error Location: 0x28 00:28:57.481 LBA: 0x0 00:28:57.481 Namespace: 0x0 00:28:57.481 Vendor Log Page: 0x0 00:28:57.481 ----------- 00:28:57.481 Entry: 2 00:28:57.481 Error Count: 0x1 00:28:57.481 Submission Queue Id: 0x0 00:28:57.481 Command Id: 0x4 00:28:57.481 Phase Bit: 0 00:28:57.481 Status Code: 0x2 00:28:57.481 Status Code Type: 0x0 00:28:57.481 Do Not Retry: 1 00:28:57.481 Error Location: 0x28 00:28:57.481 LBA: 0x0 00:28:57.481 Namespace: 0x0 00:28:57.481 Vendor Log Page: 0x0 00:28:57.481 00:28:57.481 Number of Queues 00:28:57.481 ================ 00:28:57.481 Number of I/O Submission Queues: 128 00:28:57.481 Number of I/O Completion Queues: 128 00:28:57.481 00:28:57.481 ZNS Specific Controller Data 00:28:57.481 ============================ 00:28:57.481 Zone Append Size Limit: 0 00:28:57.481 00:28:57.481 00:28:57.481 Active Namespaces 00:28:57.481 ================= 00:28:57.481 get_feature(0x05) failed 00:28:57.481 Namespace ID:1 00:28:57.481 Command Set Identifier: NVM (00h) 00:28:57.481 Deallocate: Supported 00:28:57.481 Deallocated/Unwritten Error: Not Supported 00:28:57.481 Deallocated Read Value: Unknown 00:28:57.481 Deallocate 
in Write Zeroes: Not Supported 00:28:57.481 Deallocated Guard Field: 0xFFFF 00:28:57.481 Flush: Supported 00:28:57.481 Reservation: Not Supported 00:28:57.481 Namespace Sharing Capabilities: Multiple Controllers 00:28:57.481 Size (in LBAs): 3125627568 (1490GiB) 00:28:57.481 Capacity (in LBAs): 3125627568 (1490GiB) 00:28:57.481 Utilization (in LBAs): 3125627568 (1490GiB) 00:28:57.481 UUID: 2ba84d7e-bb75-485d-8f3e-175caf109564 00:28:57.481 Thin Provisioning: Not Supported 00:28:57.481 Per-NS Atomic Units: Yes 00:28:57.481 Atomic Boundary Size (Normal): 0 00:28:57.481 Atomic Boundary Size (PFail): 0 00:28:57.481 Atomic Boundary Offset: 0 00:28:57.481 NGUID/EUI64 Never Reused: No 00:28:57.482 ANA group ID: 1 00:28:57.482 Namespace Write Protected: No 00:28:57.482 Number of LBA Formats: 1 00:28:57.482 Current LBA Format: LBA Format #00 00:28:57.482 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:57.482 00:28:57.482 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:28:57.482 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:28:57.482 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@99 -- # sync 00:28:57.482 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:28:57.482 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # set +e 00:28:57.482 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:28:57.482 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:28:57.482 rmmod nvme_tcp 00:28:57.482 rmmod nvme_fabrics 00:28:57.482 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:28:57.482 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # set -e 00:28:57.482 
12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # return 0 00:28:57.482 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # '[' -n '' ']' 00:28:57.482 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:28:57.482 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # nvmf_fini 00:28:57.482 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@264 -- # local dev 00:28:57.482 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@267 -- # remove_target_ns 00:28:57.482 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:57.482 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:57.482 12:12:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:00.040 12:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@268 -- # delete_main_bridge 00:29:00.040 12:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:29:00.040 12:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@130 -- # return 0 00:29:00.040 12:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:29:00.040 12:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:29:00.040 12:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:29:00.040 12:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:29:00.040 12:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 
00:29:00.040 12:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:29:00.040 12:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:29:00.040 12:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:29:00.040 12:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:29:00.040 12:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:29:00.040 12:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:29:00.040 12:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:29:00.040 12:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:29:00.040 12:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:29:00.040 12:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:29:00.040 12:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:29:00.040 12:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:29:00.040 12:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@41 -- # _dev=0 00:29:00.040 12:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@41 -- # dev_map=() 00:29:00.040 12:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@284 -- # iptr 00:29:00.040 12:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@542 -- # iptables-save 00:29:00.040 12:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:29:00.040 12:12:33 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@542 -- # iptables-restore 00:29:00.040 12:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:29:00.040 12:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:29:00.040 12:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # echo 0 00:29:00.040 12:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:00.040 12:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:00.040 12:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:00.040 12:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:00.040 12:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # modules=(/sys/module/nvmet/holders/*) 00:29:00.040 12:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@497 -- # modprobe -r nvmet_tcp nvmet 00:29:00.040 12:12:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:02.575 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:02.575 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:02.575 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:02.575 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:02.575 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:02.575 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:02.575 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:02.575 
0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:02.575 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:02.575 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:02.575 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:02.575 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:02.575 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:02.575 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:02.575 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:02.575 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:03.953 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:29:04.212 00:29:04.212 real 0m17.439s 00:29:04.212 user 0m4.457s 00:29:04.212 sys 0m8.774s 00:29:04.212 12:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:04.212 12:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:29:04.212 ************************************ 00:29:04.212 END TEST nvmf_identify_kernel_target 00:29:04.212 ************************************ 00:29:04.212 12:12:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:29:04.212 12:12:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:04.212 12:12:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:04.212 12:12:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.212 ************************************ 00:29:04.212 START TEST nvmf_auth_host 00:29:04.212 ************************************ 00:29:04.212 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:29:04.473 * Looking for test storage... 
00:29:04.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:04.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.473 --rc genhtml_branch_coverage=1 00:29:04.473 --rc genhtml_function_coverage=1 00:29:04.473 --rc genhtml_legend=1 00:29:04.473 --rc geninfo_all_blocks=1 00:29:04.473 --rc geninfo_unexecuted_blocks=1 00:29:04.473 00:29:04.473 ' 00:29:04.473 12:12:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:04.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.473 --rc genhtml_branch_coverage=1 00:29:04.473 --rc genhtml_function_coverage=1 00:29:04.473 --rc genhtml_legend=1 00:29:04.473 --rc geninfo_all_blocks=1 00:29:04.473 --rc geninfo_unexecuted_blocks=1 00:29:04.473 00:29:04.473 ' 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:04.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.473 --rc genhtml_branch_coverage=1 00:29:04.473 --rc genhtml_function_coverage=1 00:29:04.473 --rc genhtml_legend=1 00:29:04.473 --rc geninfo_all_blocks=1 00:29:04.473 --rc geninfo_unexecuted_blocks=1 00:29:04.473 00:29:04.473 ' 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:04.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.473 --rc genhtml_branch_coverage=1 00:29:04.473 --rc genhtml_function_coverage=1 00:29:04.473 --rc genhtml_legend=1 00:29:04.473 --rc geninfo_all_blocks=1 00:29:04.473 --rc geninfo_unexecuted_blocks=1 00:29:04.473 00:29:04.473 ' 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@5 -- # export PATH 00:29:04.473 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.474 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:29:04.474 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:29:04.474 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:04.474 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:29:04.474 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@50 -- # : 0 00:29:04.474 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:29:04.474 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:29:04.474 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:29:04.474 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:04.474 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:04.474 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:29:04.474 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:29:04.474 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:29:04.474 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:29:04.474 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@54 -- # have_pci_nics=0 00:29:04.474 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:29:04.474 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:29:04.474 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:29:04.474 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:29:04.474 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:04.474 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:04.474 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:29:04.474 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:29:04.474 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:29:04.474 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:29:04.474 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:04.474 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # prepare_net_devs 00:29:04.474 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # local -g is_hw=no 00:29:04.474 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # remove_target_ns 
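The `common.sh: line 31: [: : integer expression expected` message above comes from an arithmetic `test` whose left operand expanded to the empty string (`'[' '' -eq 1 ']'`): `-eq` requires integer operands. A minimal sketch of the failure mode and the usual guard, using a hypothetical `FORCE_TCP` variable as a stand-in for whichever flag was unset:

```shell
# Reproduction of the "[: : integer expression expected" error logged at
# common.sh line 31: -eq needs integer operands, and an unset variable
# expands to the empty string.
FORCE_TCP=""                       # hypothetical stand-in for the unset flag

[ "$FORCE_TCP" -eq 1 ] 2>/dev/null
status=$?                          # > 1 signals a test usage error, not "false"

# Defaulting the expansion keeps the comparison well-formed.
if [ "${FORCE_TCP:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag not set"            # taken here: empty defaults to 0
fi
```

The trace continues past the error because the failed `[` sits in an `if` condition, where any non-zero status, including a usage error, simply selects the else branch.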
00:29:04.474 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:04.474 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:29:04.474 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:04.474 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:29:04.474 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:29:04.474 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # xtrace_disable 00:29:04.474 12:12:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.052 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:11.052 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@131 -- # pci_devs=() 00:29:11.052 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@131 -- # local -a pci_devs 00:29:11.052 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@132 -- # pci_net_devs=() 00:29:11.052 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:29:11.052 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@133 -- # pci_drivers=() 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@133 -- # local -A pci_drivers 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@135 -- # net_devs=() 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@135 -- # local -ga net_devs 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@136 -- # e810=() 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@136 -- # local -ga e810 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@137 -- # x722=() 
00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@137 -- # local -ga x722 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@138 -- # mlx=() 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@138 -- # local -ga mlx 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # [[ 
tcp == rdma ]] 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:11.053 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:11.053 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:11.053 12:12:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # [[ up == up ]] 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:11.053 Found net devices under 0000:86:00.0: cvl_0_0 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@234 -- # [[ up == up ]] 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:11.053 Found net devices under 0000:86:00.1: cvl_0_1 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # is_hw=yes 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@257 -- # create_target_ns 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:29:11.053 12:12:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@27 -- # local -gA dev_map 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@28 -- # local -g _dev 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@44 -- # ips=() 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:29:11.053 12:12:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@11 -- # local val=167772161 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:29:11.053 10.0.0.1 00:29:11.053 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@11 -- # local val=167772162 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee 
/sys/class/net/cvl_0_1/ifalias' 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:29:11.054 10.0.0.2 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:29:11.054 12:12:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@38 -- # ping_ips 1 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@107 -- # local dev=initiator0 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:29:11.054 12:12:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:29:11.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:11.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.354 ms 00:29:11.054 00:29:11.054 --- 10.0.0.1 ping statistics --- 00:29:11.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:11.054 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # get_net_dev target0 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@107 -- # local dev=target0 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:29:11.054 12:12:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:29:11.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:11.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:29:11.054 00:29:11.054 --- 10.0.0.2 ping statistics --- 00:29:11.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:11.054 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # (( pair++ )) 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # return 0 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:29:11.054 12:12:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@107 -- # local dev=initiator0 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:29:11.054 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:29:11.054 12:12:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@107 -- # local dev=initiator1 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # return 1 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # dev= 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@169 -- # return 0 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # get_net_dev target0 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@107 -- # local dev=target0 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:29:11.055 12:12:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # get_net_dev target1 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@107 -- # local dev=target1 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:29:11.055 12:12:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # return 1 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # dev= 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@169 -- # return 0 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # nvmfpid=202688 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # waitforlisten 202688 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
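The trace above shows `setup.sh` resolving logical names (`initiator0`, `target0`, `initiator1`, ...) to kernel netdevs and reading each address out of the interface's `ifalias` file, falling back to an empty value when no mapping exists (hence `NVMF_SECOND_INITIATOR_IP=` and `NVMF_SECOND_TARGET_IP=`). A minimal sketch of that flow, using a scratch directory in place of `/sys/class/net` and a hypothetical `NET_MAP` table (the real script derives the mapping differently):

```shell
#!/usr/bin/env bash
# Sketch of the get_ip_address flow from the log: map a logical device name
# to a kernel netdev, then read the IP stashed in that interface's ifalias.
# SYSFS and NET_MAP are stand-ins for the demo, not SPDK's real helpers.
set -euo pipefail

SYSFS=$(mktemp -d)                       # fake /sys/class/net
trap 'rm -rf "$SYSFS"' EXIT
declare -A NET_MAP=([initiator0]=cvl_0_0 [target0]=cvl_0_1)

# Populate the fake tree the way the real ifalias files look in the log.
mkdir -p "$SYSFS/cvl_0_0" "$SYSFS/cvl_0_1"
echo 10.0.0.1 > "$SYSFS/cvl_0_0/ifalias"
echo 10.0.0.2 > "$SYSFS/cvl_0_1/ifalias"

get_net_dev() {                          # logical name -> kernel netdev
    local dev=${NET_MAP[$1]:-}
    [[ -n $dev ]] || return 1            # e.g. initiator1 has no mapping
    echo "$dev"
}

get_ip_address() {                       # logical name -> IP (empty if unmapped)
    local dev ip
    dev=$(get_net_dev "$1") || return 0  # mirrors the "dev=" / "return 0" lines
    ip=$(cat "$SYSFS/$dev/ifalias")
    [[ -n $ip ]] && echo "$ip"
}

get_ip_address initiator0                # prints 10.0.0.1
get_ip_address target0                   # prints 10.0.0.2
get_ip_address initiator1                # prints nothing, still returns 0
```

Assigning `dev` separately from `local dev` matters here: `local dev=$(...)` would mask the helper's exit status and the `|| return 0` fallback would never fire.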
common/autotest_common.sh@835 -- # '[' -z 202688 ']' 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@526 -- # local -A digests 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=null 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=63f19a7c329dccdf369b4a2313f9ac20 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.EK6 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 63f19a7c329dccdf369b4a2313f9ac20 0 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 63f19a7c329dccdf369b4a2313f9ac20 0 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=63f19a7c329dccdf369b4a2313f9ac20 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=0 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.EK6 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.EK6 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.EK6 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local 
digest len file key 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha512 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=64 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=17059583de3fac115e500edf09c7cff32cb52fee4533a4029adfaf703314c82d 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.xlu 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 17059583de3fac115e500edf09c7cff32cb52fee4533a4029adfaf703314c82d 3 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 17059583de3fac115e500edf09c7cff32cb52fee4533a4029adfaf703314c82d 3 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=17059583de3fac115e500edf09c7cff32cb52fee4533a4029adfaf703314c82d 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=3 00:29:11.055 12:12:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:29:11.055 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.xlu 00:29:11.055 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.xlu 00:29:11.055 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.xlu 00:29:11.055 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:29:11.055 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:29:11.055 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:11.055 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:29:11.055 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=null 00:29:11.055 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=48 00:29:11.055 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:11.055 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=ddea58638daedd0c1ee21b19b3eaf404992fbaf2d85f4609 00:29:11.055 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:29:11.055 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.J8r 00:29:11.055 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key ddea58638daedd0c1ee21b19b3eaf404992fbaf2d85f4609 0 00:29:11.055 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 ddea58638daedd0c1ee21b19b3eaf404992fbaf2d85f4609 0 00:29:11.055 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:29:11.055 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:29:11.055 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=ddea58638daedd0c1ee21b19b3eaf404992fbaf2d85f4609 00:29:11.055 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@506 -- # digest=0 00:29:11.055 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:29:11.055 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.J8r 00:29:11.055 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.J8r 00:29:11.055 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.J8r 00:29:11.055 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:29:11.055 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:29:11.055 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:11.055 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:29:11.055 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha384 00:29:11.055 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=48 00:29:11.055 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:11.055 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=2ece1daa6a0ebec0895eee2676d24a6db9edd10263ef3437 00:29:11.055 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:29:11.055 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.5Zv 00:29:11.055 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 2ece1daa6a0ebec0895eee2676d24a6db9edd10263ef3437 2 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 2ece1daa6a0ebec0895eee2676d24a6db9edd10263ef3437 2 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key 
digest 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=2ece1daa6a0ebec0895eee2676d24a6db9edd10263ef3437 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=2 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.5Zv 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.5Zv 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.5Zv 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha256 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=852334be62bde246f97a1fa6ced50b74 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.6El 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 
852334be62bde246f97a1fa6ced50b74 1 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 852334be62bde246f97a1fa6ced50b74 1 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=852334be62bde246f97a1fa6ced50b74 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=1 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.6El 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.6El 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.6El 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha256 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=52df1d07563656ed868376d091982603 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp 
-t spdk.key-sha256.XXX 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.har 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 52df1d07563656ed868376d091982603 1 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 52df1d07563656ed868376d091982603 1 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=52df1d07563656ed868376d091982603 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=1 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.har 00:29:11.056 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.har 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.har 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha384 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=48 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # 
xxd -p -c0 -l 24 /dev/urandom 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=77ff2b8fba5a98359724b1cec0a4ff769e25a72c47d66559 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.Uiz 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 77ff2b8fba5a98359724b1cec0a4ff769e25a72c47d66559 2 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 77ff2b8fba5a98359724b1cec0a4ff769e25a72c47d66559 2 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=77ff2b8fba5a98359724b1cec0a4ff769e25a72c47d66559 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=2 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.Uiz 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.Uiz 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Uiz 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@526 -- # local -A digests 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=null 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=32e12ce1a6878e219eb1c03e43758bc8 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.569 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 32e12ce1a6878e219eb1c03e43758bc8 0 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 32e12ce1a6878e219eb1c03e43758bc8 0 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=32e12ce1a6878e219eb1c03e43758bc8 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=0 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.569 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.569 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.569 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local 
digest len file key 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha512 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=64 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=2790d51867e1f42fcc6fe4a1c901a7617a094c9b74b525fe866b3587ce159816 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.Rm6 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 2790d51867e1f42fcc6fe4a1c901a7617a094c9b74b525fe866b3587ce159816 3 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 2790d51867e1f42fcc6fe4a1c901a7617a094c9b74b525fe866b3587ce159816 3 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=2790d51867e1f42fcc6fe4a1c901a7617a094c9b74b525fe866b3587ce159816 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=3 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.Rm6 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.Rm6 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Rm6 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 202688 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 202688 ']' 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:11.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
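Each `gen_dhchap_key` call above draws random hex from `/dev/urandom` with `xxd`, then hands it to `format_dhchap_key`, whose embedded `python -` step produces the secret written to `/tmp/spdk.key-*`. The Python body is not shown in the trace; the sketch below assumes the standard NVMe DH-HMAC-CHAP secret representation (`DHHC-1:<digest>:<base64(key || crc32(key), little-endian)>:`, as produced by e.g. `nvme gen-dhchap-key`) rather than SPDK's literal script:

```shell
#!/usr/bin/env bash
# Sketch of the gen_dhchap_key / format_dhchap_key pair from the log.
# The CRC32-append-then-base64 transformation is an assumption based on the
# NVMe DH-HMAC-CHAP secret format, not a copy of SPDK's python snippet.
set -euo pipefail

format_dhchap_key() {                    # $1 = hex key, $2 = digest id (0..3)
    python3 - "$1" "$2" <<'EOF'
import base64, binascii, struct, sys
raw = bytes.fromhex(sys.argv[1])
# Append the CRC32 of the key, little-endian, then base64 the whole blob.
blob = raw + struct.pack("<I", binascii.crc32(raw))
print(f"DHHC-1:{int(sys.argv[2]):02}:{base64.b64encode(blob).decode()}:")
EOF
}

gen_dhchap_key() {                       # $1 = digest id, $2 = key length in hex chars
    local hex
    hex=$(xxd -p -c0 -l $(( $2 / 2 )) /dev/urandom)
    format_dhchap_key "$hex" "$1"
}

# Same 32-hex-char null-digest key the log generated first:
format_dhchap_key 63f19a7c329dccdf369b4a2313f9ac20 0
```

Digest ids match the `digests` table in the trace (`null`=0, `sha256`=1, `sha384`=2, `sha512`=3); a 32-hex-char key yields 16 key bytes plus 4 CRC bytes inside the base64 payload.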
00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:11.333 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.EK6 00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.xlu ]] 00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xlu 00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.J8r 00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.5Zv ]]
00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.5Zv
00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.6El
00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.har ]]
00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.har
00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Uiz
00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.569 ]]
00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.569
00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Rm6
00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]]
00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init
00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1
00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@434 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1
00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # nvmet=/sys/kernel/config/nvmet
00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@437 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@439 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # local block nvme
00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ ! -e /sys/module/nvmet ]]
00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # modprobe nvmet
00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ -e /sys/kernel/config/nvmet ]]
00:29:11.592 12:12:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@449 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:29:14.873 Waiting for block devices as requested
00:29:14.873 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:29:14.873 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:29:14.873 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:29:14.873 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:29:14.873 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:29:14.873 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:29:14.873 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:29:14.873 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:29:15.132 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:29:15.132 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:29:15.132 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:29:15.132 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:29:15.132 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:29:15.391 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:29:15.391 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:29:15.391 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:29:15.650 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@452 -- # for block in /sys/block/nvme*
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n1 ]]
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # is_block_zoned nvme0n1
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # block_in_use nvme0n1
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:29:16.219 No valid GPT data, bailing
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt=
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n1
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@458 -- # [[ -b /dev/nvme0n1 ]]
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@462 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # echo 1
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@470 -- # echo /dev/nvme0n1
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@471 -- # echo 1
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@473 -- # echo 10.0.0.1
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # echo tcp
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@475 -- # echo 4420
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # echo ipv4
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420
00:29:16.219 
00:29:16.219 Discovery Log Number of Records 2, Generation counter 2
00:29:16.219 =====Discovery Log Entry 0======
00:29:16.219 trtype: tcp
00:29:16.219 adrfam: ipv4
00:29:16.219 subtype: current discovery subsystem
00:29:16.219 treq: not specified, sq flow control disable supported
00:29:16.219 portid: 1
00:29:16.219 trsvcid: 4420
00:29:16.219 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:29:16.219 traddr: 10.0.0.1
00:29:16.219 eflags: none
00:29:16.219 sectype: none
00:29:16.219 =====Discovery Log Entry 1======
00:29:16.219 trtype: tcp
00:29:16.219 adrfam: ipv4
00:29:16.219 subtype: nvme subsystem
00:29:16.219 treq: not specified, sq flow control disable supported
00:29:16.219 portid: 1
00:29:16.219 trsvcid: 4420
00:29:16.219 subnqn: nqn.2024-02.io.spdk:cnode0
00:29:16.219 traddr: 10.0.0.1
00:29:16.219 eflags: none
00:29:16.219 sectype: none
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGRlYTU4NjM4ZGFlZGQwYzFlZTIxYjE5YjNlYWY0MDQ5OTJmYmFmMmQ4NWY0NjA5+1RSgg==:
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==:
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRlYTU4NjM4ZGFlZGQwYzFlZTIxYjE5YjNlYWY0MDQ5OTJmYmFmMmQ4NWY0NjA5+1RSgg==:
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==: ]]
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==:
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.219 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:16.479 nvme0n1
00:29:16.479 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.479 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:16.479 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:16.479 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.479 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:16.479 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.479 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:16.479 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:16.479 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.479 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:16.479 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.479 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:29:16.479 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:29:16.479 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:16.479 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0
00:29:16.479 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:16.479 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:16.479 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:29:16.479 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:29:16.479 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNmMTlhN2MzMjlkY2NkZjM2OWI0YTIzMTNmOWFjMjDSn53l:
00:29:16.479 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTcwNTk1ODNkZTNmYWMxMTVlNTAwZWRmMDljN2NmZjMyY2I1MmZlZTQ1MzNhNDAyOWFkZmFmNzAzMzE0YzgyZFWoOa4=:
00:29:16.479 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:16.479 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:29:16.479 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNmMTlhN2MzMjlkY2NkZjM2OWI0YTIzMTNmOWFjMjDSn53l:
00:29:16.479 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTcwNTk1ODNkZTNmYWMxMTVlNTAwZWRmMDljN2NmZjMyY2I1MmZlZTQ1MzNhNDAyOWFkZmFmNzAzMzE0YzgyZFWoOa4=: ]]
00:29:16.479 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTcwNTk1ODNkZTNmYWMxMTVlNTAwZWRmMDljN2NmZjMyY2I1MmZlZTQ1MzNhNDAyOWFkZmFmNzAzMzE0YzgyZFWoOa4=:
00:29:16.479 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0
00:29:16.479 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:16.479 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:29:16.479 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:29:16.479 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:29:16.479 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:16.479 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:29:16.479 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.479 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:16.479 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.479 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:29:16.479 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.479 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:16.752 nvme0n1
00:29:16.752 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.752 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:16.752 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:16.752 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.752 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:16.752 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.752 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:16.752 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:16.752 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.752 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:16.752 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.752 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:16.752 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:29:16.752 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:16.752 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:16.752 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:29:16.752 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:29:16.752 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGRlYTU4NjM4ZGFlZGQwYzFlZTIxYjE5YjNlYWY0MDQ5OTJmYmFmMmQ4NWY0NjA5+1RSgg==:
00:29:16.752 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==:
00:29:16.752 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:16.752 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:29:16.752 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRlYTU4NjM4ZGFlZGQwYzFlZTIxYjE5YjNlYWY0MDQ5OTJmYmFmMmQ4NWY0NjA5+1RSgg==:
00:29:16.752 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==: ]]
00:29:16.752 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==:
00:29:16.752 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1
00:29:16.752 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:16.752 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:29:16.752 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:29:16.752 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:29:16.752 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:16.752 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:29:16.752 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.752 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:16.752 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.752 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:29:16.752 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.752 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:17.012 nvme0n1
00:29:17.012 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:17.012 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:17.012 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:17.012 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:17.012 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:17.012 12:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:17.012 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:17.012 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:17.012 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:17.012 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:17.012 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:17.012 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:17.012 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:29:17.012 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:17.012 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:17.012 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:29:17.012 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:29:17.012 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODUyMzM0YmU2MmJkZTI0NmY5N2ExZmE2Y2VkNTBiNzQYmII1:
00:29:17.012 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI:
00:29:17.012 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:17.012 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:29:17.012 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODUyMzM0YmU2MmJkZTI0NmY5N2ExZmE2Y2VkNTBiNzQYmII1:
00:29:17.012 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI: ]]
00:29:17.012 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI:
00:29:17.012 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2
00:29:17.012 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:17.012 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:29:17.012 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:29:17.012 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:29:17.012 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:17.012 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:29:17.012 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:17.012 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:17.012 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:17.012 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:29:17.012 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:17.012 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:17.012 nvme0n1
00:29:17.012 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:17.012 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:17.012 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:17.012 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:17.012 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:17.012 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:17.272 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:17.272 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:17.272 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:17.272 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:17.272 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:17.272 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:17.272 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3
00:29:17.272 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:17.272 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:17.272 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:29:17.272 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:29:17.272 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzdmZjJiOGZiYTVhOTgzNTk3MjRiMWNlYzBhNGZmNzY5ZTI1YTcyYzQ3ZDY2NTU5itkAIg==:
00:29:17.272 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzJlMTJjZTFhNjg3OGUyMTllYjFjMDNlNDM3NThiYzhnKNvU:
00:29:17.272 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:17.272 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:29:17.272 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzdmZjJiOGZiYTVhOTgzNTk3MjRiMWNlYzBhNGZmNzY5ZTI1YTcyYzQ3ZDY2NTU5itkAIg==:
00:29:17.272 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzJlMTJjZTFhNjg3OGUyMTllYjFjMDNlNDM3NThiYzhnKNvU: ]]
00:29:17.272 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzJlMTJjZTFhNjg3OGUyMTllYjFjMDNlNDM3NThiYzhnKNvU:
00:29:17.272 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3
00:29:17.272 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:17.272 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:29:17.272 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:29:17.272 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:29:17.272 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:17.272 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:29:17.272 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:17.272 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:17.272 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:17.272 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:29:17.272 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:17.272 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:17.272 nvme0n1
00:29:17.272 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:17.272 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:17.272 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:17.272 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:17.272 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:17.272 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:17.272 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:17.272 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:17.272 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:17.272 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc5MGQ1MTg2N2UxZjQyZmNjNmZlNGExYzkwMWE3NjE3YTA5NGM5Yjc0YjUyNWZlODY2YjM1ODdjZTE1OTgxNmTv0X8=:
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc5MGQ1MTg2N2UxZjQyZmNjNmZlNGExYzkwMWE3NjE3YTA5NGM5Yjc0YjUyNWZlODY2YjM1ODdjZTE1OTgxNmTv0X8=:
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:17.532 nvme0n1
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNmMTlhN2MzMjlkY2NkZjM2OWI0YTIzMTNmOWFjMjDSn53l:
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTcwNTk1ODNkZTNmYWMxMTVlNTAwZWRmMDljN2NmZjMyY2I1MmZlZTQ1MzNhNDAyOWFkZmFmNzAzMzE0YzgyZFWoOa4=:
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNmMTlhN2MzMjlkY2NkZjM2OWI0YTIzMTNmOWFjMjDSn53l:
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTcwNTk1ODNkZTNmYWMxMTVlNTAwZWRmMDljN2NmZjMyY2I1MmZlZTQ1MzNhNDAyOWFkZmFmNzAzMzE0YzgyZFWoOa4=: ]]
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTcwNTk1ODNkZTNmYWMxMTVlNTAwZWRmMDljN2NmZjMyY2I1MmZlZTQ1MzNhNDAyOWFkZmFmNzAzMzE0YzgyZFWoOa4=:
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:17.532 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:17.791 nvme0n1
00:29:17.791 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:17.791 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:17.792 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:17.792 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:17.792 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:29:17.792 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.792 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:17.792 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:17.792 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.792 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.792 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.792 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:17.792 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:29:17.792 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:17.792 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:17.792 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:17.792 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:17.792 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGRlYTU4NjM4ZGFlZGQwYzFlZTIxYjE5YjNlYWY0MDQ5OTJmYmFmMmQ4NWY0NjA5+1RSgg==: 00:29:17.792 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==: 00:29:17.792 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:17.792 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:17.792 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRlYTU4NjM4ZGFlZGQwYzFlZTIxYjE5YjNlYWY0MDQ5OTJmYmFmMmQ4NWY0NjA5+1RSgg==: 00:29:17.792 
12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==: ]] 00:29:17.792 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==: 00:29:17.792 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:29:17.792 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:17.792 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:17.792 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:17.792 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:17.792 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:17.792 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:17.792 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.792 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.792 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.792 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:17.792 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.792 12:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.049 nvme0n1 00:29:18.049 12:12:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.049 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.049 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:18.049 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.049 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.049 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.049 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.049 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.049 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.049 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.049 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.049 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:18.049 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:29:18.049 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:18.049 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:18.049 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:18.049 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:18.049 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODUyMzM0YmU2MmJkZTI0NmY5N2ExZmE2Y2VkNTBiNzQYmII1: 00:29:18.049 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 
-- # ckey=DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI: 00:29:18.049 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:18.049 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:18.049 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODUyMzM0YmU2MmJkZTI0NmY5N2ExZmE2Y2VkNTBiNzQYmII1: 00:29:18.049 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI: ]] 00:29:18.049 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI: 00:29:18.049 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:29:18.049 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:18.049 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:18.049 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:18.049 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:18.049 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:18.049 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:18.049 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.049 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.049 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.049 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:18.049 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.049 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.308 nvme0n1 00:29:18.308 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.308 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.308 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:18.308 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.308 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.308 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.308 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.308 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.308 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.308 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.308 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.308 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:18.308 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:29:18.308 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:18.308 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:18.308 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:18.308 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:18.308 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzdmZjJiOGZiYTVhOTgzNTk3MjRiMWNlYzBhNGZmNzY5ZTI1YTcyYzQ3ZDY2NTU5itkAIg==: 00:29:18.308 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzJlMTJjZTFhNjg3OGUyMTllYjFjMDNlNDM3NThiYzhnKNvU: 00:29:18.308 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:18.308 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:18.308 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzdmZjJiOGZiYTVhOTgzNTk3MjRiMWNlYzBhNGZmNzY5ZTI1YTcyYzQ3ZDY2NTU5itkAIg==: 00:29:18.308 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzJlMTJjZTFhNjg3OGUyMTllYjFjMDNlNDM3NThiYzhnKNvU: ]] 00:29:18.308 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzJlMTJjZTFhNjg3OGUyMTllYjFjMDNlNDM3NThiYzhnKNvU: 00:29:18.308 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:29:18.308 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:18.308 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:18.308 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:18.308 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:18.308 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:18.308 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:18.308 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.308 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.308 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.308 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:18.308 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.308 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.567 nvme0n1 00:29:18.567 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.567 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.567 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:18.567 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.567 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.567 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.567 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.567 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.567 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.567 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.567 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.567 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:18.567 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:29:18.567 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:18.568 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:18.568 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:18.568 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:18.568 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc5MGQ1MTg2N2UxZjQyZmNjNmZlNGExYzkwMWE3NjE3YTA5NGM5Yjc0YjUyNWZlODY2YjM1ODdjZTE1OTgxNmTv0X8=: 00:29:18.568 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:18.568 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:18.568 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:18.568 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc5MGQ1MTg2N2UxZjQyZmNjNmZlNGExYzkwMWE3NjE3YTA5NGM5Yjc0YjUyNWZlODY2YjM1ODdjZTE1OTgxNmTv0X8=: 00:29:18.568 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:18.568 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:29:18.568 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:18.568 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:18.568 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:18.568 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:18.568 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:18.568 12:12:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:18.568 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.568 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.568 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.568 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:18.568 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.568 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.827 nvme0n1 00:29:18.827 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.827 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.827 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:18.827 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.827 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.827 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.827 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.827 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.827 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.827 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:29:18.827 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.827 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:18.827 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:18.827 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:29:18.827 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:18.827 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:18.827 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:18.827 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:18.827 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNmMTlhN2MzMjlkY2NkZjM2OWI0YTIzMTNmOWFjMjDSn53l: 00:29:18.827 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTcwNTk1ODNkZTNmYWMxMTVlNTAwZWRmMDljN2NmZjMyY2I1MmZlZTQ1MzNhNDAyOWFkZmFmNzAzMzE0YzgyZFWoOa4=: 00:29:18.827 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:18.827 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:18.827 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNmMTlhN2MzMjlkY2NkZjM2OWI0YTIzMTNmOWFjMjDSn53l: 00:29:18.827 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTcwNTk1ODNkZTNmYWMxMTVlNTAwZWRmMDljN2NmZjMyY2I1MmZlZTQ1MzNhNDAyOWFkZmFmNzAzMzE0YzgyZFWoOa4=: ]] 00:29:18.827 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTcwNTk1ODNkZTNmYWMxMTVlNTAwZWRmMDljN2NmZjMyY2I1MmZlZTQ1MzNhNDAyOWFkZmFmNzAzMzE0YzgyZFWoOa4=: 00:29:18.827 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe4096 0 00:29:18.827 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:18.827 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:18.827 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:18.827 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:18.827 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:18.827 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:18.827 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.827 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.827 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.827 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:18.827 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.827 12:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.086 nvme0n1 00:29:19.086 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.086 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:19.086 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.086 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.086 12:12:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.086 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.086 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.086 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:19.086 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.086 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.086 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.086 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:19.086 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:29:19.086 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:19.086 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:19.086 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:19.086 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:19.086 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGRlYTU4NjM4ZGFlZGQwYzFlZTIxYjE5YjNlYWY0MDQ5OTJmYmFmMmQ4NWY0NjA5+1RSgg==: 00:29:19.086 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==: 00:29:19.086 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:19.086 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:19.086 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZGRlYTU4NjM4ZGFlZGQwYzFlZTIxYjE5YjNlYWY0MDQ5OTJmYmFmMmQ4NWY0NjA5+1RSgg==: 00:29:19.086 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==: ]] 00:29:19.086 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==: 00:29:19.086 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:29:19.086 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:19.086 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:19.086 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:19.086 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:19.086 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:19.086 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:19.086 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.086 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.345 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.345 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:19.345 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.345 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x
00:29:19.345 nvme0n1
00:29:19.345 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:19.345 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:19.345 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:19.345 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:19.345 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:19.604 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:19.604 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:19.604 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:19.604 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:19.604 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:19.604 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:19.604 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:19.604 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2
00:29:19.604 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:19.604 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:19.604 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:29:19.604 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:29:19.604 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODUyMzM0YmU2MmJkZTI0NmY5N2ExZmE2Y2VkNTBiNzQYmII1:
00:29:19.604 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI:
00:29:19.604 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:19.604 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:29:19.604 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODUyMzM0YmU2MmJkZTI0NmY5N2ExZmE2Y2VkNTBiNzQYmII1:
00:29:19.604 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI: ]]
00:29:19.604 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI:
00:29:19.604 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2
00:29:19.604 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:19.604 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:29:19.604 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:29:19.604 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:29:19.604 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:19.604 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:29:19.604 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:19.604 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:19.604 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:19.604 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:29:19.604 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:19.604 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:19.863 nvme0n1
00:29:19.863 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:19.863 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:19.863 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:19.863 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:19.863 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:19.863 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:19.863 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:19.863 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:19.863 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:19.863 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:19.863 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:19.863 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:19.863 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3
00:29:19.863 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:19.863 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:19.863 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:29:19.863 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:29:19.863 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzdmZjJiOGZiYTVhOTgzNTk3MjRiMWNlYzBhNGZmNzY5ZTI1YTcyYzQ3ZDY2NTU5itkAIg==:
00:29:19.863 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzJlMTJjZTFhNjg3OGUyMTllYjFjMDNlNDM3NThiYzhnKNvU:
00:29:19.863 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:19.863 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:29:19.863 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzdmZjJiOGZiYTVhOTgzNTk3MjRiMWNlYzBhNGZmNzY5ZTI1YTcyYzQ3ZDY2NTU5itkAIg==:
00:29:19.863 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzJlMTJjZTFhNjg3OGUyMTllYjFjMDNlNDM3NThiYzhnKNvU: ]]
00:29:19.863 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzJlMTJjZTFhNjg3OGUyMTllYjFjMDNlNDM3NThiYzhnKNvU:
00:29:19.863 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3
00:29:19.863 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:19.863 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:29:19.863 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:29:19.863 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:29:19.863 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:19.863 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:29:19.863 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:19.863 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:19.863 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:19.863 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:29:19.863 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:19.863 12:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:20.122 nvme0n1
00:29:20.122 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:20.122 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:20.122 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:20.122 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:20.122 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:20.122 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:20.122 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:20.122 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:20.122 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:20.122 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:20.122 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:20.122 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:20.122 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4
00:29:20.122 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:20.122 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:20.122 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:29:20.122 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:29:20.122 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc5MGQ1MTg2N2UxZjQyZmNjNmZlNGExYzkwMWE3NjE3YTA5NGM5Yjc0YjUyNWZlODY2YjM1ODdjZTE1OTgxNmTv0X8=:
00:29:20.122 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:29:20.122 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:20.122 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:29:20.122 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc5MGQ1MTg2N2UxZjQyZmNjNmZlNGExYzkwMWE3NjE3YTA5NGM5Yjc0YjUyNWZlODY2YjM1ODdjZTE1OTgxNmTv0X8=:
00:29:20.122 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:29:20.122 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4
00:29:20.122 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:20.122 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:29:20.122 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:29:20.122 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:29:20.122 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:20.122 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:29:20.122 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:20.122 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:20.122 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:20.122 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:29:20.122 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:20.122 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:20.379 nvme0n1
00:29:20.379 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:20.379 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:20.379 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:20.379 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:20.379 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:20.379 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:20.379 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:20.379 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:20.379 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:20.379 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:20.379 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:20.379 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:29:20.379 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:20.379 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0
00:29:20.379 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:20.379 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:20.379 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:29:20.379 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:29:20.379 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNmMTlhN2MzMjlkY2NkZjM2OWI0YTIzMTNmOWFjMjDSn53l:
00:29:20.379 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTcwNTk1ODNkZTNmYWMxMTVlNTAwZWRmMDljN2NmZjMyY2I1MmZlZTQ1MzNhNDAyOWFkZmFmNzAzMzE0YzgyZFWoOa4=:
00:29:20.379 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:20.379 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:29:20.379 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNmMTlhN2MzMjlkY2NkZjM2OWI0YTIzMTNmOWFjMjDSn53l:
00:29:20.379 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTcwNTk1ODNkZTNmYWMxMTVlNTAwZWRmMDljN2NmZjMyY2I1MmZlZTQ1MzNhNDAyOWFkZmFmNzAzMzE0YzgyZFWoOa4=: ]]
00:29:20.379 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTcwNTk1ODNkZTNmYWMxMTVlNTAwZWRmMDljN2NmZjMyY2I1MmZlZTQ1MzNhNDAyOWFkZmFmNzAzMzE0YzgyZFWoOa4=:
00:29:20.379 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0
00:29:20.379 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:20.379 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:29:20.379 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:29:20.379 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:29:20.379 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:20.379 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:29:20.379 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:20.379 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:20.379 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:20.379 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:29:20.380 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:20.380 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:20.944 nvme0n1
00:29:20.944 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:20.944 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:20.944 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:20.944 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:20.944 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:20.944 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:20.944 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:20.944 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:20.944 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:20.944 12:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:20.944 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:20.944 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:20.944 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1
00:29:20.944 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:20.944 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:20.944 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:29:20.944 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:29:20.944 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGRlYTU4NjM4ZGFlZGQwYzFlZTIxYjE5YjNlYWY0MDQ5OTJmYmFmMmQ4NWY0NjA5+1RSgg==:
00:29:20.944 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==:
00:29:20.944 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:20.944 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:29:20.944 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRlYTU4NjM4ZGFlZGQwYzFlZTIxYjE5YjNlYWY0MDQ5OTJmYmFmMmQ4NWY0NjA5+1RSgg==:
00:29:20.944 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==: ]]
00:29:20.944 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==:
00:29:20.944 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1
00:29:20.944 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:20.944 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:29:20.944 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:29:20.944 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:29:20.944 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:20.944 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:29:20.944 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:20.944 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:20.944 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:20.944 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:29:20.944 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:20.944 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:21.202 nvme0n1
00:29:21.202 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:21.460 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:21.460 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:21.460 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:21.460 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:21.460 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:21.460 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:21.460 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:21.460 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:21.460 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:21.460 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:21.460 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:21.460 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2
00:29:21.460 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:21.460 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:21.460 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:29:21.460 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:29:21.460 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODUyMzM0YmU2MmJkZTI0NmY5N2ExZmE2Y2VkNTBiNzQYmII1:
00:29:21.460 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI:
00:29:21.460 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:21.460 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:29:21.460 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODUyMzM0YmU2MmJkZTI0NmY5N2ExZmE2Y2VkNTBiNzQYmII1:
00:29:21.460 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI: ]]
00:29:21.460 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI:
00:29:21.460 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2
00:29:21.460 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:21.460 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:29:21.460 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:29:21.460 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:29:21.460 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:21.460 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:29:21.460 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:21.460 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:21.460 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:21.460 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:29:21.460 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:21.460 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:21.718 nvme0n1
00:29:21.718 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:21.718 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:21.718 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:21.718 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:21.718 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:21.718 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:21.718 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:21.718 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:21.718 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:21.718 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:21.718 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:21.718 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:21.718 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3
00:29:21.718 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:21.718 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:21.718 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:29:21.718 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:29:21.718 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzdmZjJiOGZiYTVhOTgzNTk3MjRiMWNlYzBhNGZmNzY5ZTI1YTcyYzQ3ZDY2NTU5itkAIg==:
00:29:21.718 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzJlMTJjZTFhNjg3OGUyMTllYjFjMDNlNDM3NThiYzhnKNvU:
00:29:21.718 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:21.718 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:29:21.718 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzdmZjJiOGZiYTVhOTgzNTk3MjRiMWNlYzBhNGZmNzY5ZTI1YTcyYzQ3ZDY2NTU5itkAIg==:
00:29:21.718 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzJlMTJjZTFhNjg3OGUyMTllYjFjMDNlNDM3NThiYzhnKNvU: ]]
00:29:21.718 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzJlMTJjZTFhNjg3OGUyMTllYjFjMDNlNDM3NThiYzhnKNvU:
00:29:21.718 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3
00:29:21.718 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:21.718 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:29:21.718 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:29:21.718 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:29:21.718 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:21.718 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:29:21.718 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:21.718 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:21.976 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:21.976 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:29:21.976 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:21.976 12:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:22.233 nvme0n1
00:29:22.233 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:22.233 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:22.233 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:22.233 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:22.233 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:22.233 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:22.233 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:22.233 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:22.233 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:22.233 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:22.233 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:22.233 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:22.233 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4
00:29:22.233 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:22.233 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:22.233 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:29:22.233 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:29:22.233 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc5MGQ1MTg2N2UxZjQyZmNjNmZlNGExYzkwMWE3NjE3YTA5NGM5Yjc0YjUyNWZlODY2YjM1ODdjZTE1OTgxNmTv0X8=:
00:29:22.233 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:29:22.233 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:22.233 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:29:22.233 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc5MGQ1MTg2N2UxZjQyZmNjNmZlNGExYzkwMWE3NjE3YTA5NGM5Yjc0YjUyNWZlODY2YjM1ODdjZTE1OTgxNmTv0X8=:
00:29:22.233 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:29:22.233 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4
00:29:22.233 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:22.233 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:29:22.233 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:29:22.233 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:29:22.233 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:22.233 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:29:22.233 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:22.233 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:22.233 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:22.233 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:29:22.233 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:22.233 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:22.799 nvme0n1
00:29:22.799 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:22.799 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:22.799 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:22.799 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:22.799 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:22.799 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:22.799 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:22.799 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:22.799 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:22.799 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:22.799 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:22.799 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:29:22.799 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:22.799 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0
00:29:22.799 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:22.799 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:22.799 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:29:22.799 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:29:22.799 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNmMTlhN2MzMjlkY2NkZjM2OWI0YTIzMTNmOWFjMjDSn53l:
00:29:22.799 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTcwNTk1ODNkZTNmYWMxMTVlNTAwZWRmMDljN2NmZjMyY2I1MmZlZTQ1MzNhNDAyOWFkZmFmNzAzMzE0YzgyZFWoOa4=:
00:29:22.799 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:22.799 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:29:22.799 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNmMTlhN2MzMjlkY2NkZjM2OWI0YTIzMTNmOWFjMjDSn53l:
00:29:22.799 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTcwNTk1ODNkZTNmYWMxMTVlNTAwZWRmMDljN2NmZjMyY2I1MmZlZTQ1MzNhNDAyOWFkZmFmNzAzMzE0YzgyZFWoOa4=: ]]
00:29:22.799 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTcwNTk1ODNkZTNmYWMxMTVlNTAwZWRmMDljN2NmZjMyY2I1MmZlZTQ1MzNhNDAyOWFkZmFmNzAzMzE0YzgyZFWoOa4=:
00:29:22.799 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0
00:29:22.799 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:22.799 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:29:22.799 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:29:22.799 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:29:22.799 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:22.799 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:29:22.799 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:22.799 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:22.799 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:22.799 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:29:22.799 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:22.799 12:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:23.364 nvme0n1
00:29:23.364 12:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:23.364 12:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:23.364 12:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:23.364 12:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:23.364 12:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:23.364 12:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:23.364 12:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:23.364 12:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:23.364 12:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:23.364 12:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:23.364 12:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:23.364 12:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:23.364 12:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1
00:29:23.364 12:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:23.364 12:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:23.364 12:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:29:23.364 12:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:29:23.364 12:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGRlYTU4NjM4ZGFlZGQwYzFlZTIxYjE5YjNlYWY0MDQ5OTJmYmFmMmQ4NWY0NjA5+1RSgg==:
00:29:23.364 12:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==:
00:29:23.364 12:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host
-- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:23.364 12:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:23.364 12:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRlYTU4NjM4ZGFlZGQwYzFlZTIxYjE5YjNlYWY0MDQ5OTJmYmFmMmQ4NWY0NjA5+1RSgg==: 00:29:23.364 12:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==: ]] 00:29:23.364 12:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==: 00:29:23.364 12:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:29:23.364 12:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:23.364 12:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:23.364 12:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:23.364 12:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:23.364 12:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:23.364 12:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:23.364 12:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.364 12:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.364 12:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.364 12:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:23.364 12:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.364 12:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.930 nvme0n1 00:29:23.930 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.930 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:23.930 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:23.930 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.930 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.930 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.930 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:23.930 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:23.930 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.930 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.930 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.930 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:23.931 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:29:23.931 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:23.931 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:23.931 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:23.931 
12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:23.931 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODUyMzM0YmU2MmJkZTI0NmY5N2ExZmE2Y2VkNTBiNzQYmII1: 00:29:23.931 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI: 00:29:23.931 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:23.931 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:23.931 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODUyMzM0YmU2MmJkZTI0NmY5N2ExZmE2Y2VkNTBiNzQYmII1: 00:29:23.931 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI: ]] 00:29:23.931 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI: 00:29:23.931 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:29:23.931 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:23.931 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:23.931 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:23.931 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:23.931 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:23.931 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:23.931 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.931 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:23.931 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.931 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:23.931 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.931 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.498 nvme0n1 00:29:24.498 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.498 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:24.498 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:24.498 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.498 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.498 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.757 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:24.757 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:24.757 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.757 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.757 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.757 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:24.757 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:29:24.757 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:24.757 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:24.757 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:24.757 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:24.757 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzdmZjJiOGZiYTVhOTgzNTk3MjRiMWNlYzBhNGZmNzY5ZTI1YTcyYzQ3ZDY2NTU5itkAIg==: 00:29:24.757 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzJlMTJjZTFhNjg3OGUyMTllYjFjMDNlNDM3NThiYzhnKNvU: 00:29:24.757 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:24.757 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:24.757 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzdmZjJiOGZiYTVhOTgzNTk3MjRiMWNlYzBhNGZmNzY5ZTI1YTcyYzQ3ZDY2NTU5itkAIg==: 00:29:24.757 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzJlMTJjZTFhNjg3OGUyMTllYjFjMDNlNDM3NThiYzhnKNvU: ]] 00:29:24.757 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzJlMTJjZTFhNjg3OGUyMTllYjFjMDNlNDM3NThiYzhnKNvU: 00:29:24.757 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:29:24.757 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:24.757 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:24.757 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:24.757 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:24.757 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:24.757 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:24.757 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.757 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.758 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.758 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:24.758 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.758 12:12:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.325 nvme0n1 00:29:25.325 12:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.325 12:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:25.325 12:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:25.325 12:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.325 12:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.325 12:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.325 12:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:25.325 12:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:25.326 12:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 
-- # xtrace_disable 00:29:25.326 12:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.326 12:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.326 12:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:25.326 12:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:29:25.326 12:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:25.326 12:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:25.326 12:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:25.326 12:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:25.326 12:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc5MGQ1MTg2N2UxZjQyZmNjNmZlNGExYzkwMWE3NjE3YTA5NGM5Yjc0YjUyNWZlODY2YjM1ODdjZTE1OTgxNmTv0X8=: 00:29:25.326 12:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:25.326 12:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:25.326 12:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:25.326 12:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc5MGQ1MTg2N2UxZjQyZmNjNmZlNGExYzkwMWE3NjE3YTA5NGM5Yjc0YjUyNWZlODY2YjM1ODdjZTE1OTgxNmTv0X8=: 00:29:25.326 12:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:25.326 12:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:29:25.326 12:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:25.326 12:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:25.326 12:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:25.326 12:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:25.326 12:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:25.326 12:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:25.326 12:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.326 12:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.326 12:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.326 12:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:25.326 12:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.326 12:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.894 nvme0n1 00:29:25.894 12:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.894 12:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:25.894 12:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:25.894 12:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.894 12:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.894 12:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.894 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:25.894 12:13:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:25.894 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.895 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.895 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.895 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:25.895 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:25.895 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:25.895 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:29:25.895 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:25.895 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:25.895 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:25.895 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:25.895 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNmMTlhN2MzMjlkY2NkZjM2OWI0YTIzMTNmOWFjMjDSn53l: 00:29:25.895 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTcwNTk1ODNkZTNmYWMxMTVlNTAwZWRmMDljN2NmZjMyY2I1MmZlZTQ1MzNhNDAyOWFkZmFmNzAzMzE0YzgyZFWoOa4=: 00:29:25.895 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:25.895 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:25.895 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNmMTlhN2MzMjlkY2NkZjM2OWI0YTIzMTNmOWFjMjDSn53l: 00:29:25.895 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:03:MTcwNTk1ODNkZTNmYWMxMTVlNTAwZWRmMDljN2NmZjMyY2I1MmZlZTQ1MzNhNDAyOWFkZmFmNzAzMzE0YzgyZFWoOa4=: ]] 00:29:25.895 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTcwNTk1ODNkZTNmYWMxMTVlNTAwZWRmMDljN2NmZjMyY2I1MmZlZTQ1MzNhNDAyOWFkZmFmNzAzMzE0YzgyZFWoOa4=: 00:29:25.895 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:29:25.895 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:25.895 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:25.895 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:25.895 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:25.895 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:25.895 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:25.895 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.895 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.895 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.895 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:25.895 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.895 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.154 nvme0n1 00:29:26.154 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.154 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:26.154 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:26.154 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.154 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.154 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.154 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:26.154 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:26.154 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.154 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.154 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.154 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:26.154 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:29:26.154 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:26.154 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:26.154 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:26.154 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:26.154 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGRlYTU4NjM4ZGFlZGQwYzFlZTIxYjE5YjNlYWY0MDQ5OTJmYmFmMmQ4NWY0NjA5+1RSgg==: 00:29:26.154 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==: 00:29:26.154 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:26.154 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:26.154 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRlYTU4NjM4ZGFlZGQwYzFlZTIxYjE5YjNlYWY0MDQ5OTJmYmFmMmQ4NWY0NjA5+1RSgg==: 00:29:26.154 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==: ]] 00:29:26.154 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==: 00:29:26.154 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:29:26.154 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:26.154 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:26.154 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:26.154 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:26.154 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:26.154 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:26.154 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.154 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.154 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.154 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:26.154 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.154 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.413 nvme0n1 00:29:26.413 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.413 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:26.413 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:26.413 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.413 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.413 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.413 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:26.413 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:26.413 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.413 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.413 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.413 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:26.413 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:29:26.413 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:26.413 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:29:26.413 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:26.413 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:26.413 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODUyMzM0YmU2MmJkZTI0NmY5N2ExZmE2Y2VkNTBiNzQYmII1: 00:29:26.413 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI: 00:29:26.413 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:26.413 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:26.413 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODUyMzM0YmU2MmJkZTI0NmY5N2ExZmE2Y2VkNTBiNzQYmII1: 00:29:26.413 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI: ]] 00:29:26.413 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI: 00:29:26.413 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:29:26.413 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:26.413 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:26.413 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:26.413 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:26.413 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:26.413 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:26.413 12:13:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.413 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.413 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.413 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:26.413 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.413 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.672 nvme0n1 00:29:26.672 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.672 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:26.672 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:26.672 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.672 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.672 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.672 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:26.672 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:26.672 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.672 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.672 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.672 12:13:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:26.672 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:29:26.672 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:26.672 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:26.672 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:26.672 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:26.672 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzdmZjJiOGZiYTVhOTgzNTk3MjRiMWNlYzBhNGZmNzY5ZTI1YTcyYzQ3ZDY2NTU5itkAIg==: 00:29:26.672 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzJlMTJjZTFhNjg3OGUyMTllYjFjMDNlNDM3NThiYzhnKNvU: 00:29:26.672 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:26.673 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:26.673 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzdmZjJiOGZiYTVhOTgzNTk3MjRiMWNlYzBhNGZmNzY5ZTI1YTcyYzQ3ZDY2NTU5itkAIg==: 00:29:26.673 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzJlMTJjZTFhNjg3OGUyMTllYjFjMDNlNDM3NThiYzhnKNvU: ]] 00:29:26.673 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzJlMTJjZTFhNjg3OGUyMTllYjFjMDNlNDM3NThiYzhnKNvU: 00:29:26.673 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:29:26.673 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:26.673 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:26.673 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 
00:29:26.673 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:26.673 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:26.673 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:26.673 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.673 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.673 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.673 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:26.673 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.673 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.673 nvme0n1 00:29:26.673 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.673 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:26.673 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.673 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.673 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:26.673 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.932 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:26.932 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:26.932 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.932 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.932 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.932 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:26.932 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:29:26.932 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:26.932 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:26.932 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:26.932 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:26.932 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc5MGQ1MTg2N2UxZjQyZmNjNmZlNGExYzkwMWE3NjE3YTA5NGM5Yjc0YjUyNWZlODY2YjM1ODdjZTE1OTgxNmTv0X8=: 00:29:26.932 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:26.932 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:26.932 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:26.932 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc5MGQ1MTg2N2UxZjQyZmNjNmZlNGExYzkwMWE3NjE3YTA5NGM5Yjc0YjUyNWZlODY2YjM1ODdjZTE1OTgxNmTv0X8=: 00:29:26.932 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:26.932 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:29:26.932 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:26.932 
12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:26.932 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:26.932 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:26.932 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:26.932 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:26.932 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.932 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.932 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.932 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:26.932 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.932 12:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.932 nvme0n1 00:29:26.932 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.932 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:26.932 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:26.932 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.932 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.932 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.932 
12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:26.932 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:26.932 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.932 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.190 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.190 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:27.190 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:27.190 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:29:27.190 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:27.190 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:27.190 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:27.190 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:27.190 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNmMTlhN2MzMjlkY2NkZjM2OWI0YTIzMTNmOWFjMjDSn53l: 00:29:27.190 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTcwNTk1ODNkZTNmYWMxMTVlNTAwZWRmMDljN2NmZjMyY2I1MmZlZTQ1MzNhNDAyOWFkZmFmNzAzMzE0YzgyZFWoOa4=: 00:29:27.190 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:27.190 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:27.190 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNmMTlhN2MzMjlkY2NkZjM2OWI0YTIzMTNmOWFjMjDSn53l: 00:29:27.190 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:03:MTcwNTk1ODNkZTNmYWMxMTVlNTAwZWRmMDljN2NmZjMyY2I1MmZlZTQ1MzNhNDAyOWFkZmFmNzAzMzE0YzgyZFWoOa4=: ]] 00:29:27.190 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTcwNTk1ODNkZTNmYWMxMTVlNTAwZWRmMDljN2NmZjMyY2I1MmZlZTQ1MzNhNDAyOWFkZmFmNzAzMzE0YzgyZFWoOa4=: 00:29:27.190 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:29:27.190 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:27.190 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:27.190 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:27.190 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:27.190 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:27.190 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:27.190 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.190 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.190 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.190 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:27.190 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.190 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.190 nvme0n1 00:29:27.190 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.190 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:27.190 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:27.190 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.190 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.190 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.190 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:27.190 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:27.190 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.190 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.190 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.191 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:27.191 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:29:27.191 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:27.191 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:27.191 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:27.191 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:27.191 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGRlYTU4NjM4ZGFlZGQwYzFlZTIxYjE5YjNlYWY0MDQ5OTJmYmFmMmQ4NWY0NjA5+1RSgg==: 00:29:27.191 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==: 00:29:27.191 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:27.191 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:27.191 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRlYTU4NjM4ZGFlZGQwYzFlZTIxYjE5YjNlYWY0MDQ5OTJmYmFmMmQ4NWY0NjA5+1RSgg==: 00:29:27.191 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==: ]] 00:29:27.191 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==: 00:29:27.191 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:29:27.191 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:27.191 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:27.191 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:27.191 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:27.191 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:27.191 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:27.191 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.191 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.449 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.449 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:27.449 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.449 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.449 nvme0n1 00:29:27.449 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.449 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:27.449 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:27.449 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.449 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.449 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.449 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:27.449 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:27.449 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.449 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.449 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.449 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:27.449 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:29:27.449 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:27.449 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:29:27.449 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:27.449 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:27.449 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODUyMzM0YmU2MmJkZTI0NmY5N2ExZmE2Y2VkNTBiNzQYmII1: 00:29:27.449 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI: 00:29:27.449 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:27.449 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:27.449 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODUyMzM0YmU2MmJkZTI0NmY5N2ExZmE2Y2VkNTBiNzQYmII1: 00:29:27.449 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI: ]] 00:29:27.449 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI: 00:29:27.449 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:29:27.449 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:27.449 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:27.449 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:27.449 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:27.449 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:27.449 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:27.449 12:13:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.449 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.449 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.450 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:27.450 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.450 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.708 nvme0n1 00:29:27.708 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.708 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:27.708 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.708 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:27.708 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.708 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.709 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:27.709 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:27.709 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.709 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.709 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.709 12:13:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:27.709 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:29:27.709 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:27.709 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:27.709 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:27.709 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:27.709 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzdmZjJiOGZiYTVhOTgzNTk3MjRiMWNlYzBhNGZmNzY5ZTI1YTcyYzQ3ZDY2NTU5itkAIg==: 00:29:27.709 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzJlMTJjZTFhNjg3OGUyMTllYjFjMDNlNDM3NThiYzhnKNvU: 00:29:27.709 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:27.709 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:27.709 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzdmZjJiOGZiYTVhOTgzNTk3MjRiMWNlYzBhNGZmNzY5ZTI1YTcyYzQ3ZDY2NTU5itkAIg==: 00:29:27.709 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzJlMTJjZTFhNjg3OGUyMTllYjFjMDNlNDM3NThiYzhnKNvU: ]] 00:29:27.709 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzJlMTJjZTFhNjg3OGUyMTllYjFjMDNlNDM3NThiYzhnKNvU: 00:29:27.709 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:29:27.709 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:27.709 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:27.709 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 
00:29:27.709 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:27.709 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:27.709 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:27.709 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.709 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.709 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.709 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:27.709 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.709 12:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.967 nvme0n1 00:29:27.967 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.967 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:27.967 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:27.968 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.968 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.968 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.968 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:27.968 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:27.968 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.968 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.968 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.968 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:27.968 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:29:27.968 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:27.968 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:27.968 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:27.968 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:27.968 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc5MGQ1MTg2N2UxZjQyZmNjNmZlNGExYzkwMWE3NjE3YTA5NGM5Yjc0YjUyNWZlODY2YjM1ODdjZTE1OTgxNmTv0X8=: 00:29:27.968 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:27.968 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:27.968 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:27.968 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc5MGQ1MTg2N2UxZjQyZmNjNmZlNGExYzkwMWE3NjE3YTA5NGM5Yjc0YjUyNWZlODY2YjM1ODdjZTE1OTgxNmTv0X8=: 00:29:27.968 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:27.968 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:29:27.968 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:27.968 
12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:27.968 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:27.968 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:27.968 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:27.968 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:27.968 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.968 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.968 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.968 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:27.968 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.968 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.227 nvme0n1 00:29:28.227 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.227 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:28.227 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:28.227 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.227 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.227 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.227 
12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:28.227 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:28.227 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.227 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.227 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.227 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:28.227 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:28.227 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:29:28.227 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:28.227 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:28.227 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:28.227 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:28.227 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNmMTlhN2MzMjlkY2NkZjM2OWI0YTIzMTNmOWFjMjDSn53l: 00:29:28.227 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTcwNTk1ODNkZTNmYWMxMTVlNTAwZWRmMDljN2NmZjMyY2I1MmZlZTQ1MzNhNDAyOWFkZmFmNzAzMzE0YzgyZFWoOa4=: 00:29:28.227 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:28.227 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:28.227 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNmMTlhN2MzMjlkY2NkZjM2OWI0YTIzMTNmOWFjMjDSn53l: 00:29:28.227 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:03:MTcwNTk1ODNkZTNmYWMxMTVlNTAwZWRmMDljN2NmZjMyY2I1MmZlZTQ1MzNhNDAyOWFkZmFmNzAzMzE0YzgyZFWoOa4=: ]] 00:29:28.227 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTcwNTk1ODNkZTNmYWMxMTVlNTAwZWRmMDljN2NmZjMyY2I1MmZlZTQ1MzNhNDAyOWFkZmFmNzAzMzE0YzgyZFWoOa4=: 00:29:28.227 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:29:28.227 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:28.227 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:28.227 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:28.227 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:28.227 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:28.227 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:28.227 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.227 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.227 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.227 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:28.227 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.227 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.486 nvme0n1 00:29:28.486 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.486 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:28.486 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:28.486 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.486 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.486 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.486 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:28.486 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:28.486 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.486 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.745 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.745 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:28.745 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:29:28.745 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:28.745 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:28.745 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:28.745 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:28.745 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGRlYTU4NjM4ZGFlZGQwYzFlZTIxYjE5YjNlYWY0MDQ5OTJmYmFmMmQ4NWY0NjA5+1RSgg==: 00:29:28.745 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==: 00:29:28.745 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:28.745 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:28.745 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRlYTU4NjM4ZGFlZGQwYzFlZTIxYjE5YjNlYWY0MDQ5OTJmYmFmMmQ4NWY0NjA5+1RSgg==: 00:29:28.745 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==: ]] 00:29:28.745 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==: 00:29:28.745 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:29:28.745 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:28.745 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:28.745 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:28.745 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:28.745 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:28.745 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:28.745 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.745 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.745 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.745 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:28.745 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.745 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.004 nvme0n1 00:29:29.004 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.004 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:29.004 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:29.005 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.005 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.005 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.005 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:29.005 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:29.005 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.005 12:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.005 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.005 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:29.005 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:29:29.005 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:29.005 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:29:29.005 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:29.005 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:29.005 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODUyMzM0YmU2MmJkZTI0NmY5N2ExZmE2Y2VkNTBiNzQYmII1: 00:29:29.005 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI: 00:29:29.005 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:29.005 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:29.005 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODUyMzM0YmU2MmJkZTI0NmY5N2ExZmE2Y2VkNTBiNzQYmII1: 00:29:29.005 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI: ]] 00:29:29.005 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI: 00:29:29.005 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:29:29.005 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:29.005 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:29.005 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:29.005 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:29.005 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:29.005 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:29.005 12:13:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.005 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.005 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.005 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:29.005 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.005 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.264 nvme0n1 00:29:29.264 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.264 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:29.264 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:29.264 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.264 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.264 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.264 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:29.264 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:29.264 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.264 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.264 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.264 12:13:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:29.264 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:29:29.264 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:29.264 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:29.264 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:29.264 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:29.264 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzdmZjJiOGZiYTVhOTgzNTk3MjRiMWNlYzBhNGZmNzY5ZTI1YTcyYzQ3ZDY2NTU5itkAIg==: 00:29:29.264 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzJlMTJjZTFhNjg3OGUyMTllYjFjMDNlNDM3NThiYzhnKNvU: 00:29:29.264 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:29.264 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:29.264 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzdmZjJiOGZiYTVhOTgzNTk3MjRiMWNlYzBhNGZmNzY5ZTI1YTcyYzQ3ZDY2NTU5itkAIg==: 00:29:29.264 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzJlMTJjZTFhNjg3OGUyMTllYjFjMDNlNDM3NThiYzhnKNvU: ]] 00:29:29.264 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzJlMTJjZTFhNjg3OGUyMTllYjFjMDNlNDM3NThiYzhnKNvU: 00:29:29.264 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:29:29.264 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:29.264 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:29.264 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 
00:29:29.264 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:29.264 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:29.264 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:29.264 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.264 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.264 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.264 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:29.264 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.264 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.523 nvme0n1 00:29:29.523 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.523 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:29.523 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.523 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:29.523 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.523 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.523 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:29.523 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:29.523 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.523 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.523 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.523 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:29.523 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:29:29.523 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:29.523 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:29.523 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:29.523 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:29.523 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc5MGQ1MTg2N2UxZjQyZmNjNmZlNGExYzkwMWE3NjE3YTA5NGM5Yjc0YjUyNWZlODY2YjM1ODdjZTE1OTgxNmTv0X8=: 00:29:29.523 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:29.523 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:29.523 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:29.523 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc5MGQ1MTg2N2UxZjQyZmNjNmZlNGExYzkwMWE3NjE3YTA5NGM5Yjc0YjUyNWZlODY2YjM1ODdjZTE1OTgxNmTv0X8=: 00:29:29.523 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:29.523 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:29:29.523 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:29.523 
12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:29.523 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:29.523 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:29.523 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:29.523 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:29.523 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.523 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.523 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.523 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:29.523 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.523 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.782 nvme0n1 00:29:29.782 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.782 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:29.782 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:29.782 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.782 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.782 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.782 
12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:29.782 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:29.782 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.782 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.041 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.041 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:30.041 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:30.042 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:29:30.042 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:30.042 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:30.042 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:30.042 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:30.042 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNmMTlhN2MzMjlkY2NkZjM2OWI0YTIzMTNmOWFjMjDSn53l: 00:29:30.042 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTcwNTk1ODNkZTNmYWMxMTVlNTAwZWRmMDljN2NmZjMyY2I1MmZlZTQ1MzNhNDAyOWFkZmFmNzAzMzE0YzgyZFWoOa4=: 00:29:30.042 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:30.042 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:30.042 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNmMTlhN2MzMjlkY2NkZjM2OWI0YTIzMTNmOWFjMjDSn53l: 00:29:30.042 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:03:MTcwNTk1ODNkZTNmYWMxMTVlNTAwZWRmMDljN2NmZjMyY2I1MmZlZTQ1MzNhNDAyOWFkZmFmNzAzMzE0YzgyZFWoOa4=: ]] 00:29:30.042 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTcwNTk1ODNkZTNmYWMxMTVlNTAwZWRmMDljN2NmZjMyY2I1MmZlZTQ1MzNhNDAyOWFkZmFmNzAzMzE0YzgyZFWoOa4=: 00:29:30.042 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:29:30.042 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:30.042 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:30.042 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:30.042 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:30.042 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:30.042 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:30.042 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.042 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.042 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.042 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:30.042 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.042 12:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.301 nvme0n1 00:29:30.301 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.301 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:30.301 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:30.301 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.301 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.301 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.301 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:30.301 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:30.301 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.301 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.301 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.301 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:30.301 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:29:30.301 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:30.301 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:30.301 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:30.301 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:30.301 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGRlYTU4NjM4ZGFlZGQwYzFlZTIxYjE5YjNlYWY0MDQ5OTJmYmFmMmQ4NWY0NjA5+1RSgg==: 00:29:30.301 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==: 00:29:30.301 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:30.301 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:30.301 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRlYTU4NjM4ZGFlZGQwYzFlZTIxYjE5YjNlYWY0MDQ5OTJmYmFmMmQ4NWY0NjA5+1RSgg==: 00:29:30.301 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==: ]] 00:29:30.301 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==: 00:29:30.301 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:29:30.301 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:30.301 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:30.301 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:30.301 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:30.301 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:30.301 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:30.301 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.301 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.301 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.301 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:30.301 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.301 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.870 nvme0n1 00:29:30.870 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.870 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:30.870 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:30.870 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.870 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.870 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.870 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:30.870 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:30.870 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.870 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.870 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.870 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:30.870 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:29:30.870 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:30.870 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:29:30.870 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:30.870 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:30.870 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODUyMzM0YmU2MmJkZTI0NmY5N2ExZmE2Y2VkNTBiNzQYmII1: 00:29:30.870 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI: 00:29:30.870 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:30.870 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:30.870 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODUyMzM0YmU2MmJkZTI0NmY5N2ExZmE2Y2VkNTBiNzQYmII1: 00:29:30.870 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI: ]] 00:29:30.870 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI: 00:29:30.870 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:29:30.870 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:30.870 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:30.870 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:30.870 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:30.870 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:30.870 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:30.870 12:13:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.870 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.870 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.870 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:30.870 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.870 12:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.128 nvme0n1 00:29:31.128 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.128 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:31.128 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:31.128 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.128 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.128 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.128 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:31.128 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:31.128 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.128 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.128 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.128 12:13:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:31.128 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:29:31.128 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:31.128 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:31.128 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:31.128 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:31.128 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzdmZjJiOGZiYTVhOTgzNTk3MjRiMWNlYzBhNGZmNzY5ZTI1YTcyYzQ3ZDY2NTU5itkAIg==: 00:29:31.128 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzJlMTJjZTFhNjg3OGUyMTllYjFjMDNlNDM3NThiYzhnKNvU: 00:29:31.128 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:31.128 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:31.128 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzdmZjJiOGZiYTVhOTgzNTk3MjRiMWNlYzBhNGZmNzY5ZTI1YTcyYzQ3ZDY2NTU5itkAIg==: 00:29:31.128 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzJlMTJjZTFhNjg3OGUyMTllYjFjMDNlNDM3NThiYzhnKNvU: ]] 00:29:31.128 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzJlMTJjZTFhNjg3OGUyMTllYjFjMDNlNDM3NThiYzhnKNvU: 00:29:31.128 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:29:31.128 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:31.128 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:31.128 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 
00:29:31.128 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:31.128 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:31.128 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:31.128 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.128 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.386 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.386 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:31.386 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.386 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.643 nvme0n1 00:29:31.643 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.643 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:31.643 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:31.643 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.643 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.643 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.643 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:31.643 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:31.643 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.643 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.643 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.643 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:31.643 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:29:31.643 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:31.643 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:31.643 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:31.643 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:31.643 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc5MGQ1MTg2N2UxZjQyZmNjNmZlNGExYzkwMWE3NjE3YTA5NGM5Yjc0YjUyNWZlODY2YjM1ODdjZTE1OTgxNmTv0X8=: 00:29:31.643 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:31.643 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:31.643 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:31.643 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc5MGQ1MTg2N2UxZjQyZmNjNmZlNGExYzkwMWE3NjE3YTA5NGM5Yjc0YjUyNWZlODY2YjM1ODdjZTE1OTgxNmTv0X8=: 00:29:31.643 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:31.643 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:29:31.643 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:31.643 
12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:31.643 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:31.643 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:31.643 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:31.643 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:31.643 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.643 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.643 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.643 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:31.643 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.643 12:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.208 nvme0n1 00:29:32.208 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.208 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:32.208 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:32.208 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.208 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.208 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.208 
12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.208 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:32.208 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.208 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.208 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.208 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:32.208 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:32.208 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:29:32.208 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:32.208 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:32.208 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:32.208 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:32.208 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNmMTlhN2MzMjlkY2NkZjM2OWI0YTIzMTNmOWFjMjDSn53l: 00:29:32.208 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTcwNTk1ODNkZTNmYWMxMTVlNTAwZWRmMDljN2NmZjMyY2I1MmZlZTQ1MzNhNDAyOWFkZmFmNzAzMzE0YzgyZFWoOa4=: 00:29:32.208 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:32.208 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:32.208 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNmMTlhN2MzMjlkY2NkZjM2OWI0YTIzMTNmOWFjMjDSn53l: 00:29:32.208 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:03:MTcwNTk1ODNkZTNmYWMxMTVlNTAwZWRmMDljN2NmZjMyY2I1MmZlZTQ1MzNhNDAyOWFkZmFmNzAzMzE0YzgyZFWoOa4=: ]] 00:29:32.208 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTcwNTk1ODNkZTNmYWMxMTVlNTAwZWRmMDljN2NmZjMyY2I1MmZlZTQ1MzNhNDAyOWFkZmFmNzAzMzE0YzgyZFWoOa4=: 00:29:32.208 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:29:32.208 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:32.208 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:32.208 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:32.208 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:32.208 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:32.208 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:32.208 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.208 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.208 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.208 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:32.208 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.208 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.774 nvme0n1 00:29:32.774 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.774 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:32.774 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:32.774 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.774 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.774 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.774 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.774 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:32.774 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.774 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.774 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.774 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:32.774 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:29:32.774 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:32.774 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:32.774 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:32.774 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:32.774 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGRlYTU4NjM4ZGFlZGQwYzFlZTIxYjE5YjNlYWY0MDQ5OTJmYmFmMmQ4NWY0NjA5+1RSgg==: 00:29:32.774 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==: 00:29:32.774 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:32.774 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:32.774 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRlYTU4NjM4ZGFlZGQwYzFlZTIxYjE5YjNlYWY0MDQ5OTJmYmFmMmQ4NWY0NjA5+1RSgg==: 00:29:32.775 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==: ]] 00:29:32.775 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==: 00:29:32.775 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:29:32.775 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:32.775 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:32.775 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:32.775 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:32.775 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:32.775 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:32.775 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.775 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.775 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.775 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:32.775 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.775 12:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.340 nvme0n1 00:29:33.340 12:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.340 12:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:33.340 12:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.340 12:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:33.340 12:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.340 12:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.340 12:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:33.340 12:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:33.340 12:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.340 12:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.340 12:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.340 12:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:33.340 12:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:29:33.340 12:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:33.340 12:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:29:33.340 12:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:33.340 12:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:33.340 12:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODUyMzM0YmU2MmJkZTI0NmY5N2ExZmE2Y2VkNTBiNzQYmII1: 00:29:33.340 12:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI: 00:29:33.340 12:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:33.341 12:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:33.341 12:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODUyMzM0YmU2MmJkZTI0NmY5N2ExZmE2Y2VkNTBiNzQYmII1: 00:29:33.341 12:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI: ]] 00:29:33.341 12:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI: 00:29:33.341 12:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:29:33.341 12:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:33.341 12:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:33.341 12:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:33.341 12:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:33.341 12:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:33.341 12:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:33.341 12:13:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.341 12:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.341 12:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.341 12:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:33.341 12:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.341 12:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.905 nvme0n1 00:29:33.905 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.905 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:33.905 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:33.905 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.905 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.163 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.163 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:34.163 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:34.163 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.163 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.163 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.163 12:13:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:34.163 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:29:34.163 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:34.163 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:34.163 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:34.163 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:34.163 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzdmZjJiOGZiYTVhOTgzNTk3MjRiMWNlYzBhNGZmNzY5ZTI1YTcyYzQ3ZDY2NTU5itkAIg==: 00:29:34.163 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzJlMTJjZTFhNjg3OGUyMTllYjFjMDNlNDM3NThiYzhnKNvU: 00:29:34.163 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:34.163 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:34.163 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzdmZjJiOGZiYTVhOTgzNTk3MjRiMWNlYzBhNGZmNzY5ZTI1YTcyYzQ3ZDY2NTU5itkAIg==: 00:29:34.163 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzJlMTJjZTFhNjg3OGUyMTllYjFjMDNlNDM3NThiYzhnKNvU: ]] 00:29:34.163 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzJlMTJjZTFhNjg3OGUyMTllYjFjMDNlNDM3NThiYzhnKNvU: 00:29:34.163 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:29:34.163 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:34.163 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:34.163 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 
00:29:34.163 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:34.163 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:34.163 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:34.163 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.163 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.163 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.163 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:34.163 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.163 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.729 nvme0n1 00:29:34.729 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.729 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:34.729 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:34.729 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.729 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.729 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.729 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:34.729 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:34.729 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.729 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.729 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.729 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:34.729 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:29:34.729 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:34.729 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:34.729 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:34.729 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:34.729 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc5MGQ1MTg2N2UxZjQyZmNjNmZlNGExYzkwMWE3NjE3YTA5NGM5Yjc0YjUyNWZlODY2YjM1ODdjZTE1OTgxNmTv0X8=: 00:29:34.729 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:34.729 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:34.729 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:34.729 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc5MGQ1MTg2N2UxZjQyZmNjNmZlNGExYzkwMWE3NjE3YTA5NGM5Yjc0YjUyNWZlODY2YjM1ODdjZTE1OTgxNmTv0X8=: 00:29:34.729 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:34.729 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:29:34.729 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:34.729 
12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:34.729 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:34.729 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:34.729 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:34.729 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:34.729 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.729 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.729 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.729 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:34.729 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.729 12:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.294 nvme0n1 00:29:35.294 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.294 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:35.294 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:35.294 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.294 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.294 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.294 
12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:35.294 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:35.294 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.294 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.294 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.294 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:35.294 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:35.294 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:35.294 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:29:35.294 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:35.294 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:35.294 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:35.294 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:35.294 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNmMTlhN2MzMjlkY2NkZjM2OWI0YTIzMTNmOWFjMjDSn53l: 00:29:35.294 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTcwNTk1ODNkZTNmYWMxMTVlNTAwZWRmMDljN2NmZjMyY2I1MmZlZTQ1MzNhNDAyOWFkZmFmNzAzMzE0YzgyZFWoOa4=: 00:29:35.294 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:35.294 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:35.294 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjNmMTlhN2MzMjlkY2NkZjM2OWI0YTIzMTNmOWFjMjDSn53l: 00:29:35.294 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTcwNTk1ODNkZTNmYWMxMTVlNTAwZWRmMDljN2NmZjMyY2I1MmZlZTQ1MzNhNDAyOWFkZmFmNzAzMzE0YzgyZFWoOa4=: ]] 00:29:35.294 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTcwNTk1ODNkZTNmYWMxMTVlNTAwZWRmMDljN2NmZjMyY2I1MmZlZTQ1MzNhNDAyOWFkZmFmNzAzMzE0YzgyZFWoOa4=: 00:29:35.294 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:29:35.294 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:35.294 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:35.294 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:35.294 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:35.295 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:35.295 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:35.295 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.295 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.295 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.295 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:35.295 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.295 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@10 -- # set +x 00:29:35.553 nvme0n1 00:29:35.553 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.553 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:35.553 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:35.553 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.553 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.553 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.553 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:35.553 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:35.553 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.553 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.553 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.553 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:35.553 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:29:35.553 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:35.553 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:35.553 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:35.553 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:35.553 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZGRlYTU4NjM4ZGFlZGQwYzFlZTIxYjE5YjNlYWY0MDQ5OTJmYmFmMmQ4NWY0NjA5+1RSgg==: 00:29:35.553 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==: 00:29:35.553 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:35.553 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:35.553 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRlYTU4NjM4ZGFlZGQwYzFlZTIxYjE5YjNlYWY0MDQ5OTJmYmFmMmQ4NWY0NjA5+1RSgg==: 00:29:35.553 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==: ]] 00:29:35.553 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==: 00:29:35.553 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:29:35.553 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:35.553 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:35.553 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:35.553 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:35.553 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:35.553 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:35.553 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.553 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.553 
12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.553 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:35.553 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.553 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.811 nvme0n1 00:29:35.811 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.811 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:35.811 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:35.811 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.811 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.811 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.811 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:35.811 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:35.811 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.811 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.811 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.811 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:35.811 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 
00:29:35.811 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:35.811 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:35.811 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:35.811 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:35.811 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODUyMzM0YmU2MmJkZTI0NmY5N2ExZmE2Y2VkNTBiNzQYmII1: 00:29:35.811 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI: 00:29:35.811 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:35.811 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:35.811 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODUyMzM0YmU2MmJkZTI0NmY5N2ExZmE2Y2VkNTBiNzQYmII1: 00:29:35.811 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI: ]] 00:29:35.811 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI: 00:29:35.811 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:29:35.811 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:35.811 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:35.811 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:35.811 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:35.811 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:35.811 12:13:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:35.811 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.811 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.811 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.811 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:35.811 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.811 12:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.067 nvme0n1 00:29:36.067 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.067 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:36.067 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:36.067 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.067 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.067 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.067 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:36.067 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:36.067 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.067 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:36.067 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.067 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:36.067 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:29:36.067 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:36.067 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:36.067 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:36.067 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:36.067 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzdmZjJiOGZiYTVhOTgzNTk3MjRiMWNlYzBhNGZmNzY5ZTI1YTcyYzQ3ZDY2NTU5itkAIg==: 00:29:36.067 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzJlMTJjZTFhNjg3OGUyMTllYjFjMDNlNDM3NThiYzhnKNvU: 00:29:36.067 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:36.067 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:36.067 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzdmZjJiOGZiYTVhOTgzNTk3MjRiMWNlYzBhNGZmNzY5ZTI1YTcyYzQ3ZDY2NTU5itkAIg==: 00:29:36.067 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzJlMTJjZTFhNjg3OGUyMTllYjFjMDNlNDM3NThiYzhnKNvU: ]] 00:29:36.067 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzJlMTJjZTFhNjg3OGUyMTllYjFjMDNlNDM3NThiYzhnKNvU: 00:29:36.067 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:29:36.067 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:36.067 12:13:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:36.067 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:36.067 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:36.067 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:36.067 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:36.068 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.068 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.068 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.068 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:36.068 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.068 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.068 nvme0n1 00:29:36.326 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.326 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:36.326 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:36.326 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.326 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.326 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:29:36.326 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:36.326 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:36.326 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.326 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.326 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.326 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:36.326 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:29:36.326 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:36.326 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:36.326 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:36.326 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:36.326 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc5MGQ1MTg2N2UxZjQyZmNjNmZlNGExYzkwMWE3NjE3YTA5NGM5Yjc0YjUyNWZlODY2YjM1ODdjZTE1OTgxNmTv0X8=: 00:29:36.326 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:36.326 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:36.326 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:36.326 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc5MGQ1MTg2N2UxZjQyZmNjNmZlNGExYzkwMWE3NjE3YTA5NGM5Yjc0YjUyNWZlODY2YjM1ODdjZTE1OTgxNmTv0X8=: 00:29:36.326 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:36.326 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 4 00:29:36.326 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:36.326 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:36.326 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:36.326 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:36.326 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:36.326 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:36.326 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.326 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.326 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.326 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:36.326 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.326 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.326 nvme0n1 00:29:36.326 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.326 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:36.326 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:36.326 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.326 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@10 -- # set +x 00:29:36.326 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.585 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:36.585 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:36.585 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.585 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.585 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.585 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:36.585 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:36.585 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:29:36.585 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:36.585 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:36.585 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:36.585 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:36.585 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNmMTlhN2MzMjlkY2NkZjM2OWI0YTIzMTNmOWFjMjDSn53l: 00:29:36.585 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTcwNTk1ODNkZTNmYWMxMTVlNTAwZWRmMDljN2NmZjMyY2I1MmZlZTQ1MzNhNDAyOWFkZmFmNzAzMzE0YzgyZFWoOa4=: 00:29:36.585 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:36.585 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:36.585 12:13:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNmMTlhN2MzMjlkY2NkZjM2OWI0YTIzMTNmOWFjMjDSn53l: 00:29:36.585 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTcwNTk1ODNkZTNmYWMxMTVlNTAwZWRmMDljN2NmZjMyY2I1MmZlZTQ1MzNhNDAyOWFkZmFmNzAzMzE0YzgyZFWoOa4=: ]] 00:29:36.585 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTcwNTk1ODNkZTNmYWMxMTVlNTAwZWRmMDljN2NmZjMyY2I1MmZlZTQ1MzNhNDAyOWFkZmFmNzAzMzE0YzgyZFWoOa4=: 00:29:36.585 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:29:36.585 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:36.585 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:36.585 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:36.585 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:36.585 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:36.585 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:36.585 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.585 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.585 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.585 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:36.585 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:36.585 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.585 nvme0n1 00:29:36.585 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.585 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:36.585 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:36.585 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.585 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.585 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.585 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:36.585 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:36.585 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.585 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.844 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.844 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:36.844 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:29:36.844 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:36.844 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:36.844 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:36.844 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:36.844 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@45 -- # key=DHHC-1:00:ZGRlYTU4NjM4ZGFlZGQwYzFlZTIxYjE5YjNlYWY0MDQ5OTJmYmFmMmQ4NWY0NjA5+1RSgg==: 00:29:36.844 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==: 00:29:36.844 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:36.844 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:36.844 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRlYTU4NjM4ZGFlZGQwYzFlZTIxYjE5YjNlYWY0MDQ5OTJmYmFmMmQ4NWY0NjA5+1RSgg==: 00:29:36.844 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==: ]] 00:29:36.844 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==: 00:29:36.844 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:29:36.844 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:36.844 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:36.844 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:36.844 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:36.844 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:36.844 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:36.844 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.844 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:29:36.844 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.844 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:36.844 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.844 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.844 nvme0n1 00:29:36.844 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.844 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:36.844 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:36.844 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.844 12:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.844 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.844 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:36.844 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:36.844 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.844 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.102 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.102 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:37.102 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 
ffdhe3072 2 00:29:37.102 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:37.102 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:37.102 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:37.102 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:37.102 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODUyMzM0YmU2MmJkZTI0NmY5N2ExZmE2Y2VkNTBiNzQYmII1: 00:29:37.103 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI: 00:29:37.103 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:37.103 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:37.103 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODUyMzM0YmU2MmJkZTI0NmY5N2ExZmE2Y2VkNTBiNzQYmII1: 00:29:37.103 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI: ]] 00:29:37.103 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI: 00:29:37.103 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:29:37.103 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:37.103 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:37.103 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:37.103 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:37.103 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:37.103 
12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:37.103 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.103 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.103 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.103 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:37.103 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.103 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.103 nvme0n1 00:29:37.103 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.103 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:37.103 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:37.103 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.103 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.103 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.103 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:37.103 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:37.103 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.103 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzdmZjJiOGZiYTVhOTgzNTk3MjRiMWNlYzBhNGZmNzY5ZTI1YTcyYzQ3ZDY2NTU5itkAIg==: 00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzJlMTJjZTFhNjg3OGUyMTllYjFjMDNlNDM3NThiYzhnKNvU: 00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzdmZjJiOGZiYTVhOTgzNTk3MjRiMWNlYzBhNGZmNzY5ZTI1YTcyYzQ3ZDY2NTU5itkAIg==: 00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzJlMTJjZTFhNjg3OGUyMTllYjFjMDNlNDM3NThiYzhnKNvU: ]] 00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzJlMTJjZTFhNjg3OGUyMTllYjFjMDNlNDM3NThiYzhnKNvU: 00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:37.362 12:13:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:37.362 nvme0n1
00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4
00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc5MGQ1MTg2N2UxZjQyZmNjNmZlNGExYzkwMWE3NjE3YTA5NGM5Yjc0YjUyNWZlODY2YjM1ODdjZTE1OTgxNmTv0X8=:
00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:29:37.362 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc5MGQ1MTg2N2UxZjQyZmNjNmZlNGExYzkwMWE3NjE3YTA5NGM5Yjc0YjUyNWZlODY2YjM1ODdjZTE1OTgxNmTv0X8=:
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:37.622 nvme0n1
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNmMTlhN2MzMjlkY2NkZjM2OWI0YTIzMTNmOWFjMjDSn53l:
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTcwNTk1ODNkZTNmYWMxMTVlNTAwZWRmMDljN2NmZjMyY2I1MmZlZTQ1MzNhNDAyOWFkZmFmNzAzMzE0YzgyZFWoOa4=:
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNmMTlhN2MzMjlkY2NkZjM2OWI0YTIzMTNmOWFjMjDSn53l:
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTcwNTk1ODNkZTNmYWMxMTVlNTAwZWRmMDljN2NmZjMyY2I1MmZlZTQ1MzNhNDAyOWFkZmFmNzAzMzE0YzgyZFWoOa4=: ]]
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTcwNTk1ODNkZTNmYWMxMTVlNTAwZWRmMDljN2NmZjMyY2I1MmZlZTQ1MzNhNDAyOWFkZmFmNzAzMzE0YzgyZFWoOa4=:
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:37.622 12:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:37.881 nvme0n1
00:29:37.881 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:37.881 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:37.881 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:37.881 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:37.881 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:37.881 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:38.141 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:38.141 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:38.141 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:38.141 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:38.141 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:38.141 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:38.141 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1
00:29:38.141 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:38.141 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:38.141 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:29:38.141 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:29:38.141 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGRlYTU4NjM4ZGFlZGQwYzFlZTIxYjE5YjNlYWY0MDQ5OTJmYmFmMmQ4NWY0NjA5+1RSgg==:
00:29:38.141 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==:
00:29:38.141 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:38.141 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:29:38.142 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRlYTU4NjM4ZGFlZGQwYzFlZTIxYjE5YjNlYWY0MDQ5OTJmYmFmMmQ4NWY0NjA5+1RSgg==:
00:29:38.142 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==: ]]
00:29:38.142 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==:
00:29:38.142 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1
00:29:38.142 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:38.142 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:38.142 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:29:38.142 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:29:38.142 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:38.142 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:29:38.142 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:38.142 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:38.142 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:38.142 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:29:38.142 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:38.142 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:38.401 nvme0n1
00:29:38.402 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:38.402 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:38.402 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:38.402 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:38.402 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:38.402 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:38.402 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:38.402 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:38.402 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:38.402 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:38.402 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:38.402 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:38.402 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2
00:29:38.402 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:38.402 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:38.402 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:29:38.402 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:29:38.402 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODUyMzM0YmU2MmJkZTI0NmY5N2ExZmE2Y2VkNTBiNzQYmII1:
00:29:38.402 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI:
00:29:38.402 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:38.402 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:29:38.402 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODUyMzM0YmU2MmJkZTI0NmY5N2ExZmE2Y2VkNTBiNzQYmII1:
00:29:38.402 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI: ]]
00:29:38.402 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI:
00:29:38.402 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2
00:29:38.402 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:38.402 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:38.402 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:29:38.402 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:29:38.402 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:38.402 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:29:38.402 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:38.402 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:38.402 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:38.402 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:29:38.402 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:38.402 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:38.661 nvme0n1
00:29:38.661 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:38.661 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:38.661 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:38.661 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:38.661 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:38.661 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:38.661 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:38.661 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:38.661 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:38.661 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:38.661 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:38.661 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:38.661 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3
00:29:38.661 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:38.661 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:38.661 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:29:38.661 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:29:38.661 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzdmZjJiOGZiYTVhOTgzNTk3MjRiMWNlYzBhNGZmNzY5ZTI1YTcyYzQ3ZDY2NTU5itkAIg==:
00:29:38.661 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzJlMTJjZTFhNjg3OGUyMTllYjFjMDNlNDM3NThiYzhnKNvU:
00:29:38.661 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:38.661 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:29:38.661 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzdmZjJiOGZiYTVhOTgzNTk3MjRiMWNlYzBhNGZmNzY5ZTI1YTcyYzQ3ZDY2NTU5itkAIg==:
00:29:38.661 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzJlMTJjZTFhNjg3OGUyMTllYjFjMDNlNDM3NThiYzhnKNvU: ]]
00:29:38.661 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzJlMTJjZTFhNjg3OGUyMTllYjFjMDNlNDM3NThiYzhnKNvU:
00:29:38.661 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3
00:29:38.661 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:38.661 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:38.661 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:29:38.661 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:29:38.661 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:38.661 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:29:38.661 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:38.661 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:38.661 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:38.661 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:29:38.661 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:38.661 12:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:38.920 nvme0n1
00:29:38.920 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:38.920 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:38.920 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:38.920 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:38.920 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:38.920 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:38.920 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:38.920 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:38.920 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:38.920 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:38.920 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:38.920 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:38.920 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4
00:29:38.920 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:38.920 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:38.920 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:29:38.920 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:29:38.920 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc5MGQ1MTg2N2UxZjQyZmNjNmZlNGExYzkwMWE3NjE3YTA5NGM5Yjc0YjUyNWZlODY2YjM1ODdjZTE1OTgxNmTv0X8=:
00:29:38.920 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:29:38.920 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:38.920 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:29:38.920 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc5MGQ1MTg2N2UxZjQyZmNjNmZlNGExYzkwMWE3NjE3YTA5NGM5Yjc0YjUyNWZlODY2YjM1ODdjZTE1OTgxNmTv0X8=:
00:29:38.920 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:29:38.920 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4
00:29:38.920 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:38.920 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:38.920 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:29:38.920 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:29:38.920 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:38.920 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:29:38.920 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:38.920 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:38.920 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:38.920 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:29:38.920 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:38.920 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:39.179 nvme0n1
00:29:39.179 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:39.179 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:39.179 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:39.179 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:39.179 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:39.179 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:39.179 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:39.179 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:39.179 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:39.437 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:39.437 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:39.438 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:29:39.438 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:39.438 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0
00:29:39.438 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:39.438 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:39.438 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:29:39.438 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:29:39.438 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNmMTlhN2MzMjlkY2NkZjM2OWI0YTIzMTNmOWFjMjDSn53l:
00:29:39.438 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTcwNTk1ODNkZTNmYWMxMTVlNTAwZWRmMDljN2NmZjMyY2I1MmZlZTQ1MzNhNDAyOWFkZmFmNzAzMzE0YzgyZFWoOa4=:
00:29:39.438 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:39.438 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:29:39.438 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNmMTlhN2MzMjlkY2NkZjM2OWI0YTIzMTNmOWFjMjDSn53l:
00:29:39.438 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTcwNTk1ODNkZTNmYWMxMTVlNTAwZWRmMDljN2NmZjMyY2I1MmZlZTQ1MzNhNDAyOWFkZmFmNzAzMzE0YzgyZFWoOa4=: ]]
00:29:39.438 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTcwNTk1ODNkZTNmYWMxMTVlNTAwZWRmMDljN2NmZjMyY2I1MmZlZTQ1MzNhNDAyOWFkZmFmNzAzMzE0YzgyZFWoOa4=:
00:29:39.438 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0
00:29:39.438 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:39.438 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:39.438 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:29:39.438 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:29:39.438 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:39.438 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:29:39.438 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:39.438 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:39.438 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:39.438 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:29:39.438 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:39.438 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:39.696 nvme0n1
00:29:39.696 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:39.696 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:39.696 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:39.696 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:39.696 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:39.697 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:39.697 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:39.697 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:39.697 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:39.697 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:39.697 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:39.697 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:39.697 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1
00:29:39.697 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:39.697 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:39.697 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:29:39.697 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:29:39.697 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGRlYTU4NjM4ZGFlZGQwYzFlZTIxYjE5YjNlYWY0MDQ5OTJmYmFmMmQ4NWY0NjA5+1RSgg==:
00:29:39.697 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==:
00:29:39.697 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:39.697 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:29:39.697 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRlYTU4NjM4ZGFlZGQwYzFlZTIxYjE5YjNlYWY0MDQ5OTJmYmFmMmQ4NWY0NjA5+1RSgg==:
00:29:39.697 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==: ]]
00:29:39.697 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==:
00:29:39.697 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1
00:29:39.697 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:39.697 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:39.697 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:29:39.697 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:29:39.697 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:39.697 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:29:39.697 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:39.697 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:39.697 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:39.697 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:29:39.697 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:39.697 12:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:40.265 nvme0n1
00:29:40.265 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:40.265 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:40.265 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:40.265 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:40.265 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:40.265 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:40.265 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:40.265 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:40.265 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:40.265 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:40.265 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:40.265 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:40.265 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512
ffdhe6144 2 00:29:40.265 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:40.265 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:40.265 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:40.265 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:40.265 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODUyMzM0YmU2MmJkZTI0NmY5N2ExZmE2Y2VkNTBiNzQYmII1: 00:29:40.265 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI: 00:29:40.265 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:40.265 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:40.265 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODUyMzM0YmU2MmJkZTI0NmY5N2ExZmE2Y2VkNTBiNzQYmII1: 00:29:40.265 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI: ]] 00:29:40.265 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI: 00:29:40.265 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:29:40.265 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:40.265 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:40.265 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:40.265 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:40.265 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:40.265 
12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:40.265 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.265 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.265 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.265 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:40.265 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.265 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.523 nvme0n1 00:29:40.523 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.523 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:40.523 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:40.523 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.523 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.523 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.523 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:40.523 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:40.782 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.782 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:40.782 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.782 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:40.782 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:29:40.782 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:40.782 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:40.782 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:40.782 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:40.782 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzdmZjJiOGZiYTVhOTgzNTk3MjRiMWNlYzBhNGZmNzY5ZTI1YTcyYzQ3ZDY2NTU5itkAIg==: 00:29:40.782 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzJlMTJjZTFhNjg3OGUyMTllYjFjMDNlNDM3NThiYzhnKNvU: 00:29:40.782 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:40.782 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:40.782 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzdmZjJiOGZiYTVhOTgzNTk3MjRiMWNlYzBhNGZmNzY5ZTI1YTcyYzQ3ZDY2NTU5itkAIg==: 00:29:40.782 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzJlMTJjZTFhNjg3OGUyMTllYjFjMDNlNDM3NThiYzhnKNvU: ]] 00:29:40.782 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzJlMTJjZTFhNjg3OGUyMTllYjFjMDNlNDM3NThiYzhnKNvU: 00:29:40.782 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:29:40.782 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:40.782 12:13:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:40.782 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:40.782 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:40.782 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:40.782 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:40.782 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.782 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.782 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.782 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:40.782 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.782 12:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.041 nvme0n1 00:29:41.041 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.041 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:41.041 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:41.041 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.041 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.041 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:29:41.041 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:41.041 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:41.041 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.041 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.041 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.041 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:41.041 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:29:41.041 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:41.041 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:41.041 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:41.041 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:41.041 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc5MGQ1MTg2N2UxZjQyZmNjNmZlNGExYzkwMWE3NjE3YTA5NGM5Yjc0YjUyNWZlODY2YjM1ODdjZTE1OTgxNmTv0X8=: 00:29:41.041 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:41.041 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:41.041 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:41.041 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc5MGQ1MTg2N2UxZjQyZmNjNmZlNGExYzkwMWE3NjE3YTA5NGM5Yjc0YjUyNWZlODY2YjM1ODdjZTE1OTgxNmTv0X8=: 00:29:41.041 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:41.041 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe6144 4 00:29:41.041 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:41.041 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:41.041 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:41.041 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:41.041 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:41.041 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:41.041 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.041 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.041 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.041 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:41.041 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.041 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.612 nvme0n1 00:29:41.612 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.612 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:41.612 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:41.612 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.612 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@10 -- # set +x 00:29:41.612 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.612 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:41.612 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:41.612 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.612 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.612 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.612 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:41.612 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:41.612 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:29:41.612 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:41.612 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:41.612 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:41.613 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:41.613 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNmMTlhN2MzMjlkY2NkZjM2OWI0YTIzMTNmOWFjMjDSn53l: 00:29:41.613 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTcwNTk1ODNkZTNmYWMxMTVlNTAwZWRmMDljN2NmZjMyY2I1MmZlZTQ1MzNhNDAyOWFkZmFmNzAzMzE0YzgyZFWoOa4=: 00:29:41.613 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:41.613 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:41.613 12:13:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNmMTlhN2MzMjlkY2NkZjM2OWI0YTIzMTNmOWFjMjDSn53l: 00:29:41.613 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTcwNTk1ODNkZTNmYWMxMTVlNTAwZWRmMDljN2NmZjMyY2I1MmZlZTQ1MzNhNDAyOWFkZmFmNzAzMzE0YzgyZFWoOa4=: ]] 00:29:41.613 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTcwNTk1ODNkZTNmYWMxMTVlNTAwZWRmMDljN2NmZjMyY2I1MmZlZTQ1MzNhNDAyOWFkZmFmNzAzMzE0YzgyZFWoOa4=: 00:29:41.613 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:29:41.613 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:41.613 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:41.613 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:41.613 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:41.613 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:41.613 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:41.613 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.613 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.613 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.613 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:41.613 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:41.613 12:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.228 nvme0n1 00:29:42.228 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.228 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:42.228 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:42.228 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.228 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.228 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.228 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:42.228 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:42.228 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.228 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.228 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.228 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:42.228 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:29:42.228 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:42.228 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:42.228 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:42.228 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:42.228 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@45 -- # key=DHHC-1:00:ZGRlYTU4NjM4ZGFlZGQwYzFlZTIxYjE5YjNlYWY0MDQ5OTJmYmFmMmQ4NWY0NjA5+1RSgg==: 00:29:42.228 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==: 00:29:42.228 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:42.228 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:42.228 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRlYTU4NjM4ZGFlZGQwYzFlZTIxYjE5YjNlYWY0MDQ5OTJmYmFmMmQ4NWY0NjA5+1RSgg==: 00:29:42.228 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==: ]] 00:29:42.228 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==: 00:29:42.228 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:29:42.228 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:42.228 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:42.228 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:42.228 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:42.228 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:42.228 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:42.228 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.228 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:29:42.228 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.228 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:42.228 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.228 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.893 nvme0n1 00:29:42.893 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.893 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:42.893 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:42.893 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.893 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.894 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.894 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:42.894 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:42.894 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.894 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.894 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.894 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:42.894 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 
ffdhe8192 2 00:29:42.894 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:42.894 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:42.894 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:42.894 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:42.894 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODUyMzM0YmU2MmJkZTI0NmY5N2ExZmE2Y2VkNTBiNzQYmII1: 00:29:42.894 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI: 00:29:42.894 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:42.894 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:42.894 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODUyMzM0YmU2MmJkZTI0NmY5N2ExZmE2Y2VkNTBiNzQYmII1: 00:29:42.894 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI: ]] 00:29:42.894 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI: 00:29:42.894 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:29:42.894 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:42.894 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:42.894 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:42.894 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:42.894 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:42.894 
12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:42.894 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.894 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.894 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.894 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:42.894 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.894 12:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.462 nvme0n1 00:29:43.462 12:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.462 12:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:43.462 12:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:43.462 12:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.462 12:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.462 12:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.462 12:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:43.462 12:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:43.462 12:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.462 12:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:43.462 12:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.462 12:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:43.462 12:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:29:43.462 12:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:43.462 12:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:43.462 12:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:43.462 12:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:43.462 12:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzdmZjJiOGZiYTVhOTgzNTk3MjRiMWNlYzBhNGZmNzY5ZTI1YTcyYzQ3ZDY2NTU5itkAIg==: 00:29:43.462 12:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzJlMTJjZTFhNjg3OGUyMTllYjFjMDNlNDM3NThiYzhnKNvU: 00:29:43.462 12:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:43.462 12:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:43.462 12:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzdmZjJiOGZiYTVhOTgzNTk3MjRiMWNlYzBhNGZmNzY5ZTI1YTcyYzQ3ZDY2NTU5itkAIg==: 00:29:43.462 12:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzJlMTJjZTFhNjg3OGUyMTllYjFjMDNlNDM3NThiYzhnKNvU: ]] 00:29:43.462 12:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzJlMTJjZTFhNjg3OGUyMTllYjFjMDNlNDM3NThiYzhnKNvU: 00:29:43.462 12:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:29:43.462 12:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:43.462 12:13:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:43.462 12:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:43.462 12:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:43.462 12:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:43.462 12:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:43.462 12:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.462 12:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.462 12:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.462 12:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:43.462 12:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.462 12:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.031 nvme0n1 00:29:44.031 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.031 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:44.031 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.031 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:44.031 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.031 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:29:44.031 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:44.031 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:44.031 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.031 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.031 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.031 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:44.031 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:29:44.031 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:44.031 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:44.031 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:44.031 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:44.031 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc5MGQ1MTg2N2UxZjQyZmNjNmZlNGExYzkwMWE3NjE3YTA5NGM5Yjc0YjUyNWZlODY2YjM1ODdjZTE1OTgxNmTv0X8=: 00:29:44.031 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:44.031 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:44.031 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:44.031 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc5MGQ1MTg2N2UxZjQyZmNjNmZlNGExYzkwMWE3NjE3YTA5NGM5Yjc0YjUyNWZlODY2YjM1ODdjZTE1OTgxNmTv0X8=: 00:29:44.031 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:44.031 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe8192 4 00:29:44.031 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:44.031 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:44.031 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:44.031 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:44.031 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:44.031 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:44.031 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.031 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.031 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.290 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:44.290 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.290 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.859 nvme0n1 00:29:44.859 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.859 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:44.859 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:44.859 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.859 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@10 -- # set +x 00:29:44.859 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.859 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:44.859 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:44.859 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.859 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.859 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.859 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:44.859 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:44.859 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:44.859 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:44.859 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:44.859 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGRlYTU4NjM4ZGFlZGQwYzFlZTIxYjE5YjNlYWY0MDQ5OTJmYmFmMmQ4NWY0NjA5+1RSgg==: 00:29:44.859 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==: 00:29:44.859 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:44.859 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:44.859 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRlYTU4NjM4ZGFlZGQwYzFlZTIxYjE5YjNlYWY0MDQ5OTJmYmFmMmQ4NWY0NjA5+1RSgg==: 00:29:44.859 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==: ]] 00:29:44.859 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==: 00:29:44.859 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:44.859 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.859 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.859 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.859 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:44.859 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:44.859 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:44.859 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:44.859 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:44.859 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:44.859 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:44.859 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:29:44.859 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.859 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.859 request: 00:29:44.859 { 00:29:44.859 "name": "nvme0", 00:29:44.859 "trtype": "tcp", 00:29:44.859 "traddr": "10.0.0.1", 00:29:44.859 "adrfam": "ipv4", 00:29:44.859 "trsvcid": "4420", 00:29:44.859 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:44.859 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:44.859 "prchk_reftag": false, 00:29:44.859 "prchk_guard": false, 00:29:44.859 "hdgst": false, 00:29:44.859 "ddgst": false, 00:29:44.859 "allow_unrecognized_csi": false, 00:29:44.859 "method": "bdev_nvme_attach_controller", 00:29:44.859 "req_id": 1 00:29:44.859 } 00:29:44.859 Got JSON-RPC error response 00:29:44.859 response: 00:29:44.859 { 00:29:44.859 "code": -5, 00:29:44.859 "message": "Input/output error" 00:29:44.859 } 00:29:44.859 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:44.859 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:44.859 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:44.859 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:44.860 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:44.860 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:29:44.860 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:29:44.860 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.860 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.860 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.860 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:29:44.860 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:44.860 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:44.860 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:44.860 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:44.860 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:44.860 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:44.860 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:44.860 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:44.860 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.860 12:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.860 request: 00:29:44.860 { 00:29:44.860 "name": "nvme0", 00:29:44.860 "trtype": "tcp", 00:29:44.860 "traddr": "10.0.0.1", 00:29:44.860 "adrfam": "ipv4", 00:29:44.860 "trsvcid": "4420", 00:29:44.860 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:44.860 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:44.860 
"prchk_reftag": false, 00:29:44.860 "prchk_guard": false, 00:29:44.860 "hdgst": false, 00:29:44.860 "ddgst": false, 00:29:44.860 "dhchap_key": "key2", 00:29:44.860 "allow_unrecognized_csi": false, 00:29:44.860 "method": "bdev_nvme_attach_controller", 00:29:44.860 "req_id": 1 00:29:44.860 } 00:29:44.860 Got JSON-RPC error response 00:29:44.860 response: 00:29:44.860 { 00:29:44.860 "code": -5, 00:29:44.860 "message": "Input/output error" 00:29:44.860 } 00:29:44.860 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:44.860 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:44.860 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:44.860 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:44.860 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:44.860 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:29:44.860 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:29:44.860 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.860 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.118 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.118 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:29:45.118 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:45.119 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:45.119 12:13:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:45.119 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:45.119 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:45.119 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:45.119 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:45.119 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:45.119 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.119 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.119 request: 00:29:45.119 { 00:29:45.119 "name": "nvme0", 00:29:45.119 "trtype": "tcp", 00:29:45.119 "traddr": "10.0.0.1", 00:29:45.119 "adrfam": "ipv4", 00:29:45.119 "trsvcid": "4420", 00:29:45.119 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:45.119 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:45.119 "prchk_reftag": false, 00:29:45.119 "prchk_guard": false, 00:29:45.119 "hdgst": false, 00:29:45.119 "ddgst": false, 00:29:45.119 "dhchap_key": "key1", 00:29:45.119 "dhchap_ctrlr_key": "ckey2", 00:29:45.119 "allow_unrecognized_csi": false, 00:29:45.119 "method": "bdev_nvme_attach_controller", 00:29:45.119 "req_id": 1 00:29:45.119 } 00:29:45.119 Got JSON-RPC error response 00:29:45.119 response: 00:29:45.119 { 00:29:45.119 "code": -5, 00:29:45.119 "message": 
"Input/output error" 00:29:45.119 } 00:29:45.119 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:45.119 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:45.119 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:45.119 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:45.119 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:45.119 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:29:45.119 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.119 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.119 nvme0n1 00:29:45.119 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.119 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:45.119 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:45.119 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:45.119 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:45.119 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:45.119 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODUyMzM0YmU2MmJkZTI0NmY5N2ExZmE2Y2VkNTBiNzQYmII1: 00:29:45.119 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI: 00:29:45.119 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:45.119 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:45.119 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODUyMzM0YmU2MmJkZTI0NmY5N2ExZmE2Y2VkNTBiNzQYmII1: 00:29:45.119 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI: ]] 00:29:45.119 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI: 00:29:45.119 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:45.119 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.119 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.378 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.378 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:29:45.378 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.378 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:29:45.378 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.378 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.378 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:45.378 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:45.378 12:13:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:45.378 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:45.378 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:45.378 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:45.378 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:45.378 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:45.378 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:45.378 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.378 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.378 request: 00:29:45.378 { 00:29:45.378 "name": "nvme0", 00:29:45.378 "dhchap_key": "key1", 00:29:45.378 "dhchap_ctrlr_key": "ckey2", 00:29:45.378 "method": "bdev_nvme_set_keys", 00:29:45.378 "req_id": 1 00:29:45.378 } 00:29:45.378 Got JSON-RPC error response 00:29:45.378 response: 00:29:45.378 { 00:29:45.378 "code": -13, 00:29:45.378 "message": "Permission denied" 00:29:45.378 } 00:29:45.378 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:45.378 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:45.378 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:45.378 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:45.378 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:45.378 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:45.378 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.378 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:45.378 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.378 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.378 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:29:45.378 12:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:29:46.753 12:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:46.753 12:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:46.753 12:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.753 12:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.753 12:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.753 12:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:29:46.753 12:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:29:47.688 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:47.688 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:47.688 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.688 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.688 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.688 
12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:29:47.688 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:47.688 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:47.688 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:47.688 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:47.688 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:47.688 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGRlYTU4NjM4ZGFlZGQwYzFlZTIxYjE5YjNlYWY0MDQ5OTJmYmFmMmQ4NWY0NjA5+1RSgg==: 00:29:47.688 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==: 00:29:47.688 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:47.688 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:47.688 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRlYTU4NjM4ZGFlZGQwYzFlZTIxYjE5YjNlYWY0MDQ5OTJmYmFmMmQ4NWY0NjA5+1RSgg==: 00:29:47.688 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==: ]] 00:29:47.688 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVjZTFkYWE2YTBlYmVjMDg5NWVlZTI2NzZkMjRhNmRiOWVkZDEwMjYzZWYzNDM3x7QykA==: 00:29:47.688 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:29:47.688 12:13:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.688 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.688 nvme0n1 00:29:47.688 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.688 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:47.688 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:47.688 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:47.688 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:47.688 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:47.688 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODUyMzM0YmU2MmJkZTI0NmY5N2ExZmE2Y2VkNTBiNzQYmII1: 00:29:47.688 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI: 00:29:47.688 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:47.688 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:47.688 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODUyMzM0YmU2MmJkZTI0NmY5N2ExZmE2Y2VkNTBiNzQYmII1: 00:29:47.688 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI: ]] 00:29:47.688 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJkZjFkMDc1NjM2NTZlZDg2ODM3NmQwOTE5ODI2MDOdrPvI: 00:29:47.688 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:47.688 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:29:47.688 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:47.688 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:47.689 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:47.689 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:47.689 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:47.689 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:47.689 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.689 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.689 request: 00:29:47.689 { 00:29:47.689 "name": "nvme0", 00:29:47.689 "dhchap_key": "key2", 00:29:47.689 "dhchap_ctrlr_key": "ckey1", 00:29:47.689 "method": "bdev_nvme_set_keys", 00:29:47.689 "req_id": 1 00:29:47.689 } 00:29:47.689 Got JSON-RPC error response 00:29:47.689 response: 00:29:47.689 { 00:29:47.689 "code": -13, 00:29:47.689 "message": "Permission denied" 00:29:47.689 } 00:29:47.689 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:47.689 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:47.689 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:47.689 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:47.689 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:29:47.689 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:29:47.689 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:29:47.689 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.689 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.689 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.947 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:29:47.947 12:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:29:48.883 12:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:29:48.883 12:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:29:48.883 12:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.883 12:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.883 12:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.883 12:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:29:48.883 12:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:29:48.883 12:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:29:48.883 12:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:29:48.883 12:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # nvmfcleanup 00:29:48.883 12:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@99 -- # sync 00:29:48.883 12:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:29:48.883 12:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # set +e 00:29:48.883 
12:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # for i in {1..20} 00:29:48.883 12:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:29:48.883 rmmod nvme_tcp 00:29:48.883 rmmod nvme_fabrics 00:29:48.883 12:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:29:48.883 12:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # set -e 00:29:48.883 12:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # return 0 00:29:48.883 12:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # '[' -n 202688 ']' 00:29:48.883 12:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@337 -- # killprocess 202688 00:29:48.883 12:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 202688 ']' 00:29:48.883 12:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 202688 00:29:48.883 12:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:29:48.883 12:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:48.883 12:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 202688 00:29:48.883 12:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:48.883 12:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:48.883 12:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 202688' 00:29:48.883 killing process with pid 202688 00:29:48.883 12:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 202688 00:29:48.883 12:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 202688 00:29:49.143 12:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@339 -- # '[' '' == iso ']' 00:29:49.143 12:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # nvmf_fini 00:29:49.143 12:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@264 -- # local dev 00:29:49.143 12:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@267 -- # remove_target_ns 00:29:49.143 12:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:49.143 12:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:29:49.143 12:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:51.046 12:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@268 -- # delete_main_bridge 00:29:51.046 12:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:29:51.046 12:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@130 -- # return 0 00:29:51.046 12:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:29:51.046 12:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:29:51.046 12:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:29:51.046 12:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:29:51.046 12:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:29:51.046 12:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:29:51.046 12:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:29:51.046 12:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:29:51.046 12:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:29:51.046 12:13:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:29:51.046 12:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:29:51.046 12:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:29:51.046 12:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:29:51.046 12:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:29:51.046 12:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:29:51.046 12:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:29:51.304 12:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:29:51.304 12:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@41 -- # _dev=0 00:29:51.304 12:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@41 -- # dev_map=() 00:29:51.304 12:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@284 -- # iptr 00:29:51.304 12:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@542 -- # iptables-save 00:29:51.304 12:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:29:51.304 12:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@542 -- # iptables-restore 00:29:51.304 12:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:51.304 12:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:51.304 12:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:29:51.304 12:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:29:51.304 12:13:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # echo 0 00:29:51.304 12:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:51.304 12:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:51.304 12:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:51.304 12:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:51.304 12:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # modules=(/sys/module/nvmet/holders/*) 00:29:51.304 12:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@497 -- # modprobe -r nvmet_tcp nvmet 00:29:51.304 12:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:54.586 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:54.586 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:54.586 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:54.586 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:54.586 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:54.586 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:54.586 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:54.586 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:54.586 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:54.586 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:54.586 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:54.586 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:54.586 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:54.586 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:54.586 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:54.586 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
00:29:55.521 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:29:55.779 12:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.EK6 /tmp/spdk.key-null.J8r /tmp/spdk.key-sha256.6El /tmp/spdk.key-sha384.Uiz /tmp/spdk.key-sha512.Rm6 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:29:55.779 12:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:58.315 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:29:58.315 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:29:58.315 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:29:58.315 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:29:58.315 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:29:58.315 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:29:58.315 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:29:58.315 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:29:58.315 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:29:58.315 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:29:58.315 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:29:58.315 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:29:58.315 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:29:58.315 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:29:58.315 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:29:58.315 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:29:58.315 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:29:58.574 00:29:58.574 real 0m54.285s 00:29:58.574 user 0m48.196s 00:29:58.574 sys 0m12.457s 00:29:58.574 12:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:58.574 12:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:58.574 ************************************ 00:29:58.574 END TEST nvmf_auth_host 00:29:58.574 ************************************ 00:29:58.574 12:13:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:58.574 12:13:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:58.574 12:13:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:58.574 12:13:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.574 ************************************ 00:29:58.574 START TEST nvmf_bdevperf 00:29:58.574 ************************************ 00:29:58.574 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:58.834 * Looking for test storage... 
00:29:58.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:58.834 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:58.834 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:29:58.834 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:58.834 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:58.834 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:58.834 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:58.834 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:58.834 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:29:58.834 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:29:58.834 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:29:58.834 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:29:58.834 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:29:58.834 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:29:58.834 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:29:58.834 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:58.834 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:29:58.834 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:29:58.834 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:58.834 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:58.834 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:29:58.834 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:29:58.834 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:58.834 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:29:58.834 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:58.834 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:29:58.834 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:29:58.834 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:58.834 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:29:58.834 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:58.834 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:58.834 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:58.834 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:29:58.834 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:58.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.835 --rc genhtml_branch_coverage=1 00:29:58.835 --rc genhtml_function_coverage=1 00:29:58.835 --rc genhtml_legend=1 00:29:58.835 --rc geninfo_all_blocks=1 00:29:58.835 --rc geninfo_unexecuted_blocks=1 00:29:58.835 00:29:58.835 ' 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:29:58.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.835 --rc genhtml_branch_coverage=1 00:29:58.835 --rc genhtml_function_coverage=1 00:29:58.835 --rc genhtml_legend=1 00:29:58.835 --rc geninfo_all_blocks=1 00:29:58.835 --rc geninfo_unexecuted_blocks=1 00:29:58.835 00:29:58.835 ' 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:58.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.835 --rc genhtml_branch_coverage=1 00:29:58.835 --rc genhtml_function_coverage=1 00:29:58.835 --rc genhtml_legend=1 00:29:58.835 --rc geninfo_all_blocks=1 00:29:58.835 --rc geninfo_unexecuted_blocks=1 00:29:58.835 00:29:58.835 ' 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:58.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.835 --rc genhtml_branch_coverage=1 00:29:58.835 --rc genhtml_function_coverage=1 00:29:58.835 --rc genhtml_legend=1 00:29:58.835 --rc geninfo_all_blocks=1 00:29:58.835 --rc geninfo_unexecuted_blocks=1 00:29:58.835 00:29:58.835 ' 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- 
# NVMF_TRANSPORT_OPTS= 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@50 -- # : 0 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:29:58.835 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : 
integer expression expected 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@54 -- # have_pci_nics=0 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # prepare_net_devs 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # local -g is_hw=no 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # remove_target_ns 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # xtrace_disable 00:29:58.835 12:13:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@131 -- # pci_devs=() 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@131 -- # local -a pci_devs 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@132 -- # pci_net_devs=() 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@133 -- # pci_drivers=() 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@133 -- # local -A pci_drivers 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@135 -- # net_devs=() 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@135 -- # local -ga net_devs 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@136 -- # e810=() 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@136 -- # local -ga e810 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@137 -- # x722=() 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@137 -- # local -ga x722 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@138 -- # mlx=() 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@138 -- # local -ga mlx 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@148 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:05.405 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:05.405 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@234 -- # [[ up == up ]] 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:05.405 Found net devices under 0000:86:00.0: cvl_0_0 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # [[ up == up ]] 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:05.405 Found net devices under 0000:86:00.1: cvl_0_1 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # is_hw=yes 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@264 -- # [[ yes == yes ]] 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@257 -- # create_target_ns 00:30:05.405 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@27 -- # local -gA dev_map 00:30:05.406 12:13:38 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@28 -- # local -g _dev 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@44 -- # ips=() 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 
00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@11 -- # local val=167772161 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:30:05.406 10.0.0.1 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@11 -- # local val=167772162 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:30:05.406 10.0.0.2 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@214 -- # 
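The two set_ip calls above derive dotted-quad addresses from the integer IP pool (167772161 becomes 10.0.0.1, 167772162 becomes 10.0.0.2). A minimal sketch of the val_to_ip conversion implied by the trace — the shift-and-mask layout is assumed to mirror the real helper in nvmf/setup.sh:

```shell
#!/usr/bin/env bash
# val_to_ip sketch: split a 32-bit integer into four octets.
# 167772161 == 0x0A000001 -> 10.0.0.1 (matches the printf seen in the trace).
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2
```

This is why the pool arithmetic in setup_interfaces only needs `ip_pool += 2` per pair: each initiator/target pair consumes two consecutive integers.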
local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@38 -- # ping_ips 1 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@98 -- # (( pair < 
pairs )) 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@107 -- # local dev=initiator0 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@90 -- # [[ -n 
NVMF_TARGET_NS_CMD ]] 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:30:05.406 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:05.406 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.335 ms 00:30:05.406 00:30:05.406 --- 10.0.0.1 ping statistics --- 00:30:05.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.406 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:05.406 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # get_net_dev target0 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@107 -- # local dev=target0 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/setup.sh@168 -- # dev=cvl_0_1 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:30:05.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:05.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:30:05.407 00:30:05.407 --- 10.0.0.2 ping statistics --- 00:30:05.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.407 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@98 -- # (( pair++ )) 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # return 0 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@107 -- # local dev=initiator0 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
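The ping_ips loop above resolves each side's ifalias and pings across the pair, running inside the target namespace when a namespace command name is passed. A hedged sketch of ping_ip as reconstructed from the eval lines in the trace (the bash-nameref handling of `in_ns` is assumed to match nvmf/setup.sh):

```shell
#!/usr/bin/env bash
# ping_ip sketch: the optional namespace prefix is passed by *name* and
# expanded through a nameref before eval, exactly as the trace shows
# ("ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1").
ping_ip() {
  local ip=$1 in_ns=${2:-} count=1
  if [[ -n $in_ns ]]; then
    local -n ns=$in_ns
  fi
  eval "${ns[*]:-} ping -c $count $ip"
}

# Substituting an echo prefix makes the sketch runnable without a netns
# and shows the command that would be executed:
NVMF_TARGET_NS_CMD=(echo ip netns exec nvmf_ns_spdk)
ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD
# prints: ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1
```

Passing the variable name rather than its value is what lets the same helper serve both the host side (empty prefix) and the namespaced target side.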
nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@107 -- # local dev=initiator1 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # return 1 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # dev= 00:30:05.407 12:13:38 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@169 -- # return 0 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # get_net_dev target0 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@107 -- # local dev=target0 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:30:05.407 12:13:38 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # get_net_dev target1 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@107 -- # local dev=target1 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # return 1 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # dev= 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@169 -- # return 0 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:30:05.407 12:13:38 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # nvmfpid=216388 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # waitforlisten 216388 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 216388 ']' 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:05.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
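nvmfappstart launches nvmf_tgt inside the namespace, records `nvmfpid`, and then blocks in waitforlisten until the RPC socket at /var/tmp/spdk.sock comes up. A simplified sketch of that polling loop — the real helper in autotest_common.sh also probes RPC responsiveness, while this version only checks process liveness and socket existence (the max_retries default of 100 mirrors the value seen in the trace):

```shell
#!/usr/bin/env bash
# waitforlisten sketch: poll until the target's UNIX-domain RPC socket
# appears, bailing out early if the process dies first.
waitforlisten() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100}
  local i=0
  while (( i++ < max_retries )); do
    kill -0 "$pid" 2>/dev/null || return 1   # target process died
    [[ -S $rpc_addr ]] && return 0           # RPC socket is up
    sleep 0.1
  done
  return 1                                    # timed out
}

# Usage: succeeds only if something is actually listening on the socket.
if waitforlisten "$$" /var/tmp/spdk.sock 3; then
  echo "rpc socket ready"
else
  echo "rpc socket not ready"
fi
```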
00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:05.407 12:13:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:05.407 [2024-12-05 12:13:38.999541] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:30:05.407 [2024-12-05 12:13:38.999595] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:05.407 [2024-12-05 12:13:39.080684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:05.407 [2024-12-05 12:13:39.120810] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:05.407 [2024-12-05 12:13:39.120848] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:05.407 [2024-12-05 12:13:39.120856] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:05.408 [2024-12-05 12:13:39.120862] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:05.408 [2024-12-05 12:13:39.120867] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:05.408 [2024-12-05 12:13:39.122312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:05.408 [2024-12-05 12:13:39.122434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:05.408 [2024-12-05 12:13:39.122434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:05.666 12:13:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:05.666 12:13:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:30:05.666 12:13:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:30:05.666 12:13:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:05.666 12:13:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:05.926 12:13:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:05.926 12:13:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:05.926 12:13:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.926 12:13:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:05.926 [2024-12-05 12:13:39.877223] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:05.926 12:13:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.926 12:13:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:05.926 12:13:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.926 12:13:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:05.926 Malloc0 00:30:05.926 12:13:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:30:05.926 12:13:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:05.926 12:13:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.926 12:13:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:05.926 12:13:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.926 12:13:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:05.926 12:13:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.926 12:13:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:05.926 12:13:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.926 12:13:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:05.926 12:13:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.926 12:13:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:05.926 [2024-12-05 12:13:39.929853] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:05.926 12:13:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.926 12:13:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:30:05.926 12:13:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:30:05.926 12:13:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # config=() 00:30:05.926 
12:13:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # local subsystem config 00:30:05.926 12:13:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:30:05.926 12:13:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:30:05.926 { 00:30:05.926 "params": { 00:30:05.926 "name": "Nvme$subsystem", 00:30:05.926 "trtype": "$TEST_TRANSPORT", 00:30:05.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.926 "adrfam": "ipv4", 00:30:05.926 "trsvcid": "$NVMF_PORT", 00:30:05.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.926 "hdgst": ${hdgst:-false}, 00:30:05.926 "ddgst": ${ddgst:-false} 00:30:05.926 }, 00:30:05.926 "method": "bdev_nvme_attach_controller" 00:30:05.926 } 00:30:05.926 EOF 00:30:05.926 )") 00:30:05.926 12:13:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # cat 00:30:05.927 12:13:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@396 -- # jq . 00:30:05.927 12:13:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@397 -- # IFS=, 00:30:05.927 12:13:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:30:05.927 "params": { 00:30:05.927 "name": "Nvme1", 00:30:05.927 "trtype": "tcp", 00:30:05.927 "traddr": "10.0.0.2", 00:30:05.927 "adrfam": "ipv4", 00:30:05.927 "trsvcid": "4420", 00:30:05.927 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:05.927 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:05.927 "hdgst": false, 00:30:05.927 "ddgst": false 00:30:05.927 }, 00:30:05.927 "method": "bdev_nvme_attach_controller" 00:30:05.927 }' 00:30:05.927 [2024-12-05 12:13:39.979623] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
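The bdevperf invocation above receives its controller config on /dev/fd/62 from gen_nvmf_target_json, which accumulates one heredoc fragment per subsystem and joins them with `IFS=,`. A minimal sketch of that accumulation pattern (field names mirror the trace; the fixed traddr stands in for $NVMF_FIRST_TARGET_IP, and the `jq .` validation step from nvmf/common.sh is omitted so the sketch has no external dependency):

```shell
#!/usr/bin/env bash
# gen_nvmf_target_json sketch: one JSON fragment per subsystem id,
# comma-joined into an array, as the config+=() / IFS=, lines show.
gen_nvmf_target_json() {
  local subsystem
  local -a config=()
  for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
  done
  local IFS=,
  printf '[%s]\n' "${config[*]}"
}

gen_nvmf_target_json 1
```

Because the fragments use `$subsystem` throughout, calling `gen_nvmf_target_json 1 2` would emit two attach-controller entries targeting cnode1 and cnode2 with matching host NQNs.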
00:30:05.927 [2024-12-05 12:13:39.979666] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid216485 ] 00:30:05.927 [2024-12-05 12:13:40.054420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:05.927 [2024-12-05 12:13:40.100096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:06.186 Running I/O for 1 seconds... 00:30:07.122 11216.00 IOPS, 43.81 MiB/s 00:30:07.122 Latency(us) 00:30:07.122 [2024-12-05T11:13:41.318Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:07.122 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:07.122 Verification LBA range: start 0x0 length 0x4000 00:30:07.122 Nvme1n1 : 1.00 11310.99 44.18 0.00 0.00 11269.78 760.69 13232.03 00:30:07.122 [2024-12-05T11:13:41.318Z] =================================================================================================================== 00:30:07.122 [2024-12-05T11:13:41.318Z] Total : 11310.99 44.18 0.00 0.00 11269.78 760.69 13232.03 00:30:07.382 12:13:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=216776 00:30:07.382 12:13:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:30:07.382 12:13:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:30:07.382 12:13:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:30:07.382 12:13:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # config=() 00:30:07.382 12:13:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # local subsystem config 00:30:07.382 12:13:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # for 
subsystem in "${@:-1}" 00:30:07.382 12:13:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:30:07.382 { 00:30:07.382 "params": { 00:30:07.382 "name": "Nvme$subsystem", 00:30:07.382 "trtype": "$TEST_TRANSPORT", 00:30:07.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:07.382 "adrfam": "ipv4", 00:30:07.382 "trsvcid": "$NVMF_PORT", 00:30:07.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:07.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:07.382 "hdgst": ${hdgst:-false}, 00:30:07.382 "ddgst": ${ddgst:-false} 00:30:07.382 }, 00:30:07.382 "method": "bdev_nvme_attach_controller" 00:30:07.382 } 00:30:07.382 EOF 00:30:07.382 )") 00:30:07.382 12:13:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # cat 00:30:07.382 12:13:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@396 -- # jq . 00:30:07.382 12:13:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@397 -- # IFS=, 00:30:07.382 12:13:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:30:07.383 "params": { 00:30:07.383 "name": "Nvme1", 00:30:07.383 "trtype": "tcp", 00:30:07.383 "traddr": "10.0.0.2", 00:30:07.383 "adrfam": "ipv4", 00:30:07.383 "trsvcid": "4420", 00:30:07.383 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:07.383 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:07.383 "hdgst": false, 00:30:07.383 "ddgst": false 00:30:07.383 }, 00:30:07.383 "method": "bdev_nvme_attach_controller" 00:30:07.383 }' 00:30:07.383 [2024-12-05 12:13:41.516139] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:30:07.383 [2024-12-05 12:13:41.516189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid216776 ] 00:30:07.642 [2024-12-05 12:13:41.593271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:07.642 [2024-12-05 12:13:41.633743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:07.642 Running I/O for 15 seconds... 00:30:09.956 11345.00 IOPS, 44.32 MiB/s [2024-12-05T11:13:44.725Z] 11427.50 IOPS, 44.64 MiB/s [2024-12-05T11:13:44.725Z] 12:13:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 216388 00:30:10.529 12:13:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:30:10.529 [2024-12-05 12:13:44.485009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:111528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.529 [2024-12-05 12:13:44.485045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.529 [2024-12-05 12:13:44.485060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:111536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.529 [2024-12-05 12:13:44.485070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.529 [2024-12-05 12:13:44.485079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:111544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.529 [2024-12-05 12:13:44.485087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.529 [2024-12-05 12:13:44.485101] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:59 nsid:1 lba:111552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.529 [2024-12-05 12:13:44.485110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.529 [2024-12-05 12:13:44.485121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:111688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.529 [2024-12-05 12:13:44.485129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.529 [2024-12-05 12:13:44.485138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:111696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.529 [2024-12-05 12:13:44.485146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.529 [2024-12-05 12:13:44.485155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:111704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.529 [2024-12-05 12:13:44.485163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.529 [2024-12-05 12:13:44.485172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:111712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.529 [2024-12-05 12:13:44.485179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.529 [2024-12-05 12:13:44.485188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:111720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.529 [2024-12-05 12:13:44.485194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:30:10.529 [2024-12-05 12:13:44.485203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:111728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.529 [2024-12-05 12:13:44.485210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.529 [2024-12-05 12:13:44.485220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:111736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.529 [2024-12-05 12:13:44.485227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.529 [2024-12-05 12:13:44.485236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:111744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.529 [2024-12-05 12:13:44.485244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.529 [2024-12-05 12:13:44.485253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:111752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.529 [2024-12-05 12:13:44.485260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.529 [2024-12-05 12:13:44.485270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:111760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.529 [2024-12-05 12:13:44.485277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.529 [2024-12-05 12:13:44.485285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:111768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.529 [2024-12-05 
12:13:44.485293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.529 [2024-12-05 12:13:44.485301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:111776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.529 [2024-12-05 12:13:44.485311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.529 [2024-12-05 12:13:44.485319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:111784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.529 [2024-12-05 12:13:44.485327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.529 [2024-12-05 12:13:44.485337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:111792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.529 [2024-12-05 12:13:44.485346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.529 [2024-12-05 12:13:44.485357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:111800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.529 [2024-12-05 12:13:44.485364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.529 [2024-12-05 12:13:44.485384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:111808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.529 [2024-12-05 12:13:44.485392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.529 [2024-12-05 12:13:44.485401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:51 nsid:1 lba:111816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.529 [2024-12-05 12:13:44.485409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.529 [2024-12-05 12:13:44.485418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:111824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.529 [2024-12-05 12:13:44.485430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.529 [2024-12-05 12:13:44.485439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:111832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.529 [2024-12-05 12:13:44.485447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.529 [2024-12-05 12:13:44.485456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:111840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.529 [2024-12-05 12:13:44.485463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.529 [2024-12-05 12:13:44.485472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:111848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.529 [2024-12-05 12:13:44.485478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.529 [2024-12-05 12:13:44.485489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:111856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.529 [2024-12-05 12:13:44.485495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:30:10.529 [2024-12-05 12:13:44.485504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:111864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.529 [2024-12-05 12:13:44.485510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.529 [2024-12-05 12:13:44.485518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:111872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.529 [2024-12-05 12:13:44.485525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.529 [2024-12-05 12:13:44.485533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:111880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.529 [2024-12-05 12:13:44.485542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.529 [2024-12-05 12:13:44.485550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:111888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.529 [2024-12-05 12:13:44.485557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.529 [2024-12-05 12:13:44.485566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:111896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.529 [2024-12-05 12:13:44.485573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.530 [2024-12-05 12:13:44.485581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:111904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.530 [2024-12-05 12:13:44.485588] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.530 [2024-12-05 12:13:44.485596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:111912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.530 [2024-12-05 12:13:44.485602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.530 [2024-12-05 12:13:44.485611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:111920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.530 [2024-12-05 12:13:44.485617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.530 [2024-12-05 12:13:44.485625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:111928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.530 [2024-12-05 12:13:44.485632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.530 [2024-12-05 12:13:44.485639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:111936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.530 [2024-12-05 12:13:44.485646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.530 [2024-12-05 12:13:44.485654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:111944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.530 [2024-12-05 12:13:44.485660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.530 [2024-12-05 12:13:44.485668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:63 nsid:1 lba:111952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.530 [2024-12-05 12:13:44.485674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.530 [2024-12-05 12:13:44.485682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:111960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.530 [2024-12-05 12:13:44.485688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.530 [2024-12-05 12:13:44.485696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:111968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.530 [2024-12-05 12:13:44.485702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.530 [2024-12-05 12:13:44.485710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:111976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.530 [2024-12-05 12:13:44.485717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.530 [2024-12-05 12:13:44.485726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:111984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.530 [2024-12-05 12:13:44.485734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.530 [2024-12-05 12:13:44.485742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:111992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.530 [2024-12-05 12:13:44.485748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:10.530 [2024-12-05 12:13:44.485756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:112000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.530 [2024-12-05 12:13:44.485763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.530 [2024-12-05 12:13:44.485771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:112008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.530 [2024-12-05 12:13:44.485778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.530 [2024-12-05 12:13:44.485786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:111560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.530 [2024-12-05 12:13:44.485792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.530 [2024-12-05 12:13:44.485800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:112016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.530 [2024-12-05 12:13:44.485808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.530 [2024-12-05 12:13:44.485815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:112024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.530 [2024-12-05 12:13:44.485822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.530 [2024-12-05 12:13:44.485830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:112032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.530 [2024-12-05 12:13:44.485836] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.530 [2024-12-05 12:13:44.485844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:112040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.530 [2024-12-05 12:13:44.485850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.530 [2024-12-05 12:13:44.485859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:112048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.530 [2024-12-05 12:13:44.485866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.530 [2024-12-05 12:13:44.485875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:112056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.530 [2024-12-05 12:13:44.485881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.530 [2024-12-05 12:13:44.485889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:112064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.530 [2024-12-05 12:13:44.485896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.530 [2024-12-05 12:13:44.485904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:112072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.530 [2024-12-05 12:13:44.485912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.530 [2024-12-05 12:13:44.485919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 
lba:112080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.530 [2024-12-05 12:13:44.485926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.530 [2024-12-05 12:13:44.485933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:112088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.530 [2024-12-05 12:13:44.485942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.530 [2024-12-05 12:13:44.485950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:112096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.530 [2024-12-05 12:13:44.485957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.530 [2024-12-05 12:13:44.485964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:112104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.530 [2024-12-05 12:13:44.485971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.530 [2024-12-05 12:13:44.485979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:112112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.530 [2024-12-05 12:13:44.485985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.530 [2024-12-05 12:13:44.485994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:112120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.530 [2024-12-05 12:13:44.486002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.530 
[2024-12-05 12:13:44.486009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:112128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.530 [2024-12-05 12:13:44.486016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.530 [2024-12-05 12:13:44.486024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:112136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.530 [2024-12-05 12:13:44.486031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.530 [2024-12-05 12:13:44.486039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:112144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.530 [2024-12-05 12:13:44.486045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.530 [2024-12-05 12:13:44.486054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:112152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.530 [2024-12-05 12:13:44.486060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.530 [2024-12-05 12:13:44.486068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:112160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.530 [2024-12-05 12:13:44.486074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.530 [2024-12-05 12:13:44.486083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:112168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.530 [2024-12-05 12:13:44.486089] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.530 [2024-12-05 12:13:44.486098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:112176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.530 [2024-12-05 12:13:44.486106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.530 [2024-12-05 12:13:44.486114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:112184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.530 [2024-12-05 12:13:44.486121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.530 [2024-12-05 12:13:44.486129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:112192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.530 [2024-12-05 12:13:44.486136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.530 [2024-12-05 12:13:44.486144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:112200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.530 [2024-12-05 12:13:44.486150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.530 [2024-12-05 12:13:44.486160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:112208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.531 [2024-12-05 12:13:44.486167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.531 [2024-12-05 12:13:44.486175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 
lba:112216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.531 [2024-12-05 12:13:44.486181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.531 [2024-12-05 12:13:44.486189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:112224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.531 [2024-12-05 12:13:44.486196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.531 [2024-12-05 12:13:44.486203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:112232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.531 [2024-12-05 12:13:44.486210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.531 [2024-12-05 12:13:44.486219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:112240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.531 [2024-12-05 12:13:44.486225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.531 [2024-12-05 12:13:44.486233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:112248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.531 [2024-12-05 12:13:44.486239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.531 [2024-12-05 12:13:44.486247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:112256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.531 [2024-12-05 12:13:44.486253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.531 
[2024-12-05 12:13:44.486261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:112264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.531 [2024-12-05 12:13:44.486270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.531 [2024-12-05 12:13:44.486279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:112272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.531 [2024-12-05 12:13:44.486287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.531 [2024-12-05 12:13:44.486295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:112280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.531 [2024-12-05 12:13:44.486301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.531 [2024-12-05 12:13:44.486309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:112288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.531 [2024-12-05 12:13:44.486315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.531 [2024-12-05 12:13:44.486323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:112296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.531 [2024-12-05 12:13:44.486331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.531 [2024-12-05 12:13:44.486339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:112304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.531 [2024-12-05 12:13:44.486345] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.531 [2024-12-05 12:13:44.486353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:112312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.531 [2024-12-05 12:13:44.486360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.531 [2024-12-05 12:13:44.486374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:112320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.531 [2024-12-05 12:13:44.486381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.531 [2024-12-05 12:13:44.486390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:112328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.531 [2024-12-05 12:13:44.486396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.531 [2024-12-05 12:13:44.486405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:112336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.531 [2024-12-05 12:13:44.486412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.531 [2024-12-05 12:13:44.486420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:112344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:10.531 [2024-12-05 12:13:44.486426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.531 [2024-12-05 12:13:44.486434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 
lba:112352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:10.531 [2024-12-05 12:13:44.486441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.531 [2024-12-05 12:13:44.486449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:112360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:10.531 [2024-12-05 12:13:44.486456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.531 [2024-12-05 12:13:44.486464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:112368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:10.531 [2024-12-05 12:13:44.486471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.531 [2024-12-05 12:13:44.486481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:112376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:10.531 [2024-12-05 12:13:44.486487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.531 [2024-12-05 12:13:44.486497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:112384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:10.531 [2024-12-05 12:13:44.486504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.531 [2024-12-05 12:13:44.486513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:112392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:10.531 [2024-12-05 12:13:44.486520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.531 [2024-12-05 12:13:44.486528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:112400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:10.531 [2024-12-05 12:13:44.486535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.531 [2024-12-05 12:13:44.486543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:112408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:10.531 [2024-12-05 12:13:44.486549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.531 [2024-12-05 12:13:44.486557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:112416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:10.531 [2024-12-05 12:13:44.486564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.531 [2024-12-05 12:13:44.486575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:112424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:10.531 [2024-12-05 12:13:44.486581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.531 [2024-12-05 12:13:44.486589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:112432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:10.531 [2024-12-05 12:13:44.486596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.531 [2024-12-05 12:13:44.486603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:112440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:10.531 [2024-12-05 12:13:44.486610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.531 [2024-12-05 12:13:44.486618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:112448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:10.531 [2024-12-05 12:13:44.486625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.531 [2024-12-05 12:13:44.486633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:112456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:10.531 [2024-12-05 12:13:44.486639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.531 [2024-12-05 12:13:44.486647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:10.531 [2024-12-05 12:13:44.486653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.531 [2024-12-05 12:13:44.486661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:112472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:10.531 [2024-12-05 12:13:44.486668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.531 [2024-12-05 12:13:44.486680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:112480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:10.531 [2024-12-05 12:13:44.486687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.531 [2024-12-05 12:13:44.486695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:112488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:10.531 [2024-12-05 12:13:44.486701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.531 [2024-12-05 12:13:44.486709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:112496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:10.531 [2024-12-05 12:13:44.486716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.531 [2024-12-05 12:13:44.486723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:112504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:10.531 [2024-12-05 12:13:44.486730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.531 [2024-12-05 12:13:44.486743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:112512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:10.531 [2024-12-05 12:13:44.486750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.531 [2024-12-05 12:13:44.486758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:112520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:10.531 [2024-12-05 12:13:44.486764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.531 [2024-12-05 12:13:44.486772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:112528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:10.532 [2024-12-05 12:13:44.486778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.532 [2024-12-05 12:13:44.486787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:112536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:10.532 [2024-12-05 12:13:44.486794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.532 [2024-12-05 12:13:44.486803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:112544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:10.532 [2024-12-05 12:13:44.486809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.532 [2024-12-05 12:13:44.486818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:111568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:10.532 [2024-12-05 12:13:44.486824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.532 [2024-12-05 12:13:44.486833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:111576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:10.532 [2024-12-05 12:13:44.486840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.532 [2024-12-05 12:13:44.486848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:111584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:10.532 [2024-12-05 12:13:44.486855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.532 [2024-12-05 12:13:44.486862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:111592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:10.532 [2024-12-05 12:13:44.486870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.532 [2024-12-05 12:13:44.486878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:111600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:10.532 [2024-12-05 12:13:44.486884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.532 [2024-12-05 12:13:44.486892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:111608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:10.532 [2024-12-05 12:13:44.486899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.532 [2024-12-05 12:13:44.486908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:111616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:10.532 [2024-12-05 12:13:44.486914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.532 [2024-12-05 12:13:44.486922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:111624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:10.532 [2024-12-05 12:13:44.486928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.532 [2024-12-05 12:13:44.486936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:111632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:10.532 [2024-12-05 12:13:44.486943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.532 [2024-12-05 12:13:44.486952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:111640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:10.532 [2024-12-05 12:13:44.486959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.532 [2024-12-05 12:13:44.486966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:111648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:10.532 [2024-12-05 12:13:44.486973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.532 [2024-12-05 12:13:44.486982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:111656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:10.532 [2024-12-05 12:13:44.486988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.532 [2024-12-05 12:13:44.486998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:111664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:10.532 [2024-12-05 12:13:44.487005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.532 [2024-12-05 12:13:44.487013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:111672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:10.532 [2024-12-05 12:13:44.487020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.532 [2024-12-05 12:13:44.487027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf6c0 is same with the state(6) to be set
00:30:10.532 [2024-12-05 12:13:44.487036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:10.532 [2024-12-05 12:13:44.487041] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:10.532 [2024-12-05 12:13:44.487049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111680 len:8 PRP1 0x0 PRP2 0x0
00:30:10.532 [2024-12-05 12:13:44.487058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:10.532 [2024-12-05 12:13:44.489969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.532 [2024-12-05 12:13:44.490026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:10.532 [2024-12-05 12:13:44.490558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.532 [2024-12-05 12:13:44.490574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:10.532 [2024-12-05 12:13:44.490583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:10.532 [2024-12-05 12:13:44.490757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:10.532 [2024-12-05 12:13:44.490931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.532 [2024-12-05 12:13:44.490941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.532 [2024-12-05 12:13:44.490949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.532 [2024-12-05 12:13:44.490957] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.532 [2024-12-05 12:13:44.503205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.532 [2024-12-05 12:13:44.503496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.532 [2024-12-05 12:13:44.503514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:10.532 [2024-12-05 12:13:44.503523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:10.532 [2024-12-05 12:13:44.503692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:10.532 [2024-12-05 12:13:44.503861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.532 [2024-12-05 12:13:44.503871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.532 [2024-12-05 12:13:44.503879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.532 [2024-12-05 12:13:44.503887] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.532 [2024-12-05 12:13:44.516034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.532 [2024-12-05 12:13:44.516498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.532 [2024-12-05 12:13:44.516517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:10.532 [2024-12-05 12:13:44.516525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:10.532 [2024-12-05 12:13:44.516686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:10.532 [2024-12-05 12:13:44.516846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.532 [2024-12-05 12:13:44.516856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.532 [2024-12-05 12:13:44.516863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.532 [2024-12-05 12:13:44.516870] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.532 [2024-12-05 12:13:44.528955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.532 [2024-12-05 12:13:44.529378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.532 [2024-12-05 12:13:44.529395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:10.532 [2024-12-05 12:13:44.529403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:10.532 [2024-12-05 12:13:44.529563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:10.532 [2024-12-05 12:13:44.529724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.532 [2024-12-05 12:13:44.529733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.532 [2024-12-05 12:13:44.529739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.532 [2024-12-05 12:13:44.529746] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.532 [2024-12-05 12:13:44.541807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.532 [2024-12-05 12:13:44.542201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.532 [2024-12-05 12:13:44.542244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:10.532 [2024-12-05 12:13:44.542270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:10.532 [2024-12-05 12:13:44.542822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:10.532 [2024-12-05 12:13:44.542985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.532 [2024-12-05 12:13:44.542995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.532 [2024-12-05 12:13:44.543001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.532 [2024-12-05 12:13:44.543007] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.532 [2024-12-05 12:13:44.554641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.532 [2024-12-05 12:13:44.555050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.533 [2024-12-05 12:13:44.555067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:10.533 [2024-12-05 12:13:44.555074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:10.533 [2024-12-05 12:13:44.555235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:10.533 [2024-12-05 12:13:44.555418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.533 [2024-12-05 12:13:44.555429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.533 [2024-12-05 12:13:44.555436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.533 [2024-12-05 12:13:44.555443] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.533 [2024-12-05 12:13:44.567502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.533 [2024-12-05 12:13:44.567923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.533 [2024-12-05 12:13:44.567940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:10.533 [2024-12-05 12:13:44.567950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:10.533 [2024-12-05 12:13:44.568111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:10.533 [2024-12-05 12:13:44.568271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.533 [2024-12-05 12:13:44.568280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.533 [2024-12-05 12:13:44.568287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.533 [2024-12-05 12:13:44.568294] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.533 [2024-12-05 12:13:44.580305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.533 [2024-12-05 12:13:44.580707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.533 [2024-12-05 12:13:44.580725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:10.533 [2024-12-05 12:13:44.580732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:10.533 [2024-12-05 12:13:44.580891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:10.533 [2024-12-05 12:13:44.581051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.533 [2024-12-05 12:13:44.581060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.533 [2024-12-05 12:13:44.581066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.533 [2024-12-05 12:13:44.581073] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.533 [2024-12-05 12:13:44.593136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.533 [2024-12-05 12:13:44.593553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.533 [2024-12-05 12:13:44.593601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:10.533 [2024-12-05 12:13:44.593634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:10.533 [2024-12-05 12:13:44.593795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:10.533 [2024-12-05 12:13:44.593955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.533 [2024-12-05 12:13:44.593965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.533 [2024-12-05 12:13:44.593971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.533 [2024-12-05 12:13:44.593977] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.533 [2024-12-05 12:13:44.605931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.533 [2024-12-05 12:13:44.606270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.533 [2024-12-05 12:13:44.606287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:10.533 [2024-12-05 12:13:44.606295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:10.533 [2024-12-05 12:13:44.606481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:10.533 [2024-12-05 12:13:44.606654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.533 [2024-12-05 12:13:44.606664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.533 [2024-12-05 12:13:44.606671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.533 [2024-12-05 12:13:44.606677] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.533 [2024-12-05 12:13:44.618709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.533 [2024-12-05 12:13:44.619056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.533 [2024-12-05 12:13:44.619100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:10.533 [2024-12-05 12:13:44.619124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:10.533 [2024-12-05 12:13:44.619722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:10.533 [2024-12-05 12:13:44.620177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.533 [2024-12-05 12:13:44.620187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.533 [2024-12-05 12:13:44.620194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.533 [2024-12-05 12:13:44.620202] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.533 [2024-12-05 12:13:44.631556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.533 [2024-12-05 12:13:44.631914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.533 [2024-12-05 12:13:44.631959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:10.533 [2024-12-05 12:13:44.631983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:10.533 [2024-12-05 12:13:44.632585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:10.533 [2024-12-05 12:13:44.633074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.533 [2024-12-05 12:13:44.633084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.533 [2024-12-05 12:13:44.633091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.533 [2024-12-05 12:13:44.633098] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.533 [2024-12-05 12:13:44.644385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.533 [2024-12-05 12:13:44.644802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.533 [2024-12-05 12:13:44.644819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:10.533 [2024-12-05 12:13:44.644826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:10.533 [2024-12-05 12:13:44.644986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:10.533 [2024-12-05 12:13:44.645147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.533 [2024-12-05 12:13:44.645156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.533 [2024-12-05 12:13:44.645162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.533 [2024-12-05 12:13:44.645172] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.533 [2024-12-05 12:13:44.657178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.533 [2024-12-05 12:13:44.657587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.533 [2024-12-05 12:13:44.657604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:10.533 [2024-12-05 12:13:44.657613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:10.533 [2024-12-05 12:13:44.657772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:10.534 [2024-12-05 12:13:44.657932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.534 [2024-12-05 12:13:44.657941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.534 [2024-12-05 12:13:44.657948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.534 [2024-12-05 12:13:44.657955] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.534 [2024-12-05 12:13:44.669960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.534 [2024-12-05 12:13:44.670373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.534 [2024-12-05 12:13:44.670390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:10.534 [2024-12-05 12:13:44.670398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:10.534 [2024-12-05 12:13:44.670558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:10.534 [2024-12-05 12:13:44.670718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.534 [2024-12-05 12:13:44.670727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.534 [2024-12-05 12:13:44.670733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.534 [2024-12-05 12:13:44.670740] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.534 [2024-12-05 12:13:44.682812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.534 [2024-12-05 12:13:44.683223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.534 [2024-12-05 12:13:44.683240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:10.534 [2024-12-05 12:13:44.683247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:10.534 [2024-12-05 12:13:44.683429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:10.534 [2024-12-05 12:13:44.683598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.534 [2024-12-05 12:13:44.683608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.534 [2024-12-05 12:13:44.683615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.534 [2024-12-05 12:13:44.683621] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.534 [2024-12-05 12:13:44.695651] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.534 [2024-12-05 12:13:44.696072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.534 [2024-12-05 12:13:44.696122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:10.534 [2024-12-05 12:13:44.696146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:10.534 [2024-12-05 12:13:44.696743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:10.534 [2024-12-05 12:13:44.697314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.534 [2024-12-05 12:13:44.697323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.534 [2024-12-05 12:13:44.697330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.534 [2024-12-05 12:13:44.697336] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.534 [2024-12-05 12:13:44.708462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.534 [2024-12-05 12:13:44.708794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.534 [2024-12-05 12:13:44.708811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:10.534 [2024-12-05 12:13:44.708818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:10.534 [2024-12-05 12:13:44.708979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:10.534 [2024-12-05 12:13:44.709139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.534 [2024-12-05 12:13:44.709148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.534 [2024-12-05 12:13:44.709155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.534 [2024-12-05 12:13:44.709161] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.795 [2024-12-05 12:13:44.721568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.795 [2024-12-05 12:13:44.721995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.795 [2024-12-05 12:13:44.722013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:10.795 [2024-12-05 12:13:44.722021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:10.795 [2024-12-05 12:13:44.722190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:10.795 [2024-12-05 12:13:44.722359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.795 [2024-12-05 12:13:44.722375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.795 [2024-12-05 12:13:44.722383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.795 [2024-12-05 12:13:44.722389] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.795 [2024-12-05 12:13:44.734504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.795 [2024-12-05 12:13:44.734933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.795 [2024-12-05 12:13:44.734950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:10.795 [2024-12-05 12:13:44.734972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:10.795 [2024-12-05 12:13:44.735142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:10.795 [2024-12-05 12:13:44.735331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.795 [2024-12-05 12:13:44.735341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.795 [2024-12-05 12:13:44.735359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.795 [2024-12-05 12:13:44.735374] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.795 [2024-12-05 12:13:44.747539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.795 [2024-12-05 12:13:44.747904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.795 [2024-12-05 12:13:44.747925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:10.795 [2024-12-05 12:13:44.747933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:10.795 [2024-12-05 12:13:44.748103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:10.795 [2024-12-05 12:13:44.748273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.795 [2024-12-05 12:13:44.748282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.795 [2024-12-05 12:13:44.748290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.795 [2024-12-05 12:13:44.748297] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.795 [2024-12-05 12:13:44.760579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.795 [2024-12-05 12:13:44.761009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.795 [2024-12-05 12:13:44.761055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:10.795 [2024-12-05 12:13:44.761081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:10.795 [2024-12-05 12:13:44.761337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:10.795 [2024-12-05 12:13:44.761520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.795 [2024-12-05 12:13:44.761532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.795 [2024-12-05 12:13:44.761539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.795 [2024-12-05 12:13:44.761547] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.795 [2024-12-05 12:13:44.773581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.795 [2024-12-05 12:13:44.774003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.795 [2024-12-05 12:13:44.774021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:10.795 [2024-12-05 12:13:44.774028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:10.795 [2024-12-05 12:13:44.774197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:10.795 [2024-12-05 12:13:44.774378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.795 [2024-12-05 12:13:44.774389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.795 [2024-12-05 12:13:44.774396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.795 [2024-12-05 12:13:44.774420] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.795 [2024-12-05 12:13:44.786339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.795 [2024-12-05 12:13:44.786763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.795 [2024-12-05 12:13:44.786781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:10.795 [2024-12-05 12:13:44.786788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:10.795 [2024-12-05 12:13:44.786948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:10.795 [2024-12-05 12:13:44.787109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.795 [2024-12-05 12:13:44.787118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.795 [2024-12-05 12:13:44.787125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.795 [2024-12-05 12:13:44.787131] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.795 [2024-12-05 12:13:44.799219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.795 [2024-12-05 12:13:44.799639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.795 [2024-12-05 12:13:44.799685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:10.795 [2024-12-05 12:13:44.799709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:10.795 [2024-12-05 12:13:44.800180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:10.795 [2024-12-05 12:13:44.800394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.795 [2024-12-05 12:13:44.800414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.795 [2024-12-05 12:13:44.800430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.795 [2024-12-05 12:13:44.800444] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.795 [2024-12-05 12:13:44.814035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.795 [2024-12-05 12:13:44.814563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.795 [2024-12-05 12:13:44.814609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:10.795 [2024-12-05 12:13:44.814634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:10.795 [2024-12-05 12:13:44.815219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:10.795 [2024-12-05 12:13:44.815631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.796 [2024-12-05 12:13:44.815645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.796 [2024-12-05 12:13:44.815655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.796 [2024-12-05 12:13:44.815669] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.796 [2024-12-05 12:13:44.826985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.796 [2024-12-05 12:13:44.827265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.796 [2024-12-05 12:13:44.827283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:10.796 [2024-12-05 12:13:44.827290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:10.796 [2024-12-05 12:13:44.827463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:10.796 [2024-12-05 12:13:44.827632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.796 [2024-12-05 12:13:44.827642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.796 [2024-12-05 12:13:44.827649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.796 [2024-12-05 12:13:44.827656] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.796 [2024-12-05 12:13:44.839902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.796 [2024-12-05 12:13:44.840325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.796 [2024-12-05 12:13:44.840344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:10.796 [2024-12-05 12:13:44.840352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:10.796 [2024-12-05 12:13:44.840525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:10.796 [2024-12-05 12:13:44.840702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.796 [2024-12-05 12:13:44.840711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.796 [2024-12-05 12:13:44.840718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.796 [2024-12-05 12:13:44.840724] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.796 10108.33 IOPS, 39.49 MiB/s [2024-12-05T11:13:44.992Z] [2024-12-05 12:13:44.852795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.796 [2024-12-05 12:13:44.853124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.796 [2024-12-05 12:13:44.853141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:10.796 [2024-12-05 12:13:44.853149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:10.796 [2024-12-05 12:13:44.853309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:10.796 [2024-12-05 12:13:44.853474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.796 [2024-12-05 12:13:44.853484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.796 [2024-12-05 12:13:44.853491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.796 [2024-12-05 12:13:44.853498] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.796 [2024-12-05 12:13:44.865806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.796 [2024-12-05 12:13:44.866235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.796 [2024-12-05 12:13:44.866278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:10.796 [2024-12-05 12:13:44.866302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:10.796 [2024-12-05 12:13:44.866899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:10.796 [2024-12-05 12:13:44.867496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.796 [2024-12-05 12:13:44.867522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.796 [2024-12-05 12:13:44.867545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.796 [2024-12-05 12:13:44.867575] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.796 [2024-12-05 12:13:44.878721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.796 [2024-12-05 12:13:44.879105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.796 [2024-12-05 12:13:44.879122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:10.796 [2024-12-05 12:13:44.879130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:10.796 [2024-12-05 12:13:44.879291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:10.796 [2024-12-05 12:13:44.879458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.796 [2024-12-05 12:13:44.879467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.796 [2024-12-05 12:13:44.879474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.796 [2024-12-05 12:13:44.879481] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.796 [2024-12-05 12:13:44.891629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.796 [2024-12-05 12:13:44.892041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.796 [2024-12-05 12:13:44.892059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:10.796 [2024-12-05 12:13:44.892066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:10.796 [2024-12-05 12:13:44.892226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:10.796 [2024-12-05 12:13:44.892393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.796 [2024-12-05 12:13:44.892419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.796 [2024-12-05 12:13:44.892426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.796 [2024-12-05 12:13:44.892432] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.796 [2024-12-05 12:13:44.904435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.796 [2024-12-05 12:13:44.904845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.796 [2024-12-05 12:13:44.904883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:10.796 [2024-12-05 12:13:44.904916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:10.796 [2024-12-05 12:13:44.905488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:10.796 [2024-12-05 12:13:44.905659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.796 [2024-12-05 12:13:44.905669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.796 [2024-12-05 12:13:44.905676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.796 [2024-12-05 12:13:44.905683] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.796 [2024-12-05 12:13:44.917224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.796 [2024-12-05 12:13:44.917639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.796 [2024-12-05 12:13:44.917677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:10.796 [2024-12-05 12:13:44.917704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:10.796 [2024-12-05 12:13:44.918288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:10.796 [2024-12-05 12:13:44.918512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.796 [2024-12-05 12:13:44.918530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.796 [2024-12-05 12:13:44.918537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.796 [2024-12-05 12:13:44.918544] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.796 [2024-12-05 12:13:44.930057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.796 [2024-12-05 12:13:44.930480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.796 [2024-12-05 12:13:44.930526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:10.796 [2024-12-05 12:13:44.930551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:10.796 [2024-12-05 12:13:44.931137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:10.796 [2024-12-05 12:13:44.931738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.796 [2024-12-05 12:13:44.931776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.796 [2024-12-05 12:13:44.931792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.796 [2024-12-05 12:13:44.931807] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.796 [2024-12-05 12:13:44.945215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.796 [2024-12-05 12:13:44.945633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.796 [2024-12-05 12:13:44.945656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:10.796 [2024-12-05 12:13:44.945667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:10.796 [2024-12-05 12:13:44.945923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:10.797 [2024-12-05 12:13:44.946188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.797 [2024-12-05 12:13:44.946201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.797 [2024-12-05 12:13:44.946211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.797 [2024-12-05 12:13:44.946221] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.797 [2024-12-05 12:13:44.958189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.797 [2024-12-05 12:13:44.958632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.797 [2024-12-05 12:13:44.958679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:10.797 [2024-12-05 12:13:44.958703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:10.797 [2024-12-05 12:13:44.959289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:10.797 [2024-12-05 12:13:44.959807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.797 [2024-12-05 12:13:44.959817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.797 [2024-12-05 12:13:44.959824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.797 [2024-12-05 12:13:44.959830] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.797 [2024-12-05 12:13:44.971035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.797 [2024-12-05 12:13:44.971422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.797 [2024-12-05 12:13:44.971439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:10.797 [2024-12-05 12:13:44.971446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:10.797 [2024-12-05 12:13:44.971606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:10.797 [2024-12-05 12:13:44.971766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.797 [2024-12-05 12:13:44.971775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.797 [2024-12-05 12:13:44.971783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.797 [2024-12-05 12:13:44.971789] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.797 [2024-12-05 12:13:44.983860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.797 [2024-12-05 12:13:44.984274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.797 [2024-12-05 12:13:44.984291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:10.797 [2024-12-05 12:13:44.984299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:10.797 [2024-12-05 12:13:44.984484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:10.797 [2024-12-05 12:13:44.984654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.797 [2024-12-05 12:13:44.984664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.797 [2024-12-05 12:13:44.984674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.797 [2024-12-05 12:13:44.984681] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.062 [2024-12-05 12:13:44.996842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.062 [2024-12-05 12:13:44.997268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.062 [2024-12-05 12:13:44.997285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.062 [2024-12-05 12:13:44.997293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.062 [2024-12-05 12:13:44.997474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.062 [2024-12-05 12:13:44.997648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.062 [2024-12-05 12:13:44.997659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.062 [2024-12-05 12:13:44.997669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.062 [2024-12-05 12:13:44.997676] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.062 [2024-12-05 12:13:45.009915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.062 [2024-12-05 12:13:45.010335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.062 [2024-12-05 12:13:45.010389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.062 [2024-12-05 12:13:45.010416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.062 [2024-12-05 12:13:45.010935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.062 [2024-12-05 12:13:45.011106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.062 [2024-12-05 12:13:45.011114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.062 [2024-12-05 12:13:45.011121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.062 [2024-12-05 12:13:45.011127] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.062 [2024-12-05 12:13:45.022846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.062 [2024-12-05 12:13:45.023262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.062 [2024-12-05 12:13:45.023279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.062 [2024-12-05 12:13:45.023287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.062 [2024-12-05 12:13:45.023464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.062 [2024-12-05 12:13:45.023634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.062 [2024-12-05 12:13:45.023643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.062 [2024-12-05 12:13:45.023650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.062 [2024-12-05 12:13:45.023657] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.062 [2024-12-05 12:13:45.035611] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.062 [2024-12-05 12:13:45.036042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.062 [2024-12-05 12:13:45.036087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.062 [2024-12-05 12:13:45.036112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.062 [2024-12-05 12:13:45.036711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.062 [2024-12-05 12:13:45.037105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.062 [2024-12-05 12:13:45.037115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.062 [2024-12-05 12:13:45.037122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.062 [2024-12-05 12:13:45.037129] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.062 [2024-12-05 12:13:45.048437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.062 [2024-12-05 12:13:45.048862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.062 [2024-12-05 12:13:45.048906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.062 [2024-12-05 12:13:45.048930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.062 [2024-12-05 12:13:45.049529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.062 [2024-12-05 12:13:45.050107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.062 [2024-12-05 12:13:45.050116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.062 [2024-12-05 12:13:45.050123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.062 [2024-12-05 12:13:45.050129] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.063 [2024-12-05 12:13:45.061207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.063 [2024-12-05 12:13:45.061615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.063 [2024-12-05 12:13:45.061633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.063 [2024-12-05 12:13:45.061641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.063 [2024-12-05 12:13:45.061801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.063 [2024-12-05 12:13:45.061962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.063 [2024-12-05 12:13:45.061971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.063 [2024-12-05 12:13:45.061977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.063 [2024-12-05 12:13:45.061984] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.063 [2024-12-05 12:13:45.074053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.063 [2024-12-05 12:13:45.074466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.063 [2024-12-05 12:13:45.074507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.063 [2024-12-05 12:13:45.074541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.063 [2024-12-05 12:13:45.075101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.063 [2024-12-05 12:13:45.075263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.063 [2024-12-05 12:13:45.075271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.063 [2024-12-05 12:13:45.075278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.063 [2024-12-05 12:13:45.075283] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.063 [2024-12-05 12:13:45.086918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.063 [2024-12-05 12:13:45.087331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.063 [2024-12-05 12:13:45.087349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.063 [2024-12-05 12:13:45.087356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.063 [2024-12-05 12:13:45.087544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.063 [2024-12-05 12:13:45.087715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.063 [2024-12-05 12:13:45.087726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.063 [2024-12-05 12:13:45.087732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.063 [2024-12-05 12:13:45.087739] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.063 [2024-12-05 12:13:45.099747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.063 [2024-12-05 12:13:45.100159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.063 [2024-12-05 12:13:45.100200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.063 [2024-12-05 12:13:45.100226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.063 [2024-12-05 12:13:45.100828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.063 [2024-12-05 12:13:45.101061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.063 [2024-12-05 12:13:45.101070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.063 [2024-12-05 12:13:45.101077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.063 [2024-12-05 12:13:45.101085] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.063 [2024-12-05 12:13:45.112597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.063 [2024-12-05 12:13:45.113010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.063 [2024-12-05 12:13:45.113027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.063 [2024-12-05 12:13:45.113035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.063 [2024-12-05 12:13:45.113194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.063 [2024-12-05 12:13:45.113357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.063 [2024-12-05 12:13:45.113372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.063 [2024-12-05 12:13:45.113380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.063 [2024-12-05 12:13:45.113388] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.063 [2024-12-05 12:13:45.125504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.063 [2024-12-05 12:13:45.125935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.063 [2024-12-05 12:13:45.125979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.063 [2024-12-05 12:13:45.126004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.063 [2024-12-05 12:13:45.126603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.063 [2024-12-05 12:13:45.127183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.063 [2024-12-05 12:13:45.127192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.063 [2024-12-05 12:13:45.127199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.063 [2024-12-05 12:13:45.127205] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.063 [2024-12-05 12:13:45.138359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.063 [2024-12-05 12:13:45.138764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.063 [2024-12-05 12:13:45.138809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.063 [2024-12-05 12:13:45.138835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.063 [2024-12-05 12:13:45.139255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.063 [2024-12-05 12:13:45.139421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.063 [2024-12-05 12:13:45.139429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.063 [2024-12-05 12:13:45.139436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.063 [2024-12-05 12:13:45.139442] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.063 [2024-12-05 12:13:45.151243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.063 [2024-12-05 12:13:45.151656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.063 [2024-12-05 12:13:45.151673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.063 [2024-12-05 12:13:45.151681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.063 [2024-12-05 12:13:45.151841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.063 [2024-12-05 12:13:45.152001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.063 [2024-12-05 12:13:45.152011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.063 [2024-12-05 12:13:45.152021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.063 [2024-12-05 12:13:45.152029] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.063 [2024-12-05 12:13:45.164111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.063 [2024-12-05 12:13:45.164524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.063 [2024-12-05 12:13:45.164542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.063 [2024-12-05 12:13:45.164550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.063 [2024-12-05 12:13:45.164710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.063 [2024-12-05 12:13:45.164871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.063 [2024-12-05 12:13:45.164881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.063 [2024-12-05 12:13:45.164887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.063 [2024-12-05 12:13:45.164893] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.063 [2024-12-05 12:13:45.176916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.063 [2024-12-05 12:13:45.177331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.063 [2024-12-05 12:13:45.177387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.063 [2024-12-05 12:13:45.177413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.063 [2024-12-05 12:13:45.177998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.063 [2024-12-05 12:13:45.178412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.063 [2024-12-05 12:13:45.178423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.063 [2024-12-05 12:13:45.178430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.063 [2024-12-05 12:13:45.178437] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.063 [2024-12-05 12:13:45.189786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.063 [2024-12-05 12:13:45.190143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.063 [2024-12-05 12:13:45.190160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.063 [2024-12-05 12:13:45.190168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.063 [2024-12-05 12:13:45.190337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.063 [2024-12-05 12:13:45.190513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.063 [2024-12-05 12:13:45.190523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.063 [2024-12-05 12:13:45.190530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.063 [2024-12-05 12:13:45.190537] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.064 [2024-12-05 12:13:45.202632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.064 [2024-12-05 12:13:45.203050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.064 [2024-12-05 12:13:45.203066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.064 [2024-12-05 12:13:45.203074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.064 [2024-12-05 12:13:45.203233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.064 [2024-12-05 12:13:45.203399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.064 [2024-12-05 12:13:45.203410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.064 [2024-12-05 12:13:45.203417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.064 [2024-12-05 12:13:45.203424] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.064 [2024-12-05 12:13:45.215396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.064 [2024-12-05 12:13:45.215803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.064 [2024-12-05 12:13:45.215819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.064 [2024-12-05 12:13:45.215827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.064 [2024-12-05 12:13:45.215987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.064 [2024-12-05 12:13:45.216147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.064 [2024-12-05 12:13:45.216156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.064 [2024-12-05 12:13:45.216162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.064 [2024-12-05 12:13:45.216169] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.064 [2024-12-05 12:13:45.228149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.064 [2024-12-05 12:13:45.228560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.064 [2024-12-05 12:13:45.228601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.064 [2024-12-05 12:13:45.228628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.064 [2024-12-05 12:13:45.229175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.064 [2024-12-05 12:13:45.229336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.064 [2024-12-05 12:13:45.229346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.064 [2024-12-05 12:13:45.229352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.064 [2024-12-05 12:13:45.229359] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.064 [2024-12-05 12:13:45.240996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.064 [2024-12-05 12:13:45.241358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.064 [2024-12-05 12:13:45.241380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.064 [2024-12-05 12:13:45.241390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.064 [2024-12-05 12:13:45.241552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.064 [2024-12-05 12:13:45.241712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.064 [2024-12-05 12:13:45.241721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.064 [2024-12-05 12:13:45.241728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.064 [2024-12-05 12:13:45.241734] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.064 [2024-12-05 12:13:45.254008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.064 [2024-12-05 12:13:45.254445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.064 [2024-12-05 12:13:45.254464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.064 [2024-12-05 12:13:45.254471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.064 [2024-12-05 12:13:45.254646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.064 [2024-12-05 12:13:45.254820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.064 [2024-12-05 12:13:45.254830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.064 [2024-12-05 12:13:45.254837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.064 [2024-12-05 12:13:45.254845] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.328 [2024-12-05 12:13:45.267101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.328 [2024-12-05 12:13:45.267504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.328 [2024-12-05 12:13:45.267523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.328 [2024-12-05 12:13:45.267543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.328 [2024-12-05 12:13:45.267713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.328 [2024-12-05 12:13:45.267882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.328 [2024-12-05 12:13:45.267892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.328 [2024-12-05 12:13:45.267899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.328 [2024-12-05 12:13:45.267906] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.328 [2024-12-05 12:13:45.280040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.328 [2024-12-05 12:13:45.280492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.328 [2024-12-05 12:13:45.280510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.328 [2024-12-05 12:13:45.280517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.328 [2024-12-05 12:13:45.280689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.328 [2024-12-05 12:13:45.280852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.328 [2024-12-05 12:13:45.280863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.328 [2024-12-05 12:13:45.280871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.328 [2024-12-05 12:13:45.280877] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.328 [2024-12-05 12:13:45.292980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.328 [2024-12-05 12:13:45.293404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.328 [2024-12-05 12:13:45.293450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.328 [2024-12-05 12:13:45.293475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.328 [2024-12-05 12:13:45.293874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.328 [2024-12-05 12:13:45.294036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.328 [2024-12-05 12:13:45.294046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.328 [2024-12-05 12:13:45.294052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.328 [2024-12-05 12:13:45.294059] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.328 [2024-12-05 12:13:45.305887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.328 [2024-12-05 12:13:45.306303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.329 [2024-12-05 12:13:45.306320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.329 [2024-12-05 12:13:45.306328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.329 [2024-12-05 12:13:45.306507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.329 [2024-12-05 12:13:45.306677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.329 [2024-12-05 12:13:45.306687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.329 [2024-12-05 12:13:45.306694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.329 [2024-12-05 12:13:45.306701] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.329 [2024-12-05 12:13:45.318830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.329 [2024-12-05 12:13:45.319244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.329 [2024-12-05 12:13:45.319262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.329 [2024-12-05 12:13:45.319270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.329 [2024-12-05 12:13:45.319457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.329 [2024-12-05 12:13:45.319627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.329 [2024-12-05 12:13:45.319637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.329 [2024-12-05 12:13:45.319648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.329 [2024-12-05 12:13:45.319656] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.329 [2024-12-05 12:13:45.331742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.329 [2024-12-05 12:13:45.332127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.329 [2024-12-05 12:13:45.332146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.329 [2024-12-05 12:13:45.332153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.329 [2024-12-05 12:13:45.332315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.329 [2024-12-05 12:13:45.332505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.329 [2024-12-05 12:13:45.332516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.329 [2024-12-05 12:13:45.332523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.329 [2024-12-05 12:13:45.332530] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.329 [2024-12-05 12:13:45.344584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.329 [2024-12-05 12:13:45.345009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.329 [2024-12-05 12:13:45.345055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.329 [2024-12-05 12:13:45.345080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.329 [2024-12-05 12:13:45.345690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.329 [2024-12-05 12:13:45.346075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.329 [2024-12-05 12:13:45.346095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.329 [2024-12-05 12:13:45.346110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.329 [2024-12-05 12:13:45.346124] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.329 [2024-12-05 12:13:45.359727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.329 [2024-12-05 12:13:45.360252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.329 [2024-12-05 12:13:45.360275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.329 [2024-12-05 12:13:45.360287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.329 [2024-12-05 12:13:45.360556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.329 [2024-12-05 12:13:45.360819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.329 [2024-12-05 12:13:45.360833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.329 [2024-12-05 12:13:45.360843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.329 [2024-12-05 12:13:45.360853] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.329 [2024-12-05 12:13:45.372754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.329 [2024-12-05 12:13:45.373172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.329 [2024-12-05 12:13:45.373189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.329 [2024-12-05 12:13:45.373197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.329 [2024-12-05 12:13:45.373375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.329 [2024-12-05 12:13:45.373546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.329 [2024-12-05 12:13:45.373556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.329 [2024-12-05 12:13:45.373563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.329 [2024-12-05 12:13:45.373570] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.329 [2024-12-05 12:13:45.385673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.329 [2024-12-05 12:13:45.386131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.329 [2024-12-05 12:13:45.386176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.329 [2024-12-05 12:13:45.386200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.329 [2024-12-05 12:13:45.386629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.329 [2024-12-05 12:13:45.386792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.329 [2024-12-05 12:13:45.386801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.329 [2024-12-05 12:13:45.386808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.329 [2024-12-05 12:13:45.386814] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.329 [2024-12-05 12:13:45.398548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.329 [2024-12-05 12:13:45.398876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.329 [2024-12-05 12:13:45.398921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.329 [2024-12-05 12:13:45.398945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.329 [2024-12-05 12:13:45.399542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.329 [2024-12-05 12:13:45.399904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.329 [2024-12-05 12:13:45.399913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.329 [2024-12-05 12:13:45.399920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.329 [2024-12-05 12:13:45.399926] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.329 [2024-12-05 12:13:45.411380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.329 [2024-12-05 12:13:45.411813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.329 [2024-12-05 12:13:45.411830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.329 [2024-12-05 12:13:45.411841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.329 [2024-12-05 12:13:45.412002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.329 [2024-12-05 12:13:45.412162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.329 [2024-12-05 12:13:45.412171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.329 [2024-12-05 12:13:45.412178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.329 [2024-12-05 12:13:45.412184] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.329 [2024-12-05 12:13:45.424230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.329 [2024-12-05 12:13:45.424629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.329 [2024-12-05 12:13:45.424647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.329 [2024-12-05 12:13:45.424654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.329 [2024-12-05 12:13:45.424814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.329 [2024-12-05 12:13:45.424974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.329 [2024-12-05 12:13:45.424984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.329 [2024-12-05 12:13:45.424990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.329 [2024-12-05 12:13:45.424996] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.329 [2024-12-05 12:13:45.437038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.329 [2024-12-05 12:13:45.437383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.329 [2024-12-05 12:13:45.437401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.330 [2024-12-05 12:13:45.437409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.330 [2024-12-05 12:13:45.437569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.330 [2024-12-05 12:13:45.437730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.330 [2024-12-05 12:13:45.437739] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.330 [2024-12-05 12:13:45.437745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.330 [2024-12-05 12:13:45.437751] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.330 [2024-12-05 12:13:45.449960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.330 [2024-12-05 12:13:45.450373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.330 [2024-12-05 12:13:45.450428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.330 [2024-12-05 12:13:45.450453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.330 [2024-12-05 12:13:45.450988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.330 [2024-12-05 12:13:45.451152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.330 [2024-12-05 12:13:45.451163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.330 [2024-12-05 12:13:45.451169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.330 [2024-12-05 12:13:45.451175] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.330 [2024-12-05 12:13:45.462817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.330 [2024-12-05 12:13:45.463224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.330 [2024-12-05 12:13:45.463241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.330 [2024-12-05 12:13:45.463248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.330 [2024-12-05 12:13:45.463414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.330 [2024-12-05 12:13:45.463575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.330 [2024-12-05 12:13:45.463584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.330 [2024-12-05 12:13:45.463591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.330 [2024-12-05 12:13:45.463597] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.330 [2024-12-05 12:13:45.475569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.330 [2024-12-05 12:13:45.475981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.330 [2024-12-05 12:13:45.476021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.330 [2024-12-05 12:13:45.476048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.330 [2024-12-05 12:13:45.476647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.330 [2024-12-05 12:13:45.477235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.330 [2024-12-05 12:13:45.477266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.330 [2024-12-05 12:13:45.477273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.330 [2024-12-05 12:13:45.477280] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.330 [2024-12-05 12:13:45.490871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.330 [2024-12-05 12:13:45.491387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.330 [2024-12-05 12:13:45.491408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.330 [2024-12-05 12:13:45.491419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.330 [2024-12-05 12:13:45.491675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.330 [2024-12-05 12:13:45.491932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.330 [2024-12-05 12:13:45.491945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.330 [2024-12-05 12:13:45.491959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.330 [2024-12-05 12:13:45.491969] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.330 [2024-12-05 12:13:45.503830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.330 [2024-12-05 12:13:45.504265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.330 [2024-12-05 12:13:45.504309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.330 [2024-12-05 12:13:45.504332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.330 [2024-12-05 12:13:45.504821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.330 [2024-12-05 12:13:45.504997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.330 [2024-12-05 12:13:45.505007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.330 [2024-12-05 12:13:45.505014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.330 [2024-12-05 12:13:45.505022] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.330 [2024-12-05 12:13:45.516979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.330 [2024-12-05 12:13:45.517413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.330 [2024-12-05 12:13:45.517434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.330 [2024-12-05 12:13:45.517442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.330 [2024-12-05 12:13:45.517616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.330 [2024-12-05 12:13:45.517796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.330 [2024-12-05 12:13:45.517805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.330 [2024-12-05 12:13:45.517812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.330 [2024-12-05 12:13:45.517819] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.589 [2024-12-05 12:13:45.529996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.589 [2024-12-05 12:13:45.530447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.589 [2024-12-05 12:13:45.530468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.589 [2024-12-05 12:13:45.530478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.589 [2024-12-05 12:13:45.530654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.589 [2024-12-05 12:13:45.530829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.589 [2024-12-05 12:13:45.530841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.589 [2024-12-05 12:13:45.530848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.589 [2024-12-05 12:13:45.530855] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.590 [2024-12-05 12:13:45.543028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.590 [2024-12-05 12:13:45.543432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.590 [2024-12-05 12:13:45.543477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.590 [2024-12-05 12:13:45.543502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.590 [2024-12-05 12:13:45.544088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.590 [2024-12-05 12:13:45.544469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.590 [2024-12-05 12:13:45.544479] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.590 [2024-12-05 12:13:45.544487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.590 [2024-12-05 12:13:45.544493] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.590 [2024-12-05 12:13:45.555970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.590 [2024-12-05 12:13:45.556362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.590 [2024-12-05 12:13:45.556385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.590 [2024-12-05 12:13:45.556393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.590 [2024-12-05 12:13:45.556552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.590 [2024-12-05 12:13:45.556712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.590 [2024-12-05 12:13:45.556722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.590 [2024-12-05 12:13:45.556728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.590 [2024-12-05 12:13:45.556735] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.590 [2024-12-05 12:13:45.568974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.590 [2024-12-05 12:13:45.569407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.590 [2024-12-05 12:13:45.569425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.590 [2024-12-05 12:13:45.569433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.590 [2024-12-05 12:13:45.569608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.590 [2024-12-05 12:13:45.569782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.590 [2024-12-05 12:13:45.569792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.590 [2024-12-05 12:13:45.569799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.590 [2024-12-05 12:13:45.569806] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.590 [2024-12-05 12:13:45.581895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.590 [2024-12-05 12:13:45.582158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.590 [2024-12-05 12:13:45.582175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.590 [2024-12-05 12:13:45.582187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.590 [2024-12-05 12:13:45.582349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.590 [2024-12-05 12:13:45.582517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.590 [2024-12-05 12:13:45.582527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.590 [2024-12-05 12:13:45.582534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.590 [2024-12-05 12:13:45.582540] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.590 [2024-12-05 12:13:45.594829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.590 [2024-12-05 12:13:45.595147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.590 [2024-12-05 12:13:45.595165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.590 [2024-12-05 12:13:45.595172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.590 [2024-12-05 12:13:45.595331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.590 [2024-12-05 12:13:45.595498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.590 [2024-12-05 12:13:45.595507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.590 [2024-12-05 12:13:45.595514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.590 [2024-12-05 12:13:45.595520] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.590 [2024-12-05 12:13:45.607754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.590 [2024-12-05 12:13:45.608014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.590 [2024-12-05 12:13:45.608031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.590 [2024-12-05 12:13:45.608038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.590 [2024-12-05 12:13:45.608199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.590 [2024-12-05 12:13:45.608360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.590 [2024-12-05 12:13:45.608376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.590 [2024-12-05 12:13:45.608383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.590 [2024-12-05 12:13:45.608390] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.590 [2024-12-05 12:13:45.620693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.590 [2024-12-05 12:13:45.621084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.590 [2024-12-05 12:13:45.621101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.590 [2024-12-05 12:13:45.621109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.590 [2024-12-05 12:13:45.621270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.590 [2024-12-05 12:13:45.621440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.590 [2024-12-05 12:13:45.621450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.590 [2024-12-05 12:13:45.621457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.590 [2024-12-05 12:13:45.621463] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.590 [2024-12-05 12:13:45.633662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.590 [2024-12-05 12:13:45.634060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.590 [2024-12-05 12:13:45.634078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.590 [2024-12-05 12:13:45.634086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.590 [2024-12-05 12:13:45.634261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.590 [2024-12-05 12:13:45.634438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.590 [2024-12-05 12:13:45.634449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.590 [2024-12-05 12:13:45.634455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.590 [2024-12-05 12:13:45.634462] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.590 [2024-12-05 12:13:45.646602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.590 [2024-12-05 12:13:45.647012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.590 [2024-12-05 12:13:45.647029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.590 [2024-12-05 12:13:45.647038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.590 [2024-12-05 12:13:45.647198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.590 [2024-12-05 12:13:45.647358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.590 [2024-12-05 12:13:45.647374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.590 [2024-12-05 12:13:45.647382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.590 [2024-12-05 12:13:45.647389] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.590 [2024-12-05 12:13:45.659437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.590 [2024-12-05 12:13:45.659754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.590 [2024-12-05 12:13:45.659771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.590 [2024-12-05 12:13:45.659779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.590 [2024-12-05 12:13:45.659940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.590 [2024-12-05 12:13:45.660100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.590 [2024-12-05 12:13:45.660109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.591 [2024-12-05 12:13:45.660119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.591 [2024-12-05 12:13:45.660127] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.591 [2024-12-05 12:13:45.674561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.591 [2024-12-05 12:13:45.674950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.591 [2024-12-05 12:13:45.674971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.591 [2024-12-05 12:13:45.674981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.591 [2024-12-05 12:13:45.675194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.591 [2024-12-05 12:13:45.675427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.591 [2024-12-05 12:13:45.675438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.591 [2024-12-05 12:13:45.675445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.591 [2024-12-05 12:13:45.675452] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.591 [2024-12-05 12:13:45.687573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.591 [2024-12-05 12:13:45.687981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.591 [2024-12-05 12:13:45.687999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.591 [2024-12-05 12:13:45.688006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.591 [2024-12-05 12:13:45.688166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.591 [2024-12-05 12:13:45.688327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.591 [2024-12-05 12:13:45.688337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.591 [2024-12-05 12:13:45.688343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.591 [2024-12-05 12:13:45.688349] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.591 [2024-12-05 12:13:45.700500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.591 [2024-12-05 12:13:45.700777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.591 [2024-12-05 12:13:45.700794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.591 [2024-12-05 12:13:45.700802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.591 [2024-12-05 12:13:45.700962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.591 [2024-12-05 12:13:45.701123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.591 [2024-12-05 12:13:45.701133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.591 [2024-12-05 12:13:45.701139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.591 [2024-12-05 12:13:45.701146] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.591 [2024-12-05 12:13:45.713341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.591 [2024-12-05 12:13:45.713656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.591 [2024-12-05 12:13:45.713675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.591 [2024-12-05 12:13:45.713683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.591 [2024-12-05 12:13:45.713851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.591 [2024-12-05 12:13:45.714021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.591 [2024-12-05 12:13:45.714030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.591 [2024-12-05 12:13:45.714037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.591 [2024-12-05 12:13:45.714044] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.591 [2024-12-05 12:13:45.726276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.591 [2024-12-05 12:13:45.726565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.591 [2024-12-05 12:13:45.726582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.591 [2024-12-05 12:13:45.726590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.591 [2024-12-05 12:13:45.726760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.591 [2024-12-05 12:13:45.726929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.591 [2024-12-05 12:13:45.726939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.591 [2024-12-05 12:13:45.726945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.591 [2024-12-05 12:13:45.726952] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.591 [2024-12-05 12:13:45.739147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.591 [2024-12-05 12:13:45.739555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.591 [2024-12-05 12:13:45.739573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.591 [2024-12-05 12:13:45.739581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.591 [2024-12-05 12:13:45.739740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.591 [2024-12-05 12:13:45.739900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.591 [2024-12-05 12:13:45.739909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.591 [2024-12-05 12:13:45.739916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.591 [2024-12-05 12:13:45.739922] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.591 [2024-12-05 12:13:45.752065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.591 [2024-12-05 12:13:45.752375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.591 [2024-12-05 12:13:45.752409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.591 [2024-12-05 12:13:45.752421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.591 [2024-12-05 12:13:45.752590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.591 [2024-12-05 12:13:45.752761] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.591 [2024-12-05 12:13:45.752771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.591 [2024-12-05 12:13:45.752778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.591 [2024-12-05 12:13:45.752786] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.591 [2024-12-05 12:13:45.764934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.591 [2024-12-05 12:13:45.765219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.591 [2024-12-05 12:13:45.765235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.591 [2024-12-05 12:13:45.765243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.591 [2024-12-05 12:13:45.765419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.591 [2024-12-05 12:13:45.765588] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.591 [2024-12-05 12:13:45.765598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.591 [2024-12-05 12:13:45.765605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.591 [2024-12-05 12:13:45.765612] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.591 [2024-12-05 12:13:45.778018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.591 [2024-12-05 12:13:45.778374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.591 [2024-12-05 12:13:45.778391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.591 [2024-12-05 12:13:45.778399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.591 [2024-12-05 12:13:45.778568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.591 [2024-12-05 12:13:45.778737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.591 [2024-12-05 12:13:45.778747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.591 [2024-12-05 12:13:45.778754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.591 [2024-12-05 12:13:45.778761] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.851 [2024-12-05 12:13:45.791050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.851 [2024-12-05 12:13:45.791385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.851 [2024-12-05 12:13:45.791404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.851 [2024-12-05 12:13:45.791412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.851 [2024-12-05 12:13:45.791586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.851 [2024-12-05 12:13:45.791765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.851 [2024-12-05 12:13:45.791775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.851 [2024-12-05 12:13:45.791782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.851 [2024-12-05 12:13:45.791789] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.851 [2024-12-05 12:13:45.803989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.851 [2024-12-05 12:13:45.804334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.851 [2024-12-05 12:13:45.804388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.851 [2024-12-05 12:13:45.804415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.851 [2024-12-05 12:13:45.804955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.851 [2024-12-05 12:13:45.805116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.851 [2024-12-05 12:13:45.805126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.851 [2024-12-05 12:13:45.805133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.851 [2024-12-05 12:13:45.805139] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.851 [2024-12-05 12:13:45.816881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.851 [2024-12-05 12:13:45.817237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.851 [2024-12-05 12:13:45.817256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.851 [2024-12-05 12:13:45.817264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.851 [2024-12-05 12:13:45.817439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.851 [2024-12-05 12:13:45.817609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.851 [2024-12-05 12:13:45.817630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.851 [2024-12-05 12:13:45.817636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.851 [2024-12-05 12:13:45.817643] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.851 [2024-12-05 12:13:45.829692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.851 [2024-12-05 12:13:45.829964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.851 [2024-12-05 12:13:45.829981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.851 [2024-12-05 12:13:45.829988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.851 [2024-12-05 12:13:45.830149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.851 [2024-12-05 12:13:45.830309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.851 [2024-12-05 12:13:45.830319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.851 [2024-12-05 12:13:45.830329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.851 [2024-12-05 12:13:45.830337] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.851 [2024-12-05 12:13:45.842644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.851 [2024-12-05 12:13:45.842980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.851 [2024-12-05 12:13:45.842996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.851 [2024-12-05 12:13:45.843003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.851 [2024-12-05 12:13:45.843176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.851 [2024-12-05 12:13:45.843336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.851 [2024-12-05 12:13:45.843345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.851 [2024-12-05 12:13:45.843351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.851 [2024-12-05 12:13:45.843357] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.851 7581.25 IOPS, 29.61 MiB/s [2024-12-05T11:13:46.047Z] [2024-12-05 12:13:45.855642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.851 [2024-12-05 12:13:45.855926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.851 [2024-12-05 12:13:45.855943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.851 [2024-12-05 12:13:45.855950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.851 [2024-12-05 12:13:45.856125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.851 [2024-12-05 12:13:45.856298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.852 [2024-12-05 12:13:45.856307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.852 [2024-12-05 12:13:45.856314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.852 [2024-12-05 12:13:45.856321] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.852 [2024-12-05 12:13:45.868742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.852 [2024-12-05 12:13:45.869078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.852 [2024-12-05 12:13:45.869095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.852 [2024-12-05 12:13:45.869103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.852 [2024-12-05 12:13:45.869277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.852 [2024-12-05 12:13:45.869457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.852 [2024-12-05 12:13:45.869466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.852 [2024-12-05 12:13:45.869473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.852 [2024-12-05 12:13:45.869480] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.852 [2024-12-05 12:13:45.881725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.852 [2024-12-05 12:13:45.882154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.852 [2024-12-05 12:13:45.882197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.852 [2024-12-05 12:13:45.882220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.852 [2024-12-05 12:13:45.882830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.852 [2024-12-05 12:13:45.882999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.852 [2024-12-05 12:13:45.883007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.852 [2024-12-05 12:13:45.883014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.852 [2024-12-05 12:13:45.883020] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.852 [2024-12-05 12:13:45.894755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.852 [2024-12-05 12:13:45.895060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.852 [2024-12-05 12:13:45.895104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.852 [2024-12-05 12:13:45.895128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.852 [2024-12-05 12:13:45.895728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.852 [2024-12-05 12:13:45.896200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.852 [2024-12-05 12:13:45.896207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.852 [2024-12-05 12:13:45.896214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.852 [2024-12-05 12:13:45.896220] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.852 [2024-12-05 12:13:45.907653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.852 [2024-12-05 12:13:45.908023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.852 [2024-12-05 12:13:45.908039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.852 [2024-12-05 12:13:45.908046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.852 [2024-12-05 12:13:45.908215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.852 [2024-12-05 12:13:45.908387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.852 [2024-12-05 12:13:45.908396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.852 [2024-12-05 12:13:45.908403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.852 [2024-12-05 12:13:45.908409] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.852 [2024-12-05 12:13:45.920542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.852 [2024-12-05 12:13:45.920902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.852 [2024-12-05 12:13:45.920919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.852 [2024-12-05 12:13:45.920929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.852 [2024-12-05 12:13:45.921103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.852 [2024-12-05 12:13:45.921275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.852 [2024-12-05 12:13:45.921283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.852 [2024-12-05 12:13:45.921289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.852 [2024-12-05 12:13:45.921296] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.852 [2024-12-05 12:13:45.933499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.852 [2024-12-05 12:13:45.933919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.852 [2024-12-05 12:13:45.933962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.852 [2024-12-05 12:13:45.933985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.852 [2024-12-05 12:13:45.934435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.852 [2024-12-05 12:13:45.934605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.852 [2024-12-05 12:13:45.934613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.852 [2024-12-05 12:13:45.934620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.852 [2024-12-05 12:13:45.934626] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.852 [2024-12-05 12:13:45.946500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.852 [2024-12-05 12:13:45.946774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.852 [2024-12-05 12:13:45.946790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.852 [2024-12-05 12:13:45.946797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.852 [2024-12-05 12:13:45.946965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.852 [2024-12-05 12:13:45.947132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.852 [2024-12-05 12:13:45.947140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.852 [2024-12-05 12:13:45.947147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.852 [2024-12-05 12:13:45.947152] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.852 [2024-12-05 12:13:45.959260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.852 [2024-12-05 12:13:45.959715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.852 [2024-12-05 12:13:45.959758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.852 [2024-12-05 12:13:45.959781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.852 [2024-12-05 12:13:45.960222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.852 [2024-12-05 12:13:45.960400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.852 [2024-12-05 12:13:45.960409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.852 [2024-12-05 12:13:45.960415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.852 [2024-12-05 12:13:45.960421] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.852 [2024-12-05 12:13:45.971994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.852 [2024-12-05 12:13:45.972411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.852 [2024-12-05 12:13:45.972428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.852 [2024-12-05 12:13:45.972435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.852 [2024-12-05 12:13:45.972594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.852 [2024-12-05 12:13:45.972753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.852 [2024-12-05 12:13:45.972761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.852 [2024-12-05 12:13:45.972767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.852 [2024-12-05 12:13:45.972773] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.852 [2024-12-05 12:13:45.984861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.852 [2024-12-05 12:13:45.985269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.852 [2024-12-05 12:13:45.985284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.852 [2024-12-05 12:13:45.985290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.852 [2024-12-05 12:13:45.985475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.853 [2024-12-05 12:13:45.985644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.853 [2024-12-05 12:13:45.985653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.853 [2024-12-05 12:13:45.985659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.853 [2024-12-05 12:13:45.985665] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.853 [2024-12-05 12:13:45.997636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.853 [2024-12-05 12:13:45.998065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.853 [2024-12-05 12:13:45.998108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.853 [2024-12-05 12:13:45.998131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.853 [2024-12-05 12:13:45.998730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.853 [2024-12-05 12:13:45.999309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.853 [2024-12-05 12:13:45.999317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.853 [2024-12-05 12:13:45.999327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.853 [2024-12-05 12:13:45.999333] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.853 [2024-12-05 12:13:46.010459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.853 [2024-12-05 12:13:46.010909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.853 [2024-12-05 12:13:46.010952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.853 [2024-12-05 12:13:46.010975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.853 [2024-12-05 12:13:46.011415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.853 [2024-12-05 12:13:46.011585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.853 [2024-12-05 12:13:46.011593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.853 [2024-12-05 12:13:46.011600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.853 [2024-12-05 12:13:46.011606] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.853 [2024-12-05 12:13:46.023287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.853 [2024-12-05 12:13:46.023728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.853 [2024-12-05 12:13:46.023744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.853 [2024-12-05 12:13:46.023751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.853 [2024-12-05 12:13:46.023925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.853 [2024-12-05 12:13:46.024098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.853 [2024-12-05 12:13:46.024106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.853 [2024-12-05 12:13:46.024113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.853 [2024-12-05 12:13:46.024119] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:11.853 [2024-12-05 12:13:46.036378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:11.853 [2024-12-05 12:13:46.036829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.853 [2024-12-05 12:13:46.036870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:11.853 [2024-12-05 12:13:46.036896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:11.853 [2024-12-05 12:13:46.037449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:11.853 [2024-12-05 12:13:46.037619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:11.853 [2024-12-05 12:13:46.037628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:11.853 [2024-12-05 12:13:46.037634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:11.853 [2024-12-05 12:13:46.037640] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.113 [2024-12-05 12:13:46.049492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.113 [2024-12-05 12:13:46.049935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.113 [2024-12-05 12:13:46.049951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:12.113 [2024-12-05 12:13:46.049958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:12.113 [2024-12-05 12:13:46.050127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:12.113 [2024-12-05 12:13:46.050295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.113 [2024-12-05 12:13:46.050304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.113 [2024-12-05 12:13:46.050310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.113 [2024-12-05 12:13:46.050316] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.113 [2024-12-05 12:13:46.062354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.113 [2024-12-05 12:13:46.062685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.113 [2024-12-05 12:13:46.062701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:12.113 [2024-12-05 12:13:46.062709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:12.113 [2024-12-05 12:13:46.062876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:12.113 [2024-12-05 12:13:46.063044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.113 [2024-12-05 12:13:46.063052] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.113 [2024-12-05 12:13:46.063058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.113 [2024-12-05 12:13:46.063064] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.113 [2024-12-05 12:13:46.075092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.113 [2024-12-05 12:13:46.075500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.113 [2024-12-05 12:13:46.075516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:12.113 [2024-12-05 12:13:46.075523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:12.113 [2024-12-05 12:13:46.075682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:12.113 [2024-12-05 12:13:46.075841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.113 [2024-12-05 12:13:46.075849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.113 [2024-12-05 12:13:46.075855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.113 [2024-12-05 12:13:46.075860] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.113 [2024-12-05 12:13:46.087834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.113 [2024-12-05 12:13:46.088253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.113 [2024-12-05 12:13:46.088269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:12.113 [2024-12-05 12:13:46.088279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:12.113 [2024-12-05 12:13:46.088462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:12.113 [2024-12-05 12:13:46.088631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.113 [2024-12-05 12:13:46.088639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.113 [2024-12-05 12:13:46.088645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.113 [2024-12-05 12:13:46.088651] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.113 [2024-12-05 12:13:46.100769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.113 [2024-12-05 12:13:46.101096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.113 [2024-12-05 12:13:46.101111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:12.113 [2024-12-05 12:13:46.101118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:12.113 [2024-12-05 12:13:46.101278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:12.113 [2024-12-05 12:13:46.101459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.113 [2024-12-05 12:13:46.101467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.113 [2024-12-05 12:13:46.101474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.113 [2024-12-05 12:13:46.101480] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.113 [2024-12-05 12:13:46.113551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.113 [2024-12-05 12:13:46.113961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.113 [2024-12-05 12:13:46.113977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:12.113 [2024-12-05 12:13:46.113984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:12.113 [2024-12-05 12:13:46.114153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:12.114 [2024-12-05 12:13:46.114321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.114 [2024-12-05 12:13:46.114329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.114 [2024-12-05 12:13:46.114335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.114 [2024-12-05 12:13:46.114342] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.114 [2024-12-05 12:13:46.126380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.114 [2024-12-05 12:13:46.126804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.114 [2024-12-05 12:13:46.126848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:12.114 [2024-12-05 12:13:46.126871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:12.114 [2024-12-05 12:13:46.127471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:12.114 [2024-12-05 12:13:46.127879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.114 [2024-12-05 12:13:46.127887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.114 [2024-12-05 12:13:46.127893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.114 [2024-12-05 12:13:46.127899] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.114 [2024-12-05 12:13:46.139257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.114 [2024-12-05 12:13:46.139640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.114 [2024-12-05 12:13:46.139684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:12.114 [2024-12-05 12:13:46.139708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:12.114 [2024-12-05 12:13:46.140292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:12.114 [2024-12-05 12:13:46.140498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.114 [2024-12-05 12:13:46.140506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.114 [2024-12-05 12:13:46.140513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.114 [2024-12-05 12:13:46.140519] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.114 [2024-12-05 12:13:46.152135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.114 [2024-12-05 12:13:46.152576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.114 [2024-12-05 12:13:46.152592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:12.114 [2024-12-05 12:13:46.152599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:12.114 [2024-12-05 12:13:46.152768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:12.114 [2024-12-05 12:13:46.152935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.114 [2024-12-05 12:13:46.152943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.114 [2024-12-05 12:13:46.152950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.114 [2024-12-05 12:13:46.152956] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.114 [2024-12-05 12:13:46.164886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.114 [2024-12-05 12:13:46.165278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.114 [2024-12-05 12:13:46.165322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:12.114 [2024-12-05 12:13:46.165344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:12.114 [2024-12-05 12:13:46.165819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:12.114 [2024-12-05 12:13:46.165988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.114 [2024-12-05 12:13:46.165996] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.114 [2024-12-05 12:13:46.166006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.114 [2024-12-05 12:13:46.166012] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.114 [2024-12-05 12:13:46.177641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.114 [2024-12-05 12:13:46.178031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.114 [2024-12-05 12:13:46.178046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:12.114 [2024-12-05 12:13:46.178053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:12.114 [2024-12-05 12:13:46.178212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:12.114 [2024-12-05 12:13:46.178377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.114 [2024-12-05 12:13:46.178385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.114 [2024-12-05 12:13:46.178407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.114 [2024-12-05 12:13:46.178414] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.114 [2024-12-05 12:13:46.190469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.114 [2024-12-05 12:13:46.190889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.114 [2024-12-05 12:13:46.190904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:12.114 [2024-12-05 12:13:46.190912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:12.114 [2024-12-05 12:13:46.191071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:12.114 [2024-12-05 12:13:46.191230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.114 [2024-12-05 12:13:46.191238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.114 [2024-12-05 12:13:46.191244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.114 [2024-12-05 12:13:46.191250] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.114 [2024-12-05 12:13:46.203221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.114 [2024-12-05 12:13:46.203668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.114 [2024-12-05 12:13:46.203684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:12.114 [2024-12-05 12:13:46.203691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:12.114 [2024-12-05 12:13:46.203859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:12.114 [2024-12-05 12:13:46.204027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.114 [2024-12-05 12:13:46.204035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.114 [2024-12-05 12:13:46.204042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.114 [2024-12-05 12:13:46.204048] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.114 [2024-12-05 12:13:46.216084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.115 [2024-12-05 12:13:46.216438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.115 [2024-12-05 12:13:46.216454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:12.115 [2024-12-05 12:13:46.216461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:12.115 [2024-12-05 12:13:46.216629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:12.115 [2024-12-05 12:13:46.216797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.115 [2024-12-05 12:13:46.216805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.115 [2024-12-05 12:13:46.216811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.115 [2024-12-05 12:13:46.216817] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.115 [2024-12-05 12:13:46.228953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.115 [2024-12-05 12:13:46.229365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.115 [2024-12-05 12:13:46.229385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.115 [2024-12-05 12:13:46.229392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.115 [2024-12-05 12:13:46.229551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.115 [2024-12-05 12:13:46.229710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.115 [2024-12-05 12:13:46.229718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.115 [2024-12-05 12:13:46.229723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.115 [2024-12-05 12:13:46.229729] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.115 [2024-12-05 12:13:46.241768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.115 [2024-12-05 12:13:46.242158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.115 [2024-12-05 12:13:46.242173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.115 [2024-12-05 12:13:46.242180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.115 [2024-12-05 12:13:46.242340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.115 [2024-12-05 12:13:46.242527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.115 [2024-12-05 12:13:46.242536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.115 [2024-12-05 12:13:46.242542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.115 [2024-12-05 12:13:46.242548] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.115 [2024-12-05 12:13:46.254626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.115 [2024-12-05 12:13:46.255035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.115 [2024-12-05 12:13:46.255069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.115 [2024-12-05 12:13:46.255102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.115 [2024-12-05 12:13:46.255701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.115 [2024-12-05 12:13:46.255870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.115 [2024-12-05 12:13:46.255878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.115 [2024-12-05 12:13:46.255884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.115 [2024-12-05 12:13:46.255890] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.115 [2024-12-05 12:13:46.267480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.115 [2024-12-05 12:13:46.267863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.115 [2024-12-05 12:13:46.267878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.115 [2024-12-05 12:13:46.267885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.115 [2024-12-05 12:13:46.268045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.115 [2024-12-05 12:13:46.268204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.115 [2024-12-05 12:13:46.268211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.115 [2024-12-05 12:13:46.268218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.115 [2024-12-05 12:13:46.268223] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.115 [2024-12-05 12:13:46.280338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.115 [2024-12-05 12:13:46.280699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.115 [2024-12-05 12:13:46.280716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.115 [2024-12-05 12:13:46.280723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.115 [2024-12-05 12:13:46.280892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.115 [2024-12-05 12:13:46.281060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.115 [2024-12-05 12:13:46.281068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.115 [2024-12-05 12:13:46.281075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.115 [2024-12-05 12:13:46.281081] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.115 [2024-12-05 12:13:46.293384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.115 [2024-12-05 12:13:46.293819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.115 [2024-12-05 12:13:46.293835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.115 [2024-12-05 12:13:46.293843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.115 [2024-12-05 12:13:46.294016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.115 [2024-12-05 12:13:46.294196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.115 [2024-12-05 12:13:46.294204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.115 [2024-12-05 12:13:46.294211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.115 [2024-12-05 12:13:46.294217] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.115 [2024-12-05 12:13:46.306508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.115 [2024-12-05 12:13:46.306850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.115 [2024-12-05 12:13:46.306867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.115 [2024-12-05 12:13:46.306874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.115 [2024-12-05 12:13:46.307047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.115 [2024-12-05 12:13:46.307219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.115 [2024-12-05 12:13:46.307228] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.115 [2024-12-05 12:13:46.307234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.116 [2024-12-05 12:13:46.307240] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.376 [2024-12-05 12:13:46.319573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.376 [2024-12-05 12:13:46.319980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.376 [2024-12-05 12:13:46.319997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.376 [2024-12-05 12:13:46.320004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.376 [2024-12-05 12:13:46.320172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.376 [2024-12-05 12:13:46.320341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.376 [2024-12-05 12:13:46.320349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.376 [2024-12-05 12:13:46.320356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.376 [2024-12-05 12:13:46.320362] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.376 [2024-12-05 12:13:46.332432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.376 [2024-12-05 12:13:46.332884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.376 [2024-12-05 12:13:46.332929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.376 [2024-12-05 12:13:46.332953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.376 [2024-12-05 12:13:46.333386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.376 [2024-12-05 12:13:46.333556] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.376 [2024-12-05 12:13:46.333564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.376 [2024-12-05 12:13:46.333574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.376 [2024-12-05 12:13:46.333580] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.376 [2024-12-05 12:13:46.345216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.376 [2024-12-05 12:13:46.345628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.376 [2024-12-05 12:13:46.345644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.376 [2024-12-05 12:13:46.345651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.376 [2024-12-05 12:13:46.345820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.376 [2024-12-05 12:13:46.345988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.376 [2024-12-05 12:13:46.345997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.376 [2024-12-05 12:13:46.346003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.376 [2024-12-05 12:13:46.346009] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.376 [2024-12-05 12:13:46.358139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.376 [2024-12-05 12:13:46.358555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.376 [2024-12-05 12:13:46.358572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.376 [2024-12-05 12:13:46.358579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.376 [2024-12-05 12:13:46.358737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.376 [2024-12-05 12:13:46.358897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.376 [2024-12-05 12:13:46.358905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.376 [2024-12-05 12:13:46.358910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.376 [2024-12-05 12:13:46.358917] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.376 [2024-12-05 12:13:46.370947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.376 [2024-12-05 12:13:46.371398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.376 [2024-12-05 12:13:46.371444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.376 [2024-12-05 12:13:46.371468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.376 [2024-12-05 12:13:46.372051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.376 [2024-12-05 12:13:46.372386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.376 [2024-12-05 12:13:46.372395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.376 [2024-12-05 12:13:46.372401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.376 [2024-12-05 12:13:46.372408] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.376 [2024-12-05 12:13:46.383861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.376 [2024-12-05 12:13:46.384311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.376 [2024-12-05 12:13:46.384355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.376 [2024-12-05 12:13:46.384391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.376 [2024-12-05 12:13:46.384977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.376 [2024-12-05 12:13:46.385146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.376 [2024-12-05 12:13:46.385154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.376 [2024-12-05 12:13:46.385161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.377 [2024-12-05 12:13:46.385167] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.377 [2024-12-05 12:13:46.396868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.377 [2024-12-05 12:13:46.397234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.377 [2024-12-05 12:13:46.397277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.377 [2024-12-05 12:13:46.397300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.377 [2024-12-05 12:13:46.397897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.377 [2024-12-05 12:13:46.398444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.377 [2024-12-05 12:13:46.398453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.377 [2024-12-05 12:13:46.398460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.377 [2024-12-05 12:13:46.398466] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.377 [2024-12-05 12:13:46.409723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.377 [2024-12-05 12:13:46.410160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.377 [2024-12-05 12:13:46.410176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.377 [2024-12-05 12:13:46.410183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.377 [2024-12-05 12:13:46.410352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.377 [2024-12-05 12:13:46.410525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.377 [2024-12-05 12:13:46.410534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.377 [2024-12-05 12:13:46.410540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.377 [2024-12-05 12:13:46.410547] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.377 [2024-12-05 12:13:46.422554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.377 [2024-12-05 12:13:46.422978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.377 [2024-12-05 12:13:46.423028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.377 [2024-12-05 12:13:46.423060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.377 [2024-12-05 12:13:46.423466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.377 [2024-12-05 12:13:46.423636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.377 [2024-12-05 12:13:46.423644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.377 [2024-12-05 12:13:46.423651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.377 [2024-12-05 12:13:46.423658] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.377 [2024-12-05 12:13:46.435395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.377 [2024-12-05 12:13:46.435837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.377 [2024-12-05 12:13:46.435853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.377 [2024-12-05 12:13:46.435861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.377 [2024-12-05 12:13:46.436034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.377 [2024-12-05 12:13:46.436206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.377 [2024-12-05 12:13:46.436215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.377 [2024-12-05 12:13:46.436221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.377 [2024-12-05 12:13:46.436227] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.377 [2024-12-05 12:13:46.448131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.377 [2024-12-05 12:13:46.448477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.377 [2024-12-05 12:13:46.448494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.377 [2024-12-05 12:13:46.448502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.377 [2024-12-05 12:13:46.448671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.377 [2024-12-05 12:13:46.448838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.377 [2024-12-05 12:13:46.448847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.377 [2024-12-05 12:13:46.448853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.377 [2024-12-05 12:13:46.448860] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.377 [2024-12-05 12:13:46.461033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.377 [2024-12-05 12:13:46.461386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.377 [2024-12-05 12:13:46.461403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.377 [2024-12-05 12:13:46.461411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.377 [2024-12-05 12:13:46.461579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.377 [2024-12-05 12:13:46.461755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.377 [2024-12-05 12:13:46.461764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.377 [2024-12-05 12:13:46.461770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.377 [2024-12-05 12:13:46.461776] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.377 [2024-12-05 12:13:46.473938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.377 [2024-12-05 12:13:46.474335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.377 [2024-12-05 12:13:46.474350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.377 [2024-12-05 12:13:46.474357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.377 [2024-12-05 12:13:46.474530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.377 [2024-12-05 12:13:46.474699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.377 [2024-12-05 12:13:46.474707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.377 [2024-12-05 12:13:46.474713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.377 [2024-12-05 12:13:46.474720] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.377 [2024-12-05 12:13:46.486769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.377 [2024-12-05 12:13:46.487200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.377 [2024-12-05 12:13:46.487216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.377 [2024-12-05 12:13:46.487223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.377 [2024-12-05 12:13:46.487396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.377 [2024-12-05 12:13:46.487564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.377 [2024-12-05 12:13:46.487573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.377 [2024-12-05 12:13:46.487579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.377 [2024-12-05 12:13:46.487585] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.377 [2024-12-05 12:13:46.499563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.377 [2024-12-05 12:13:46.499977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.377 [2024-12-05 12:13:46.499991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.377 [2024-12-05 12:13:46.499998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.377 [2024-12-05 12:13:46.500157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.377 [2024-12-05 12:13:46.500316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.377 [2024-12-05 12:13:46.500323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.377 [2024-12-05 12:13:46.500332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.377 [2024-12-05 12:13:46.500338] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.377 [2024-12-05 12:13:46.512435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.377 [2024-12-05 12:13:46.512854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.377 [2024-12-05 12:13:46.512870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.377 [2024-12-05 12:13:46.512877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.377 [2024-12-05 12:13:46.513046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.377 [2024-12-05 12:13:46.513214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.377 [2024-12-05 12:13:46.513222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.378 [2024-12-05 12:13:46.513229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.378 [2024-12-05 12:13:46.513235] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.378 [2024-12-05 12:13:46.525215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.378 [2024-12-05 12:13:46.525632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.378 [2024-12-05 12:13:46.525648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.378 [2024-12-05 12:13:46.525655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.378 [2024-12-05 12:13:46.525823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.378 [2024-12-05 12:13:46.525991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.378 [2024-12-05 12:13:46.525999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.378 [2024-12-05 12:13:46.526005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.378 [2024-12-05 12:13:46.526011] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.378 [2024-12-05 12:13:46.538046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.378 [2024-12-05 12:13:46.538488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.378 [2024-12-05 12:13:46.538505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.378 [2024-12-05 12:13:46.538512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.378 [2024-12-05 12:13:46.538681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.378 [2024-12-05 12:13:46.538849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.378 [2024-12-05 12:13:46.538857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.378 [2024-12-05 12:13:46.538864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.378 [2024-12-05 12:13:46.538870] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.378 [2024-12-05 12:13:46.551121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.378 [2024-12-05 12:13:46.551530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.378 [2024-12-05 12:13:46.551547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.378 [2024-12-05 12:13:46.551554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.378 [2024-12-05 12:13:46.551734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.378 [2024-12-05 12:13:46.551903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.378 [2024-12-05 12:13:46.551911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.378 [2024-12-05 12:13:46.551918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.378 [2024-12-05 12:13:46.551924] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.378 [2024-12-05 12:13:46.564047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.378 [2024-12-05 12:13:46.564484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.378 [2024-12-05 12:13:46.564522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.378 [2024-12-05 12:13:46.564547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.378 [2024-12-05 12:13:46.565132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.378 [2024-12-05 12:13:46.565723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.378 [2024-12-05 12:13:46.565731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.378 [2024-12-05 12:13:46.565737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.378 [2024-12-05 12:13:46.565743] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.638 [2024-12-05 12:13:46.576937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.638 [2024-12-05 12:13:46.577380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.638 [2024-12-05 12:13:46.577425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.638 [2024-12-05 12:13:46.577448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.638 [2024-12-05 12:13:46.577883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.638 [2024-12-05 12:13:46.578057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.638 [2024-12-05 12:13:46.578065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.638 [2024-12-05 12:13:46.578071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.638 [2024-12-05 12:13:46.578078] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.638 [2024-12-05 12:13:46.589666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.638 [2024-12-05 12:13:46.589983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.638 [2024-12-05 12:13:46.589998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:12.638 [2024-12-05 12:13:46.590008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:12.638 [2024-12-05 12:13:46.590167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:12.638 [2024-12-05 12:13:46.590326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.638 [2024-12-05 12:13:46.590333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.638 [2024-12-05 12:13:46.590339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.638 [2024-12-05 12:13:46.590345] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.638 [2024-12-05 12:13:46.602537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.638 [2024-12-05 12:13:46.602951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.638 [2024-12-05 12:13:46.602967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:12.638 [2024-12-05 12:13:46.602974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:12.638 [2024-12-05 12:13:46.603143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:12.638 [2024-12-05 12:13:46.603310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.638 [2024-12-05 12:13:46.603318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.638 [2024-12-05 12:13:46.603324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.638 [2024-12-05 12:13:46.603330] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.638 [2024-12-05 12:13:46.615398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.638 [2024-12-05 12:13:46.615787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.638 [2024-12-05 12:13:46.615802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:12.638 [2024-12-05 12:13:46.615809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:12.638 [2024-12-05 12:13:46.615968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:12.638 [2024-12-05 12:13:46.616128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.638 [2024-12-05 12:13:46.616135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.639 [2024-12-05 12:13:46.616141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.639 [2024-12-05 12:13:46.616147] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.639 [2024-12-05 12:13:46.628210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.639 [2024-12-05 12:13:46.628617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-12-05 12:13:46.628634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:12.639 [2024-12-05 12:13:46.628641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:12.639 [2024-12-05 12:13:46.628809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:12.639 [2024-12-05 12:13:46.628980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.639 [2024-12-05 12:13:46.628989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.639 [2024-12-05 12:13:46.628995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.639 [2024-12-05 12:13:46.629001] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.639 [2024-12-05 12:13:46.641029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.639 [2024-12-05 12:13:46.641443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-12-05 12:13:46.641459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:12.639 [2024-12-05 12:13:46.641466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:12.639 [2024-12-05 12:13:46.641634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:12.639 [2024-12-05 12:13:46.641802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.639 [2024-12-05 12:13:46.641810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.639 [2024-12-05 12:13:46.641816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.639 [2024-12-05 12:13:46.641823] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.639 [2024-12-05 12:13:46.653760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.639 [2024-12-05 12:13:46.654155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-12-05 12:13:46.654171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:12.639 [2024-12-05 12:13:46.654178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:12.639 [2024-12-05 12:13:46.654336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:12.639 [2024-12-05 12:13:46.654524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.639 [2024-12-05 12:13:46.654532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.639 [2024-12-05 12:13:46.654539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.639 [2024-12-05 12:13:46.654545] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.639 [2024-12-05 12:13:46.666551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.639 [2024-12-05 12:13:46.666966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-12-05 12:13:46.666982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:12.639 [2024-12-05 12:13:46.666989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:12.639 [2024-12-05 12:13:46.667157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:12.639 [2024-12-05 12:13:46.667324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.639 [2024-12-05 12:13:46.667332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.639 [2024-12-05 12:13:46.667341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.639 [2024-12-05 12:13:46.667348] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.639 [2024-12-05 12:13:46.679348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.639 [2024-12-05 12:13:46.679718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-12-05 12:13:46.679734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:12.639 [2024-12-05 12:13:46.679740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:12.639 [2024-12-05 12:13:46.679899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:12.639 [2024-12-05 12:13:46.680059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.639 [2024-12-05 12:13:46.680066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.639 [2024-12-05 12:13:46.680072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.639 [2024-12-05 12:13:46.680078] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.639 [2024-12-05 12:13:46.692088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.639 [2024-12-05 12:13:46.692497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-12-05 12:13:46.692553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:12.639 [2024-12-05 12:13:46.692577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:12.639 [2024-12-05 12:13:46.693092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:12.639 [2024-12-05 12:13:46.693260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.639 [2024-12-05 12:13:46.693268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.639 [2024-12-05 12:13:46.693275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.639 [2024-12-05 12:13:46.693281] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.639 [2024-12-05 12:13:46.704820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.639 [2024-12-05 12:13:46.705211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-12-05 12:13:46.705226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:12.639 [2024-12-05 12:13:46.705233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:12.639 [2024-12-05 12:13:46.705413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:12.639 [2024-12-05 12:13:46.705581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.639 [2024-12-05 12:13:46.705589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.639 [2024-12-05 12:13:46.705595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.639 [2024-12-05 12:13:46.705602] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.639 [2024-12-05 12:13:46.717607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.639 [2024-12-05 12:13:46.718021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-12-05 12:13:46.718036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:12.639 [2024-12-05 12:13:46.718043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:12.639 [2024-12-05 12:13:46.718212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:12.639 [2024-12-05 12:13:46.718387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.639 [2024-12-05 12:13:46.718396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.639 [2024-12-05 12:13:46.718402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.639 [2024-12-05 12:13:46.718408] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.639 [2024-12-05 12:13:46.730472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.639 [2024-12-05 12:13:46.730871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-12-05 12:13:46.730913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:12.639 [2024-12-05 12:13:46.730936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:12.639 [2024-12-05 12:13:46.731536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:12.639 [2024-12-05 12:13:46.732109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.639 [2024-12-05 12:13:46.732117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.639 [2024-12-05 12:13:46.732123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.639 [2024-12-05 12:13:46.732129] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.639 [2024-12-05 12:13:46.743250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.639 [2024-12-05 12:13:46.743657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-12-05 12:13:46.743673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:12.639 [2024-12-05 12:13:46.743680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:12.639 [2024-12-05 12:13:46.743848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:12.639 [2024-12-05 12:13:46.744016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.639 [2024-12-05 12:13:46.744024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.640 [2024-12-05 12:13:46.744030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.640 [2024-12-05 12:13:46.744037] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.640 [2024-12-05 12:13:46.756022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.640 [2024-12-05 12:13:46.756442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-12-05 12:13:46.756458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:12.640 [2024-12-05 12:13:46.756468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:12.640 [2024-12-05 12:13:46.756637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:12.640 [2024-12-05 12:13:46.756804] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.640 [2024-12-05 12:13:46.756812] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.640 [2024-12-05 12:13:46.756818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.640 [2024-12-05 12:13:46.756824] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.640 [2024-12-05 12:13:46.768762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.640 [2024-12-05 12:13:46.769149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-12-05 12:13:46.769164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:12.640 [2024-12-05 12:13:46.769171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:12.640 [2024-12-05 12:13:46.769330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:12.640 [2024-12-05 12:13:46.769515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.640 [2024-12-05 12:13:46.769523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.640 [2024-12-05 12:13:46.769529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.640 [2024-12-05 12:13:46.769536] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.640 [2024-12-05 12:13:46.781548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.640 [2024-12-05 12:13:46.781935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-12-05 12:13:46.781951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:12.640 [2024-12-05 12:13:46.781958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:12.640 [2024-12-05 12:13:46.782126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:12.640 [2024-12-05 12:13:46.782294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.640 [2024-12-05 12:13:46.782302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.640 [2024-12-05 12:13:46.782309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.640 [2024-12-05 12:13:46.782315] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.640 [2024-12-05 12:13:46.794320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.640 [2024-12-05 12:13:46.794686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-12-05 12:13:46.794730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:12.640 [2024-12-05 12:13:46.794753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:12.640 [2024-12-05 12:13:46.795254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:12.640 [2024-12-05 12:13:46.795436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.640 [2024-12-05 12:13:46.795445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.640 [2024-12-05 12:13:46.795451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.640 [2024-12-05 12:13:46.795458] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.640 [2024-12-05 12:13:46.807362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.640 [2024-12-05 12:13:46.807793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-12-05 12:13:46.807837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:12.640 [2024-12-05 12:13:46.807860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:12.640 [2024-12-05 12:13:46.808409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:12.640 [2024-12-05 12:13:46.808579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.640 [2024-12-05 12:13:46.808588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.640 [2024-12-05 12:13:46.808594] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.640 [2024-12-05 12:13:46.808600] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.640 [2024-12-05 12:13:46.820315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.640 [2024-12-05 12:13:46.820694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-12-05 12:13:46.820710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:12.640 [2024-12-05 12:13:46.820717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:12.640 [2024-12-05 12:13:46.820885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:12.640 [2024-12-05 12:13:46.821052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.640 [2024-12-05 12:13:46.821060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.640 [2024-12-05 12:13:46.821067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.640 [2024-12-05 12:13:46.821073] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.640 [2024-12-05 12:13:46.833411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.640 [2024-12-05 12:13:46.833792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-12-05 12:13:46.833807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:12.640 [2024-12-05 12:13:46.833815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:12.640 [2024-12-05 12:13:46.833988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:12.640 [2024-12-05 12:13:46.834165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.640 [2024-12-05 12:13:46.834173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.640 [2024-12-05 12:13:46.834186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.640 [2024-12-05 12:13:46.834192] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.901 [2024-12-05 12:13:46.846299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.901 [2024-12-05 12:13:46.846716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.901 [2024-12-05 12:13:46.846733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:12.901 [2024-12-05 12:13:46.846740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:12.901 [2024-12-05 12:13:46.846912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:12.901 [2024-12-05 12:13:46.847081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.901 [2024-12-05 12:13:46.847089] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.901 [2024-12-05 12:13:46.847095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.901 [2024-12-05 12:13:46.847101] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.901 6065.00 IOPS, 23.69 MiB/s [2024-12-05T11:13:47.097Z] [2024-12-05 12:13:46.859215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:12.901 [2024-12-05 12:13:46.859633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.901 [2024-12-05 12:13:46.859649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:12.901 [2024-12-05 12:13:46.859656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:12.901 [2024-12-05 12:13:46.859830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:12.901 [2024-12-05 12:13:46.860007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:12.901 [2024-12-05 12:13:46.860015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:12.901 [2024-12-05 12:13:46.860021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:12.901 [2024-12-05 12:13:46.860027] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:12.901 [2024-12-05 12:13:46.872200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.901 [2024-12-05 12:13:46.872663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.901 [2024-12-05 12:13:46.872680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.901 [2024-12-05 12:13:46.872687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.901 [2024-12-05 12:13:46.872855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.901 [2024-12-05 12:13:46.873022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.901 [2024-12-05 12:13:46.873030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.901 [2024-12-05 12:13:46.873036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.901 [2024-12-05 12:13:46.873043] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.901 [2024-12-05 12:13:46.885217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.901 [2024-12-05 12:13:46.885572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.901 [2024-12-05 12:13:46.885588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.901 [2024-12-05 12:13:46.885595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.901 [2024-12-05 12:13:46.885763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.901 [2024-12-05 12:13:46.885932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.901 [2024-12-05 12:13:46.885940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.901 [2024-12-05 12:13:46.885947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.901 [2024-12-05 12:13:46.885953] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.901 [2024-12-05 12:13:46.898056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.901 [2024-12-05 12:13:46.898498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.901 [2024-12-05 12:13:46.898514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.901 [2024-12-05 12:13:46.898521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.901 [2024-12-05 12:13:46.898689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.901 [2024-12-05 12:13:46.898856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.901 [2024-12-05 12:13:46.898864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.901 [2024-12-05 12:13:46.898871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.901 [2024-12-05 12:13:46.898877] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.901 [2024-12-05 12:13:46.910867] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.901 [2024-12-05 12:13:46.911281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.901 [2024-12-05 12:13:46.911296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.901 [2024-12-05 12:13:46.911303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.901 [2024-12-05 12:13:46.911477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.901 [2024-12-05 12:13:46.911645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.901 [2024-12-05 12:13:46.911653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.901 [2024-12-05 12:13:46.911659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.901 [2024-12-05 12:13:46.911666] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.901 [2024-12-05 12:13:46.923671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.901 [2024-12-05 12:13:46.924080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.901 [2024-12-05 12:13:46.924096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.901 [2024-12-05 12:13:46.924106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.901 [2024-12-05 12:13:46.924275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.901 [2024-12-05 12:13:46.924448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.901 [2024-12-05 12:13:46.924457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.901 [2024-12-05 12:13:46.924463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.901 [2024-12-05 12:13:46.924470] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.902 [2024-12-05 12:13:46.936765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.902 [2024-12-05 12:13:46.937173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.902 [2024-12-05 12:13:46.937191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.902 [2024-12-05 12:13:46.937198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.902 [2024-12-05 12:13:46.937378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.902 [2024-12-05 12:13:46.937552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.902 [2024-12-05 12:13:46.937561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.902 [2024-12-05 12:13:46.937567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.902 [2024-12-05 12:13:46.937573] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.902 [2024-12-05 12:13:46.949684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.902 [2024-12-05 12:13:46.950087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.902 [2024-12-05 12:13:46.950103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.902 [2024-12-05 12:13:46.950109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.902 [2024-12-05 12:13:46.950278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.902 [2024-12-05 12:13:46.950451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.902 [2024-12-05 12:13:46.950460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.902 [2024-12-05 12:13:46.950466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.902 [2024-12-05 12:13:46.950472] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.902 [2024-12-05 12:13:46.962646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.902 [2024-12-05 12:13:46.963011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.902 [2024-12-05 12:13:46.963028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.902 [2024-12-05 12:13:46.963035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.902 [2024-12-05 12:13:46.963203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.902 [2024-12-05 12:13:46.963379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.902 [2024-12-05 12:13:46.963388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.902 [2024-12-05 12:13:46.963395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.902 [2024-12-05 12:13:46.963401] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.902 [2024-12-05 12:13:46.975609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.902 [2024-12-05 12:13:46.976010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.902 [2024-12-05 12:13:46.976027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.902 [2024-12-05 12:13:46.976034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.902 [2024-12-05 12:13:46.976202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.902 [2024-12-05 12:13:46.976377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.902 [2024-12-05 12:13:46.976386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.902 [2024-12-05 12:13:46.976392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.902 [2024-12-05 12:13:46.976398] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.902 [2024-12-05 12:13:46.988501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.902 [2024-12-05 12:13:46.988830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.902 [2024-12-05 12:13:46.988846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.902 [2024-12-05 12:13:46.988853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.902 [2024-12-05 12:13:46.989022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.902 [2024-12-05 12:13:46.989189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.902 [2024-12-05 12:13:46.989197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.902 [2024-12-05 12:13:46.989204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.902 [2024-12-05 12:13:46.989210] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.902 [2024-12-05 12:13:47.001579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.902 [2024-12-05 12:13:47.001946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.902 [2024-12-05 12:13:47.001962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.902 [2024-12-05 12:13:47.001969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.902 [2024-12-05 12:13:47.002143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.902 [2024-12-05 12:13:47.002316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.902 [2024-12-05 12:13:47.002325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.902 [2024-12-05 12:13:47.002335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.902 [2024-12-05 12:13:47.002342] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.902 [2024-12-05 12:13:47.014517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.902 [2024-12-05 12:13:47.014861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.902 [2024-12-05 12:13:47.014876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.902 [2024-12-05 12:13:47.014884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.902 [2024-12-05 12:13:47.015053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.902 [2024-12-05 12:13:47.015221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.902 [2024-12-05 12:13:47.015229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.902 [2024-12-05 12:13:47.015235] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.902 [2024-12-05 12:13:47.015241] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.902 [2024-12-05 12:13:47.027413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.902 [2024-12-05 12:13:47.027807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.902 [2024-12-05 12:13:47.027823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.902 [2024-12-05 12:13:47.027830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.902 [2024-12-05 12:13:47.027998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.902 [2024-12-05 12:13:47.028165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.902 [2024-12-05 12:13:47.028173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.902 [2024-12-05 12:13:47.028180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.902 [2024-12-05 12:13:47.028186] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.902 [2024-12-05 12:13:47.040242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.902 [2024-12-05 12:13:47.040525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.902 [2024-12-05 12:13:47.040541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.902 [2024-12-05 12:13:47.040548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.902 [2024-12-05 12:13:47.040717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.902 [2024-12-05 12:13:47.040884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.902 [2024-12-05 12:13:47.040892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.902 [2024-12-05 12:13:47.040898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.902 [2024-12-05 12:13:47.040904] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.902 [2024-12-05 12:13:47.053044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.902 [2024-12-05 12:13:47.053484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.902 [2024-12-05 12:13:47.053500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.902 [2024-12-05 12:13:47.053508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.902 [2024-12-05 12:13:47.053681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.902 [2024-12-05 12:13:47.053855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.902 [2024-12-05 12:13:47.053864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.902 [2024-12-05 12:13:47.053870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.902 [2024-12-05 12:13:47.053877] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.903 [2024-12-05 12:13:47.066184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.903 [2024-12-05 12:13:47.066574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.903 [2024-12-05 12:13:47.066590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.903 [2024-12-05 12:13:47.066597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.903 [2024-12-05 12:13:47.066766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.903 [2024-12-05 12:13:47.066934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.903 [2024-12-05 12:13:47.066942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.903 [2024-12-05 12:13:47.066948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.903 [2024-12-05 12:13:47.066955] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.903 [2024-12-05 12:13:47.079194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.903 [2024-12-05 12:13:47.079504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.903 [2024-12-05 12:13:47.079520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.903 [2024-12-05 12:13:47.079527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.903 [2024-12-05 12:13:47.079696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.903 [2024-12-05 12:13:47.079864] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.903 [2024-12-05 12:13:47.079872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.903 [2024-12-05 12:13:47.079878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.903 [2024-12-05 12:13:47.079885] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:12.903 [2024-12-05 12:13:47.092068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:12.903 [2024-12-05 12:13:47.092445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.903 [2024-12-05 12:13:47.092462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:12.903 [2024-12-05 12:13:47.092473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:12.903 [2024-12-05 12:13:47.092647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:12.903 [2024-12-05 12:13:47.092822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:12.903 [2024-12-05 12:13:47.092830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:12.903 [2024-12-05 12:13:47.092837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:12.903 [2024-12-05 12:13:47.092843] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:13.163 [2024-12-05 12:13:47.105095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:13.163 [2024-12-05 12:13:47.105488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.163 [2024-12-05 12:13:47.105533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:13.163 [2024-12-05 12:13:47.105557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:13.163 [2024-12-05 12:13:47.106059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:13.163 [2024-12-05 12:13:47.106232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:13.163 [2024-12-05 12:13:47.106241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:13.163 [2024-12-05 12:13:47.106247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:13.164 [2024-12-05 12:13:47.106254] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:13.164 [2024-12-05 12:13:47.118052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:13.164 [2024-12-05 12:13:47.118396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.164 [2024-12-05 12:13:47.118412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:13.164 [2024-12-05 12:13:47.118419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:13.164 [2024-12-05 12:13:47.118587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:13.164 [2024-12-05 12:13:47.118755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:13.164 [2024-12-05 12:13:47.118763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:13.164 [2024-12-05 12:13:47.118769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:13.164 [2024-12-05 12:13:47.118775] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:13.164 [2024-12-05 12:13:47.130932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:13.164 [2024-12-05 12:13:47.131272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.164 [2024-12-05 12:13:47.131288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:13.164 [2024-12-05 12:13:47.131295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:13.164 [2024-12-05 12:13:47.131469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:13.164 [2024-12-05 12:13:47.131641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:13.164 [2024-12-05 12:13:47.131650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:13.164 [2024-12-05 12:13:47.131656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:13.164 [2024-12-05 12:13:47.131662] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:13.164 [2024-12-05 12:13:47.143801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:13.164 [2024-12-05 12:13:47.144242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.164 [2024-12-05 12:13:47.144285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:13.164 [2024-12-05 12:13:47.144309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:13.164 [2024-12-05 12:13:47.144836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:13.164 [2024-12-05 12:13:47.145004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:13.164 [2024-12-05 12:13:47.145012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:13.164 [2024-12-05 12:13:47.145018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:13.164 [2024-12-05 12:13:47.145025] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:13.164 [2024-12-05 12:13:47.156759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:13.164 [2024-12-05 12:13:47.157175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.164 [2024-12-05 12:13:47.157191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:13.164 [2024-12-05 12:13:47.157198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:13.164 [2024-12-05 12:13:47.157374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:13.164 [2024-12-05 12:13:47.157542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:13.164 [2024-12-05 12:13:47.157551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:13.164 [2024-12-05 12:13:47.157557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:13.164 [2024-12-05 12:13:47.157563] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:13.164 [2024-12-05 12:13:47.169722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:13.164 [2024-12-05 12:13:47.170066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.164 [2024-12-05 12:13:47.170082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:13.164 [2024-12-05 12:13:47.170089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:13.164 [2024-12-05 12:13:47.170257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:13.164 [2024-12-05 12:13:47.170430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:13.164 [2024-12-05 12:13:47.170439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:13.164 [2024-12-05 12:13:47.170449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:13.164 [2024-12-05 12:13:47.170455] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:13.164 [2024-12-05 12:13:47.182730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.164 [2024-12-05 12:13:47.183170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.164 [2024-12-05 12:13:47.183211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:13.164 [2024-12-05 12:13:47.183235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:13.164 [2024-12-05 12:13:47.183831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:13.164 [2024-12-05 12:13:47.184381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.164 [2024-12-05 12:13:47.184399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.164 [2024-12-05 12:13:47.184414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.164 [2024-12-05 12:13:47.184427] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.164 [2024-12-05 12:13:47.197791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.164 [2024-12-05 12:13:47.198325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.164 [2024-12-05 12:13:47.198381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:13.164 [2024-12-05 12:13:47.198406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:13.164 [2024-12-05 12:13:47.198988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:13.164 [2024-12-05 12:13:47.199244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.164 [2024-12-05 12:13:47.199255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.164 [2024-12-05 12:13:47.199265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.164 [2024-12-05 12:13:47.199275] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.164 [2024-12-05 12:13:47.210807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.164 [2024-12-05 12:13:47.211208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.164 [2024-12-05 12:13:47.211224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:13.164 [2024-12-05 12:13:47.211231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:13.164 [2024-12-05 12:13:47.211405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:13.164 [2024-12-05 12:13:47.211574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.164 [2024-12-05 12:13:47.211582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.164 [2024-12-05 12:13:47.211588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.164 [2024-12-05 12:13:47.211595] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.164 [2024-12-05 12:13:47.223759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.164 [2024-12-05 12:13:47.224184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.164 [2024-12-05 12:13:47.224199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:13.164 [2024-12-05 12:13:47.224206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:13.164 [2024-12-05 12:13:47.224380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:13.164 [2024-12-05 12:13:47.224548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.164 [2024-12-05 12:13:47.224556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.164 [2024-12-05 12:13:47.224562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.164 [2024-12-05 12:13:47.224568] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.164 [2024-12-05 12:13:47.236597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:13.164 [2024-12-05 12:13:47.236877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.164 [2024-12-05 12:13:47.236893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:13.164 [2024-12-05 12:13:47.236900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:13.164 [2024-12-05 12:13:47.237068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:13.164 [2024-12-05 12:13:47.237236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:13.164 [2024-12-05 12:13:47.237244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:13.165 [2024-12-05 12:13:47.237250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:13.165 [2024-12-05 12:13:47.237256] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:13.165 [2024-12-05 12:13:47.249452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:13.165 [2024-12-05 12:13:47.249785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.165 [2024-12-05 12:13:47.249800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:13.165 [2024-12-05 12:13:47.249807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:13.165 [2024-12-05 12:13:47.249976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:13.165 [2024-12-05 12:13:47.250143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:13.165 [2024-12-05 12:13:47.250151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:13.165 [2024-12-05 12:13:47.250157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:13.165 [2024-12-05 12:13:47.250163] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:13.165 [2024-12-05 12:13:47.262206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:13.165 [2024-12-05 12:13:47.262570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.165 [2024-12-05 12:13:47.262586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:13.165 [2024-12-05 12:13:47.262596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:13.165 [2024-12-05 12:13:47.262765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:13.165 [2024-12-05 12:13:47.262934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:13.165 [2024-12-05 12:13:47.262942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:13.165 [2024-12-05 12:13:47.262948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:13.165 [2024-12-05 12:13:47.262954] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:13.165 [2024-12-05 12:13:47.275073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:13.165 [2024-12-05 12:13:47.275461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.165 [2024-12-05 12:13:47.275477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:13.165 [2024-12-05 12:13:47.275484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:13.165 [2024-12-05 12:13:47.275653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:13.165 [2024-12-05 12:13:47.275821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:13.165 [2024-12-05 12:13:47.275829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:13.165 [2024-12-05 12:13:47.275836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:13.165 [2024-12-05 12:13:47.275842] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:13.165 [2024-12-05 12:13:47.287877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:13.165 [2024-12-05 12:13:47.288200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.165 [2024-12-05 12:13:47.288215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:13.165 [2024-12-05 12:13:47.288222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:13.165 [2024-12-05 12:13:47.288396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:13.165 [2024-12-05 12:13:47.288565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:13.165 [2024-12-05 12:13:47.288573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:13.165 [2024-12-05 12:13:47.288579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:13.165 [2024-12-05 12:13:47.288586] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:13.165 [2024-12-05 12:13:47.300745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:13.165 [2024-12-05 12:13:47.301107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.165 [2024-12-05 12:13:47.301123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:13.165 [2024-12-05 12:13:47.301130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:13.165 [2024-12-05 12:13:47.301298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:13.165 [2024-12-05 12:13:47.301476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:13.165 [2024-12-05 12:13:47.301485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:13.165 [2024-12-05 12:13:47.301491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:13.165 [2024-12-05 12:13:47.301497] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:13.165 [2024-12-05 12:13:47.313669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:13.165 [2024-12-05 12:13:47.314037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.165 [2024-12-05 12:13:47.314053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:13.165 [2024-12-05 12:13:47.314060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:13.165 [2024-12-05 12:13:47.314233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:13.165 [2024-12-05 12:13:47.314414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:13.165 [2024-12-05 12:13:47.314424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:13.165 [2024-12-05 12:13:47.314431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:13.165 [2024-12-05 12:13:47.314437] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:13.165 [2024-12-05 12:13:47.326705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:13.165 [2024-12-05 12:13:47.327077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.165 [2024-12-05 12:13:47.327124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:13.165 [2024-12-05 12:13:47.327148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:13.165 [2024-12-05 12:13:47.327624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:13.165 [2024-12-05 12:13:47.327793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:13.165 [2024-12-05 12:13:47.327802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:13.165 [2024-12-05 12:13:47.327808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:13.165 [2024-12-05 12:13:47.327814] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:13.165 [2024-12-05 12:13:47.339749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:13.165 [2024-12-05 12:13:47.340133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.165 [2024-12-05 12:13:47.340149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:13.165 [2024-12-05 12:13:47.340157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:13.165 [2024-12-05 12:13:47.340325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:13.165 [2024-12-05 12:13:47.340500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:13.165 [2024-12-05 12:13:47.340509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:13.165 [2024-12-05 12:13:47.340518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:13.165 [2024-12-05 12:13:47.340525] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:13.165 [2024-12-05 12:13:47.352618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:13.165 [2024-12-05 12:13:47.353030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.165 [2024-12-05 12:13:47.353046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:13.165 [2024-12-05 12:13:47.353053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:13.165 [2024-12-05 12:13:47.353221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:13.165 [2024-12-05 12:13:47.353395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:13.165 [2024-12-05 12:13:47.353404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:13.165 [2024-12-05 12:13:47.353411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:13.165 [2024-12-05 12:13:47.353417] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:13.427 [2024-12-05 12:13:47.365568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:13.427 [2024-12-05 12:13:47.365939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.427 [2024-12-05 12:13:47.365955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:13.427 [2024-12-05 12:13:47.365962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:13.427 [2024-12-05 12:13:47.366135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:13.427 [2024-12-05 12:13:47.366307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:13.427 [2024-12-05 12:13:47.366315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:13.427 [2024-12-05 12:13:47.366322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:13.427 [2024-12-05 12:13:47.366328] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:13.427 [2024-12-05 12:13:47.378591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:13.427 [2024-12-05 12:13:47.378887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.427 [2024-12-05 12:13:47.378902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:13.427 [2024-12-05 12:13:47.378909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:13.427 [2024-12-05 12:13:47.379078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:13.427 [2024-12-05 12:13:47.379246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:13.427 [2024-12-05 12:13:47.379254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:13.427 [2024-12-05 12:13:47.379261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:13.427 [2024-12-05 12:13:47.379267] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:13.427 [2024-12-05 12:13:47.391462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:13.427 [2024-12-05 12:13:47.391845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.427 [2024-12-05 12:13:47.391860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:13.427 [2024-12-05 12:13:47.391867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:13.427 [2024-12-05 12:13:47.392034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:13.427 [2024-12-05 12:13:47.392203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:13.427 [2024-12-05 12:13:47.392210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:13.427 [2024-12-05 12:13:47.392217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:13.427 [2024-12-05 12:13:47.392223] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:13.427 [2024-12-05 12:13:47.404299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:13.427 [2024-12-05 12:13:47.404734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.427 [2024-12-05 12:13:47.404777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:13.427 [2024-12-05 12:13:47.404801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:13.427 [2024-12-05 12:13:47.405399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:13.427 [2024-12-05 12:13:47.405846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:13.427 [2024-12-05 12:13:47.405854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:13.427 [2024-12-05 12:13:47.405860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:13.427 [2024-12-05 12:13:47.405866] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:13.427 [2024-12-05 12:13:47.419444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:13.427 [2024-12-05 12:13:47.419915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.427 [2024-12-05 12:13:47.419936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:13.427 [2024-12-05 12:13:47.419947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:13.427 [2024-12-05 12:13:47.420202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:13.427 [2024-12-05 12:13:47.420466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:13.427 [2024-12-05 12:13:47.420479] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:13.427 [2024-12-05 12:13:47.420488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:13.427 [2024-12-05 12:13:47.420497] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:13.427 [2024-12-05 12:13:47.432473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:13.427 [2024-12-05 12:13:47.432873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.427 [2024-12-05 12:13:47.432889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:13.427 [2024-12-05 12:13:47.432899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:13.427 [2024-12-05 12:13:47.433068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:13.427 [2024-12-05 12:13:47.433235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:13.427 [2024-12-05 12:13:47.433243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:13.427 [2024-12-05 12:13:47.433249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:13.427 [2024-12-05 12:13:47.433255] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:13.428 [2024-12-05 12:13:47.445234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:13.428 [2024-12-05 12:13:47.445662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.428 [2024-12-05 12:13:47.445706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:13.428 [2024-12-05 12:13:47.445729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:13.428 [2024-12-05 12:13:47.446298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:13.428 [2024-12-05 12:13:47.446698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:13.428 [2024-12-05 12:13:47.446717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:13.428 [2024-12-05 12:13:47.446731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:13.428 [2024-12-05 12:13:47.446745] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:13.428 [2024-12-05 12:13:47.460085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:13.428 [2024-12-05 12:13:47.460516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.428 [2024-12-05 12:13:47.460538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:13.428 [2024-12-05 12:13:47.460549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:13.428 [2024-12-05 12:13:47.460803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:13.428 [2024-12-05 12:13:47.461059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:13.428 [2024-12-05 12:13:47.461071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:13.428 [2024-12-05 12:13:47.461082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:13.428 [2024-12-05 12:13:47.461092] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:13.428 [2024-12-05 12:13:47.473097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:13.428 [2024-12-05 12:13:47.473437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.428 [2024-12-05 12:13:47.473455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:13.428 [2024-12-05 12:13:47.473462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:13.428 [2024-12-05 12:13:47.473631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:13.428 [2024-12-05 12:13:47.473805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:13.428 [2024-12-05 12:13:47.473813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:13.428 [2024-12-05 12:13:47.473820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:13.428 [2024-12-05 12:13:47.473826] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:13.428 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 216388 Killed "${NVMF_APP[@]}" "$@"
00:30:13.428 12:13:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:30:13.428 12:13:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:30:13.428 12:13:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt
00:30:13.428 12:13:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:13.428 12:13:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:13.428 [2024-12-05 12:13:47.486189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:13.428 [2024-12-05 12:13:47.486615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.428 [2024-12-05 12:13:47.486632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:13.428 [2024-12-05 12:13:47.486640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:13.428 [2024-12-05 12:13:47.486813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:13.428 [2024-12-05 12:13:47.486986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:13.428 [2024-12-05 12:13:47.486995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:13.428 [2024-12-05 12:13:47.487001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:13.428 [2024-12-05 12:13:47.487007] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:13.428 12:13:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # nvmfpid=217794
00:30:13.428 12:13:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # waitforlisten 217794
00:30:13.428 12:13:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:30:13.428 12:13:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 217794 ']'
00:30:13.428 12:13:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:13.428 12:13:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:13.428 12:13:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:13.428 12:13:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:13.428 12:13:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:13.428 [2024-12-05 12:13:47.499255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:13.428 [2024-12-05 12:13:47.499691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.428 [2024-12-05 12:13:47.499708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420
00:30:13.428 [2024-12-05 12:13:47.499716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set
00:30:13.428 [2024-12-05 12:13:47.499893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor
00:30:13.428 [2024-12-05 12:13:47.500065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:13.428 [2024-12-05 12:13:47.500073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:13.428 [2024-12-05 12:13:47.500079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:13.428 [2024-12-05 12:13:47.500086] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:13.428 [2024-12-05 12:13:47.512376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.428 [2024-12-05 12:13:47.512812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.428 [2024-12-05 12:13:47.512829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:13.428 [2024-12-05 12:13:47.512837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:13.428 [2024-12-05 12:13:47.513011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:13.428 [2024-12-05 12:13:47.513185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.428 [2024-12-05 12:13:47.513193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.428 [2024-12-05 12:13:47.513200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.428 [2024-12-05 12:13:47.513207] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.428 [2024-12-05 12:13:47.525491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.428 [2024-12-05 12:13:47.525912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.428 [2024-12-05 12:13:47.525928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:13.428 [2024-12-05 12:13:47.525936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:13.428 [2024-12-05 12:13:47.526111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:13.428 [2024-12-05 12:13:47.526286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.428 [2024-12-05 12:13:47.526294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.428 [2024-12-05 12:13:47.526301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.428 [2024-12-05 12:13:47.526307] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:13.428 [2024-12-05 12:13:47.533997] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:30:13.428 [2024-12-05 12:13:47.534035] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:13.428 [2024-12-05 12:13:47.538567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.428 [2024-12-05 12:13:47.538971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.428 [2024-12-05 12:13:47.538988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:13.428 [2024-12-05 12:13:47.538995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:13.428 [2024-12-05 12:13:47.539173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:13.428 [2024-12-05 12:13:47.539347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.428 [2024-12-05 12:13:47.539355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.428 [2024-12-05 12:13:47.539362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.428 [2024-12-05 12:13:47.539372] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.429 [2024-12-05 12:13:47.551536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.429 [2024-12-05 12:13:47.551983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.429 [2024-12-05 12:13:47.552000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:13.429 [2024-12-05 12:13:47.552008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:13.429 [2024-12-05 12:13:47.552182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:13.429 [2024-12-05 12:13:47.552359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.429 [2024-12-05 12:13:47.552374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.429 [2024-12-05 12:13:47.552381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.429 [2024-12-05 12:13:47.552388] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.429 [2024-12-05 12:13:47.564614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.429 [2024-12-05 12:13:47.564957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.429 [2024-12-05 12:13:47.564974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:13.429 [2024-12-05 12:13:47.564982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:13.429 [2024-12-05 12:13:47.565177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:13.429 [2024-12-05 12:13:47.565376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.429 [2024-12-05 12:13:47.565385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.429 [2024-12-05 12:13:47.565392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.429 [2024-12-05 12:13:47.565399] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.429 [2024-12-05 12:13:47.577627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.429 [2024-12-05 12:13:47.577962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.429 [2024-12-05 12:13:47.577979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:13.429 [2024-12-05 12:13:47.577986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:13.429 [2024-12-05 12:13:47.578160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:13.429 [2024-12-05 12:13:47.578335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.429 [2024-12-05 12:13:47.578346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.429 [2024-12-05 12:13:47.578353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.429 [2024-12-05 12:13:47.578360] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.429 [2024-12-05 12:13:47.590613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.429 [2024-12-05 12:13:47.590974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.429 [2024-12-05 12:13:47.590990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:13.429 [2024-12-05 12:13:47.590998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:13.429 [2024-12-05 12:13:47.591171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:13.429 [2024-12-05 12:13:47.591345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.429 [2024-12-05 12:13:47.591354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.429 [2024-12-05 12:13:47.591360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.429 [2024-12-05 12:13:47.591371] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.429 [2024-12-05 12:13:47.603572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.429 [2024-12-05 12:13:47.603925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.429 [2024-12-05 12:13:47.603941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:13.429 [2024-12-05 12:13:47.603948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:13.429 [2024-12-05 12:13:47.604122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:13.429 [2024-12-05 12:13:47.604295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.429 [2024-12-05 12:13:47.604303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.429 [2024-12-05 12:13:47.604310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.429 [2024-12-05 12:13:47.604316] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.429 [2024-12-05 12:13:47.613006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:13.429 [2024-12-05 12:13:47.616604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.429 [2024-12-05 12:13:47.617010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.429 [2024-12-05 12:13:47.617026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:13.429 [2024-12-05 12:13:47.617033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:13.429 [2024-12-05 12:13:47.617222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:13.429 [2024-12-05 12:13:47.617401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.429 [2024-12-05 12:13:47.617410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.429 [2024-12-05 12:13:47.617417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.429 [2024-12-05 12:13:47.617427] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.688 [2024-12-05 12:13:47.629686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.688 [2024-12-05 12:13:47.630124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.688 [2024-12-05 12:13:47.630140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:13.688 [2024-12-05 12:13:47.630148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:13.688 [2024-12-05 12:13:47.630321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:13.688 [2024-12-05 12:13:47.630500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.688 [2024-12-05 12:13:47.630509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.688 [2024-12-05 12:13:47.630516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.688 [2024-12-05 12:13:47.630523] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.688 [2024-12-05 12:13:47.642606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.688 [2024-12-05 12:13:47.643034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.688 [2024-12-05 12:13:47.643050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:13.688 [2024-12-05 12:13:47.643058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:13.688 [2024-12-05 12:13:47.643228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:13.688 [2024-12-05 12:13:47.643418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.688 [2024-12-05 12:13:47.643427] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.688 [2024-12-05 12:13:47.643434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.688 [2024-12-05 12:13:47.643440] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:13.688 [2024-12-05 12:13:47.655560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.688 [2024-12-05 12:13:47.655657] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:13.688 [2024-12-05 12:13:47.655680] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:13.688 [2024-12-05 12:13:47.655688] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:13.688 [2024-12-05 12:13:47.655694] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:13.688 [2024-12-05 12:13:47.655699] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:13.688 [2024-12-05 12:13:47.656001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.688 [2024-12-05 12:13:47.656020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:13.688 [2024-12-05 12:13:47.656027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:13.688 [2024-12-05 12:13:47.656201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:13.688 [2024-12-05 12:13:47.656381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.688 [2024-12-05 12:13:47.656393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.688 [2024-12-05 12:13:47.656400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.688 [2024-12-05 12:13:47.656406] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.688 [2024-12-05 12:13:47.657064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:13.688 [2024-12-05 12:13:47.657155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:13.688 [2024-12-05 12:13:47.657156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:13.688 [2024-12-05 12:13:47.668655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.688 [2024-12-05 12:13:47.669019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.688 [2024-12-05 12:13:47.669040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:13.688 [2024-12-05 12:13:47.669048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:13.688 [2024-12-05 12:13:47.669222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:13.688 [2024-12-05 12:13:47.669401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.688 [2024-12-05 12:13:47.669410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.688 [2024-12-05 12:13:47.669418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.688 [2024-12-05 12:13:47.669425] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.688 [2024-12-05 12:13:47.681690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.688 [2024-12-05 12:13:47.682171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.688 [2024-12-05 12:13:47.682191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:13.688 [2024-12-05 12:13:47.682199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:13.688 [2024-12-05 12:13:47.682377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:13.688 [2024-12-05 12:13:47.682553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.688 [2024-12-05 12:13:47.682562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.688 [2024-12-05 12:13:47.682569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.688 [2024-12-05 12:13:47.682575] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.688 [2024-12-05 12:13:47.694817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.688 [2024-12-05 12:13:47.695248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.688 [2024-12-05 12:13:47.695268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:13.688 [2024-12-05 12:13:47.695276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:13.688 [2024-12-05 12:13:47.695454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:13.688 [2024-12-05 12:13:47.695631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.688 [2024-12-05 12:13:47.695645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.688 [2024-12-05 12:13:47.695652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.689 [2024-12-05 12:13:47.695659] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.689 [2024-12-05 12:13:47.707898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.689 [2024-12-05 12:13:47.708277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.689 [2024-12-05 12:13:47.708298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:13.689 [2024-12-05 12:13:47.708306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:13.689 [2024-12-05 12:13:47.708484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:13.689 [2024-12-05 12:13:47.708660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.689 [2024-12-05 12:13:47.708669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.689 [2024-12-05 12:13:47.708676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.689 [2024-12-05 12:13:47.708683] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.689 [2024-12-05 12:13:47.720905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.689 [2024-12-05 12:13:47.721356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.689 [2024-12-05 12:13:47.721379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:13.689 [2024-12-05 12:13:47.721388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:13.689 [2024-12-05 12:13:47.721562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:13.689 [2024-12-05 12:13:47.721737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.689 [2024-12-05 12:13:47.721745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.689 [2024-12-05 12:13:47.721753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.689 [2024-12-05 12:13:47.721759] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.689 [2024-12-05 12:13:47.733991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.689 [2024-12-05 12:13:47.734429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.689 [2024-12-05 12:13:47.734446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:13.689 [2024-12-05 12:13:47.734454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:13.689 [2024-12-05 12:13:47.734627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:13.689 [2024-12-05 12:13:47.734801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.689 [2024-12-05 12:13:47.734809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.689 [2024-12-05 12:13:47.734816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.689 [2024-12-05 12:13:47.734827] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.689 [2024-12-05 12:13:47.747071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.689 [2024-12-05 12:13:47.747506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.689 [2024-12-05 12:13:47.747523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:13.689 [2024-12-05 12:13:47.747530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:13.689 [2024-12-05 12:13:47.747704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:13.689 [2024-12-05 12:13:47.747879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.689 [2024-12-05 12:13:47.747887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.689 [2024-12-05 12:13:47.747895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.689 [2024-12-05 12:13:47.747902] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.689 12:13:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:13.689 12:13:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:30:13.689 12:13:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:30:13.689 12:13:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:13.689 12:13:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:13.689 [2024-12-05 12:13:47.760142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.689 [2024-12-05 12:13:47.760503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.689 [2024-12-05 12:13:47.760520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:13.689 [2024-12-05 12:13:47.760528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:13.689 [2024-12-05 12:13:47.760702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:13.689 [2024-12-05 12:13:47.760876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.689 [2024-12-05 12:13:47.760885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.689 [2024-12-05 12:13:47.760892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.689 [2024-12-05 12:13:47.760900] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.689 [2024-12-05 12:13:47.773142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.689 [2024-12-05 12:13:47.773484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.689 [2024-12-05 12:13:47.773501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:13.689 [2024-12-05 12:13:47.773509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:13.689 [2024-12-05 12:13:47.773683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:13.689 [2024-12-05 12:13:47.773858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.689 [2024-12-05 12:13:47.773867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.689 [2024-12-05 12:13:47.773879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.689 [2024-12-05 12:13:47.773885] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.689 12:13:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:13.689 12:13:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:13.689 12:13:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.689 12:13:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:13.689 [2024-12-05 12:13:47.786126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.689 [2024-12-05 12:13:47.786483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.689 [2024-12-05 12:13:47.786500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:13.689 [2024-12-05 12:13:47.786508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:13.689 [2024-12-05 12:13:47.786682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:13.689 [2024-12-05 12:13:47.786857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.689 [2024-12-05 12:13:47.786866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.689 [2024-12-05 12:13:47.786873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.689 [2024-12-05 12:13:47.786880] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.689 [2024-12-05 12:13:47.789532] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:13.689 12:13:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.689 12:13:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:13.689 12:13:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.689 12:13:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:13.689 [2024-12-05 12:13:47.799105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.689 [2024-12-05 12:13:47.799536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.689 [2024-12-05 12:13:47.799552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:13.689 [2024-12-05 12:13:47.799560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:13.689 [2024-12-05 12:13:47.799734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:13.689 [2024-12-05 12:13:47.799907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.689 [2024-12-05 12:13:47.799915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.689 [2024-12-05 12:13:47.799922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.689 [2024-12-05 12:13:47.799929] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.689 [2024-12-05 12:13:47.812162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.689 [2024-12-05 12:13:47.812594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.689 [2024-12-05 12:13:47.812611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:13.689 [2024-12-05 12:13:47.812622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:13.689 [2024-12-05 12:13:47.812795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:13.689 [2024-12-05 12:13:47.812969] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.690 [2024-12-05 12:13:47.812977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.690 [2024-12-05 12:13:47.812984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.690 [2024-12-05 12:13:47.812991] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.690 Malloc0 00:30:13.690 12:13:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.690 12:13:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:13.690 12:13:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.690 12:13:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:13.690 [2024-12-05 12:13:47.825243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.690 [2024-12-05 12:13:47.825670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.690 [2024-12-05 12:13:47.825687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:13.690 [2024-12-05 12:13:47.825695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:13.690 [2024-12-05 12:13:47.825868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:13.690 [2024-12-05 12:13:47.826043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.690 [2024-12-05 12:13:47.826051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.690 [2024-12-05 12:13:47.826058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.690 [2024-12-05 12:13:47.826064] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.690 12:13:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.690 12:13:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:13.690 12:13:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.690 12:13:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:13.690 [2024-12-05 12:13:47.838293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.690 [2024-12-05 12:13:47.838725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.690 [2024-12-05 12:13:47.838742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce9510 with addr=10.0.0.2, port=4420 00:30:13.690 [2024-12-05 12:13:47.838749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9510 is same with the state(6) to be set 00:30:13.690 [2024-12-05 12:13:47.838922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce9510 (9): Bad file descriptor 00:30:13.690 [2024-12-05 12:13:47.839097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.690 [2024-12-05 12:13:47.839105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.690 [2024-12-05 12:13:47.839112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.690 [2024-12-05 12:13:47.839122] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:13.690 12:13:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.690 12:13:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:13.690 12:13:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.690 12:13:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:13.690 [2024-12-05 12:13:47.843881] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:13.690 12:13:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.690 12:13:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 216776 00:30:13.690 [2024-12-05 12:13:47.851372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.948 5054.17 IOPS, 19.74 MiB/s [2024-12-05T11:13:48.144Z] [2024-12-05 12:13:47.954987] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:30:15.819 5798.71 IOPS, 22.65 MiB/s [2024-12-05T11:13:50.947Z] 6486.00 IOPS, 25.34 MiB/s [2024-12-05T11:13:51.878Z] 7028.67 IOPS, 27.46 MiB/s [2024-12-05T11:13:53.253Z] 7480.10 IOPS, 29.22 MiB/s [2024-12-05T11:13:54.191Z] 7848.45 IOPS, 30.66 MiB/s [2024-12-05T11:13:55.129Z] 8147.25 IOPS, 31.83 MiB/s [2024-12-05T11:13:56.064Z] 8407.46 IOPS, 32.84 MiB/s [2024-12-05T11:13:57.001Z] 8624.07 IOPS, 33.69 MiB/s 00:30:22.805 Latency(us) 00:30:22.805 [2024-12-05T11:13:57.001Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:22.805 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:22.805 Verification LBA range: start 0x0 length 0x4000 00:30:22.805 Nvme1n1 : 15.00 8811.37 34.42 11166.18 0.00 6387.58 608.55 15728.64 00:30:22.805 [2024-12-05T11:13:57.001Z] =================================================================================================================== 00:30:22.805 [2024-12-05T11:13:57.001Z] Total : 8811.37 34.42 11166.18 0.00 6387.58 608.55 15728.64 00:30:23.065 12:13:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:30:23.065 12:13:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:23.065 12:13:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.065 12:13:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:23.065 12:13:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.065 12:13:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:30:23.065 12:13:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:30:23.065 12:13:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # nvmfcleanup 00:30:23.065 12:13:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@99 -- # sync 00:30:23.065 12:13:57 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:30:23.065 12:13:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@102 -- # set +e 00:30:23.065 12:13:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@103 -- # for i in {1..20} 00:30:23.065 12:13:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:30:23.065 rmmod nvme_tcp 00:30:23.065 rmmod nvme_fabrics 00:30:23.065 rmmod nvme_keyring 00:30:23.065 12:13:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:30:23.065 12:13:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # set -e 00:30:23.065 12:13:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # return 0 00:30:23.065 12:13:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # '[' -n 217794 ']' 00:30:23.065 12:13:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@337 -- # killprocess 217794 00:30:23.065 12:13:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 217794 ']' 00:30:23.065 12:13:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 217794 00:30:23.065 12:13:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:30:23.065 12:13:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:23.065 12:13:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 217794 00:30:23.065 12:13:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:23.065 12:13:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:23.065 12:13:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 217794' 00:30:23.065 killing process with pid 217794 00:30:23.065 12:13:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@973 -- # kill 217794 00:30:23.066 12:13:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 217794 00:30:23.326 12:13:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:30:23.326 12:13:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # nvmf_fini 00:30:23.326 12:13:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@264 -- # local dev 00:30:23.326 12:13:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@267 -- # remove_target_ns 00:30:23.326 12:13:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:30:23.326 12:13:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:30:23.326 12:13:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:30:25.232 12:13:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@268 -- # delete_main_bridge 00:30:25.232 12:13:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:30:25.232 12:13:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@130 -- # return 0 00:30:25.232 12:13:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:30:25.232 12:13:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:30:25.232 12:13:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:30:25.232 12:13:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:30:25.232 12:13:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:30:25.232 12:13:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:30:25.232 12:13:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:30:25.232 12:13:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:30:25.232 12:13:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:30:25.232 12:13:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:30:25.232 12:13:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:30:25.232 12:13:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:30:25.232 12:13:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:30:25.232 12:13:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:30:25.232 12:13:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:30:25.232 12:13:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:30:25.232 12:13:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:30:25.232 12:13:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@41 -- # _dev=0 00:30:25.232 12:13:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@41 -- # dev_map=() 00:30:25.232 12:13:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@284 -- # iptr 00:30:25.232 12:13:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@542 -- # iptables-save 00:30:25.232 12:13:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:30:25.232 12:13:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@542 -- # iptables-restore 00:30:25.232 00:30:25.232 real 0m26.731s 00:30:25.232 user 1m2.542s 00:30:25.232 sys 0m6.799s 00:30:25.232 12:13:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:25.232 12:13:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:25.232 ************************************ 00:30:25.232 END TEST nvmf_bdevperf 00:30:25.232 ************************************ 
00:30:25.492 12:13:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:25.492 12:13:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:25.492 12:13:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:25.492 12:13:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:25.492 ************************************ 00:30:25.492 START TEST nvmf_target_disconnect 00:30:25.492 ************************************ 00:30:25.492 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:25.492 * Looking for test storage... 00:30:25.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:25.492 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:25.492 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:30:25.492 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:25.492 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:25.492 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:25.492 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:25.492 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:25.492 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:30:25.492 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 
00:30:25.492 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:30:25.492 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:30:25.492 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:30:25.492 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:30:25.492 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:30:25.492 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:25.492 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:30:25.492 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:30:25.492 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:25.492 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:25.492 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:30:25.492 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:30:25.492 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:25.492 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:30:25.492 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:30:25.492 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:30:25.492 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:30:25.492 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:25.492 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:30:25.492 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:30:25.492 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:25.492 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:25.492 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:30:25.492 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:25.492 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:25.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.492 --rc genhtml_branch_coverage=1 00:30:25.492 --rc genhtml_function_coverage=1 00:30:25.492 --rc genhtml_legend=1 00:30:25.492 --rc geninfo_all_blocks=1 00:30:25.492 --rc geninfo_unexecuted_blocks=1 
00:30:25.492 00:30:25.492 ' 00:30:25.492 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:25.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.492 --rc genhtml_branch_coverage=1 00:30:25.492 --rc genhtml_function_coverage=1 00:30:25.492 --rc genhtml_legend=1 00:30:25.492 --rc geninfo_all_blocks=1 00:30:25.492 --rc geninfo_unexecuted_blocks=1 00:30:25.492 00:30:25.492 ' 00:30:25.492 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:25.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.492 --rc genhtml_branch_coverage=1 00:30:25.492 --rc genhtml_function_coverage=1 00:30:25.492 --rc genhtml_legend=1 00:30:25.492 --rc geninfo_all_blocks=1 00:30:25.492 --rc geninfo_unexecuted_blocks=1 00:30:25.492 00:30:25.492 ' 00:30:25.492 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:25.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.492 --rc genhtml_branch_coverage=1 00:30:25.492 --rc genhtml_function_coverage=1 00:30:25.492 --rc genhtml_legend=1 00:30:25.492 --rc geninfo_all_blocks=1 00:30:25.492 --rc geninfo_unexecuted_blocks=1 00:30:25.492 00:30:25.492 ' 00:30:25.492 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:25.492 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:30:25.493 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:25.493 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:25.493 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:25.493 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:30:25.493 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:25.493 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:30:25.493 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:25.493 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:30:25.753 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:25.753 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:25.753 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:25.753 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:30:25.753 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:30:25.753 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:25.753 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:25.753 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:30:25.753 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:25.753 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:25.753 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:25.753 12:13:59 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.753 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.753 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.753 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:30:25.753 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.753 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:30:25.753 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:30:25.753 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:25.753 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:30:25.753 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@50 
-- # : 0 00:30:25.753 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:30:25.753 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:30:25.753 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:30:25.753 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:25.753 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:25.753 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:30:25.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:30:25.753 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:30:25.753 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:30:25.753 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@54 -- # have_pci_nics=0 00:30:25.753 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:25.753 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:30:25.753 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:30:25.753 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:30:25.753 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:30:25.753 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:25.753 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # 
prepare_net_devs 00:30:25.753 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # local -g is_hw=no 00:30:25.753 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # remove_target_ns 00:30:25.754 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:30:25.754 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:30:25.754 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_target_ns 00:30:25.754 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:30:25.754 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:30:25.754 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # xtrace_disable 00:30:25.754 12:13:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@131 -- # pci_devs=() 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@131 -- # local -a pci_devs 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@132 -- # pci_net_devs=() 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@133 -- # pci_drivers=() 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@133 -- # local -A pci_drivers 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@135 -- # net_devs=() 00:30:32.339 12:14:05 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@135 -- # local -ga net_devs 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@136 -- # e810=() 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@136 -- # local -ga e810 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@137 -- # x722=() 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@137 -- # local -ga x722 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@138 -- # mlx=() 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@138 -- # local -ga mlx 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@157 -- 
# mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:32.339 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:30:32.339 12:14:05 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:32.339 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # [[ up == up ]] 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # echo 'Found 
net devices under 0000:86:00.0: cvl_0_0' 00:30:32.339 Found net devices under 0000:86:00.0: cvl_0_0 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:32.339 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # [[ up == up ]] 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:32.340 Found net devices under 0000:86:00.1: cvl_0_1 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # is_hw=yes 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@266 -- # nvmf_tcp_init 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@257 -- # create_target_ns 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@27 -- # local -gA dev_map 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@28 -- # local 
-g _dev 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@44 -- # ips=() 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:30:32.340 
12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@11 -- # local val=167772161 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:30:32.340 10.0.0.1 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@73 -- # set_ip 
cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@11 -- # local val=167772162 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:30:32.340 10.0.0.2 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:30:32.340 12:14:05 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:30:32.340 12:14:05 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@38 -- # ping_ips 1 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@107 -- # local dev=initiator0 00:30:32.340 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:30:32.341 
12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:30:32.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:32.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.492 ms 00:30:32.341 00:30:32.341 --- 10.0.0.1 ping statistics --- 00:30:32.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:32.341 rtt min/avg/max/mdev = 0.492/0.492/0.492/0.000 ms 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # get_net_dev target0 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@107 -- # local dev=target0 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 
00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:30:32.341 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:32.341 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:30:32.341 00:30:32.341 --- 10.0.0.2 ping statistics --- 00:30:32.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:32.341 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@98 -- # (( pair++ )) 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # return 0 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:30:32.341 
12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@107 -- # local dev=initiator0 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@334 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@107 -- # local dev=initiator1 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # return 1 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # dev= 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@169 -- # return 0 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:32.341 12:14:05 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # get_net_dev target0 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@107 -- # local dev=target0 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:30:32.341 
12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # get_net_dev target1 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@107 -- # local dev=target1 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:30:32.341 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # return 1 00:30:32.342 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # dev= 00:30:32.342 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@169 -- # return 0 00:30:32.342 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:30:32.342 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:32.342 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:30:32.342 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:30:32.342 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:32.342 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:30:32.342 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:30:32.342 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test 
nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:32.342 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:32.342 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:32.342 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:32.342 ************************************ 00:30:32.342 START TEST nvmf_target_disconnect_tc1 00:30:32.342 ************************************ 00:30:32.342 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:30:32.342 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:32.342 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:30:32.342 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:32.342 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:32.342 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:32.342 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:32.342 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:32.342 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:32.342 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:32.342 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:32.342 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:30:32.342 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:32.342 [2024-12-05 12:14:05.910352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.342 [2024-12-05 12:14:05.910427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d8ac0 with addr=10.0.0.2, port=4420 00:30:32.342 [2024-12-05 12:14:05.910450] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:32.342 [2024-12-05 12:14:05.910461] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:32.342 [2024-12-05 12:14:05.910468] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:30:32.342 spdk_nvme_probe() failed for transport 
address '10.0.0.2' 00:30:32.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:32.342 Initializing NVMe Controllers 00:30:32.342 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:30:32.342 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:32.342 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:32.342 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:32.342 00:30:32.342 real 0m0.122s 00:30:32.342 user 0m0.050s 00:30:32.342 sys 0m0.072s 00:30:32.342 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:32.342 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:32.342 ************************************ 00:30:32.342 END TEST nvmf_target_disconnect_tc1 00:30:32.342 ************************************ 00:30:32.342 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:32.342 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:32.342 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:32.342 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:32.342 ************************************ 00:30:32.342 START TEST nvmf_target_disconnect_tc2 00:30:32.342 ************************************ 00:30:32.342 12:14:05 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:30:32.342 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:30:32.342 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:32.342 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:30:32.342 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:32.342 12:14:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:32.342 12:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@328 -- # nvmfpid=222941 00:30:32.342 12:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@329 -- # waitforlisten 222941 00:30:32.342 12:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:32.342 12:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 222941 ']' 00:30:32.342 12:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:32.342 12:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:32.342 12:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:32.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:32.342 12:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:32.342 12:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:32.342 [2024-12-05 12:14:06.052084] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:30:32.342 [2024-12-05 12:14:06.052136] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:32.342 [2024-12-05 12:14:06.132699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:32.342 [2024-12-05 12:14:06.174838] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:32.342 [2024-12-05 12:14:06.174878] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:32.342 [2024-12-05 12:14:06.174884] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:32.342 [2024-12-05 12:14:06.174890] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:32.342 [2024-12-05 12:14:06.174895] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:32.342 [2024-12-05 12:14:06.176568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:32.342 [2024-12-05 12:14:06.176676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:32.342 [2024-12-05 12:14:06.176783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:32.342 [2024-12-05 12:14:06.176784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:32.342 12:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:32.342 12:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:30:32.342 12:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:30:32.342 12:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:32.342 12:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:32.342 12:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:32.342 12:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:32.342 12:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.342 12:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:32.342 Malloc0 00:30:32.342 12:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.342 12:14:06 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:32.342 12:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.342 12:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:32.342 [2024-12-05 12:14:06.347024] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:32.342 12:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.343 12:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:32.343 12:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.343 12:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:32.343 12:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.343 12:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:32.343 12:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.343 12:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:32.343 12:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.343 12:14:06 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:32.343 12:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.343 12:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:32.343 [2024-12-05 12:14:06.379305] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:32.343 12:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.343 12:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:32.343 12:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.343 12:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:32.343 12:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.343 12:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=223019 00:30:32.343 12:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:30:32.343 12:14:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:34.430 12:14:08 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 222941 00:30:34.430 12:14:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Write completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Write completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Write completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Write completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Write completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Write completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Write completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Write completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 
Write completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Write completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Write completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Write completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Write completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Write completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Write completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Write completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 [2024-12-05 12:14:08.407514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Write completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Write completed with error (sct=0, sc=8) 00:30:34.430 starting I/O 
failed 00:30:34.430 Write completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Write completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Write completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Write completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Write completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Write completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Write completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Write completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Write completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Write completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Write completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Write completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Write completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 
00:30:34.430 [2024-12-05 12:14:08.407721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Write completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Write completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Write completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Write completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 
starting I/O failed 00:30:34.430 Write completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Write completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Write completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Write completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Write completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Write completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Write completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 [2024-12-05 12:14:08.407910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Write completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Write completed with error (sct=0, sc=8) 00:30:34.430 starting I/O failed 00:30:34.430 Read completed with error (sct=0, sc=8) 00:30:34.431 starting I/O failed 00:30:34.431 Write completed with error (sct=0, sc=8) 00:30:34.431 starting I/O failed 00:30:34.431 Read completed with error (sct=0, sc=8) 00:30:34.431 starting I/O failed 00:30:34.431 Write completed with error (sct=0, sc=8) 00:30:34.431 starting I/O failed 00:30:34.431 Write completed with error 
(sct=0, sc=8) 00:30:34.431 starting I/O failed 00:30:34.431 Write completed with error (sct=0, sc=8) 00:30:34.431 starting I/O failed 00:30:34.431 Read completed with error (sct=0, sc=8) 00:30:34.431 starting I/O failed 00:30:34.431 Read completed with error (sct=0, sc=8) 00:30:34.431 starting I/O failed 00:30:34.431 Read completed with error (sct=0, sc=8) 00:30:34.431 starting I/O failed 00:30:34.431 Read completed with error (sct=0, sc=8) 00:30:34.431 starting I/O failed 00:30:34.431 Read completed with error (sct=0, sc=8) 00:30:34.431 starting I/O failed 00:30:34.431 Read completed with error (sct=0, sc=8) 00:30:34.431 starting I/O failed 00:30:34.431 Write completed with error (sct=0, sc=8) 00:30:34.431 starting I/O failed 00:30:34.431 Write completed with error (sct=0, sc=8) 00:30:34.431 starting I/O failed 00:30:34.431 Read completed with error (sct=0, sc=8) 00:30:34.431 starting I/O failed 00:30:34.431 Read completed with error (sct=0, sc=8) 00:30:34.431 starting I/O failed 00:30:34.431 Write completed with error (sct=0, sc=8) 00:30:34.431 starting I/O failed 00:30:34.431 Read completed with error (sct=0, sc=8) 00:30:34.431 starting I/O failed 00:30:34.431 Write completed with error (sct=0, sc=8) 00:30:34.431 starting I/O failed 00:30:34.431 Write completed with error (sct=0, sc=8) 00:30:34.431 starting I/O failed 00:30:34.431 Write completed with error (sct=0, sc=8) 00:30:34.431 starting I/O failed 00:30:34.431 Write completed with error (sct=0, sc=8) 00:30:34.431 starting I/O failed 00:30:34.431 Read completed with error (sct=0, sc=8) 00:30:34.431 starting I/O failed 00:30:34.431 Read completed with error (sct=0, sc=8) 00:30:34.431 starting I/O failed 00:30:34.431 Write completed with error (sct=0, sc=8) 00:30:34.431 starting I/O failed 00:30:34.431 Write completed with error (sct=0, sc=8) 00:30:34.431 starting I/O failed 00:30:34.431 Write completed with error (sct=0, sc=8) 00:30:34.431 starting I/O failed 00:30:34.431 [2024-12-05 12:14:08.408102] 
nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:34.431 [2024-12-05 12:14:08.408287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.431 [2024-12-05 12:14:08.408309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.431 qpair failed and we were unable to recover it.
[... identical three-record sequence (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously, first for tqpair=0x7fc420000b90 and then for tqpair=0xc4cbe0, over timestamps 12:14:08.408502 through 12:14:08.436307 ...]
00:30:34.434 [2024-12-05 12:14:08.436275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.434 [2024-12-05 12:14:08.436307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.434 qpair failed and we were unable to recover it.
00:30:34.434 [2024-12-05 12:14:08.436523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.434 [2024-12-05 12:14:08.436555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.434 qpair failed and we were unable to recover it. 00:30:34.434 [2024-12-05 12:14:08.436757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.434 [2024-12-05 12:14:08.436790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.434 qpair failed and we were unable to recover it. 00:30:34.434 [2024-12-05 12:14:08.436921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.434 [2024-12-05 12:14:08.436954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.434 qpair failed and we were unable to recover it. 00:30:34.434 [2024-12-05 12:14:08.437192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.434 [2024-12-05 12:14:08.437226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.434 qpair failed and we were unable to recover it. 00:30:34.434 [2024-12-05 12:14:08.437418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.434 [2024-12-05 12:14:08.437451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.434 qpair failed and we were unable to recover it. 
00:30:34.434 [2024-12-05 12:14:08.437717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.434 [2024-12-05 12:14:08.437748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.434 qpair failed and we were unable to recover it. 00:30:34.434 [2024-12-05 12:14:08.437930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.434 [2024-12-05 12:14:08.437961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.434 qpair failed and we were unable to recover it. 00:30:34.434 [2024-12-05 12:14:08.438199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.434 [2024-12-05 12:14:08.438231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.434 qpair failed and we were unable to recover it. 00:30:34.434 [2024-12-05 12:14:08.438494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.434 [2024-12-05 12:14:08.438527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.434 qpair failed and we were unable to recover it. 00:30:34.434 [2024-12-05 12:14:08.438715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.434 [2024-12-05 12:14:08.438746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.434 qpair failed and we were unable to recover it. 
00:30:34.434 [2024-12-05 12:14:08.438938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.434 [2024-12-05 12:14:08.438970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.434 qpair failed and we were unable to recover it. 00:30:34.434 [2024-12-05 12:14:08.439145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.434 [2024-12-05 12:14:08.439176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.434 qpair failed and we were unable to recover it. 00:30:34.434 [2024-12-05 12:14:08.439309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.434 [2024-12-05 12:14:08.439340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.434 qpair failed and we were unable to recover it. 00:30:34.434 [2024-12-05 12:14:08.439695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.434 [2024-12-05 12:14:08.439783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.434 qpair failed and we were unable to recover it. 00:30:34.434 [2024-12-05 12:14:08.440121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.434 [2024-12-05 12:14:08.440157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.434 qpair failed and we were unable to recover it. 
00:30:34.434 [2024-12-05 12:14:08.440435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.434 [2024-12-05 12:14:08.440471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.434 qpair failed and we were unable to recover it. 00:30:34.434 [2024-12-05 12:14:08.440724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.434 [2024-12-05 12:14:08.440756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.434 qpair failed and we were unable to recover it. 00:30:34.434 [2024-12-05 12:14:08.440941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.434 [2024-12-05 12:14:08.440973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.434 qpair failed and we were unable to recover it. 00:30:34.434 [2024-12-05 12:14:08.441152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.434 [2024-12-05 12:14:08.441184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.434 qpair failed and we were unable to recover it. 00:30:34.434 [2024-12-05 12:14:08.441381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.434 [2024-12-05 12:14:08.441414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.434 qpair failed and we were unable to recover it. 
00:30:34.434 [2024-12-05 12:14:08.441684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.434 [2024-12-05 12:14:08.441716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.434 qpair failed and we were unable to recover it. 00:30:34.434 [2024-12-05 12:14:08.441988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.434 [2024-12-05 12:14:08.442021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.434 qpair failed and we were unable to recover it. 00:30:34.434 [2024-12-05 12:14:08.442194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.434 [2024-12-05 12:14:08.442226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.434 qpair failed and we were unable to recover it. 00:30:34.435 [2024-12-05 12:14:08.442500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.435 [2024-12-05 12:14:08.442533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.435 qpair failed and we were unable to recover it. 00:30:34.435 [2024-12-05 12:14:08.442684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.435 [2024-12-05 12:14:08.442714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.435 qpair failed and we were unable to recover it. 
00:30:34.435 [2024-12-05 12:14:08.442903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.435 [2024-12-05 12:14:08.442938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.435 qpair failed and we were unable to recover it. 00:30:34.435 [2024-12-05 12:14:08.443200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.435 [2024-12-05 12:14:08.443231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.435 qpair failed and we were unable to recover it. 00:30:34.435 [2024-12-05 12:14:08.443429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.435 [2024-12-05 12:14:08.443461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.435 qpair failed and we were unable to recover it. 00:30:34.435 [2024-12-05 12:14:08.443687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.435 [2024-12-05 12:14:08.443719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.435 qpair failed and we were unable to recover it. 00:30:34.435 [2024-12-05 12:14:08.443861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.435 [2024-12-05 12:14:08.443893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.435 qpair failed and we were unable to recover it. 
00:30:34.435 [2024-12-05 12:14:08.444159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.435 [2024-12-05 12:14:08.444192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.435 qpair failed and we were unable to recover it. 00:30:34.435 [2024-12-05 12:14:08.444393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.435 [2024-12-05 12:14:08.444426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.435 qpair failed and we were unable to recover it. 00:30:34.435 [2024-12-05 12:14:08.444598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.435 [2024-12-05 12:14:08.444629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.435 qpair failed and we were unable to recover it. 00:30:34.435 [2024-12-05 12:14:08.444820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.435 [2024-12-05 12:14:08.444851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.435 qpair failed and we were unable to recover it. 00:30:34.435 [2024-12-05 12:14:08.445041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.435 [2024-12-05 12:14:08.445073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.435 qpair failed and we were unable to recover it. 
00:30:34.435 [2024-12-05 12:14:08.445306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.435 [2024-12-05 12:14:08.445337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.435 qpair failed and we were unable to recover it. 00:30:34.435 [2024-12-05 12:14:08.445584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.435 [2024-12-05 12:14:08.445616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.435 qpair failed and we were unable to recover it. 00:30:34.435 [2024-12-05 12:14:08.445845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.435 [2024-12-05 12:14:08.445877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.435 qpair failed and we were unable to recover it. 00:30:34.435 [2024-12-05 12:14:08.445994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.435 [2024-12-05 12:14:08.446026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.435 qpair failed and we were unable to recover it. 00:30:34.435 [2024-12-05 12:14:08.446268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.435 [2024-12-05 12:14:08.446300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.435 qpair failed and we were unable to recover it. 
00:30:34.435 [2024-12-05 12:14:08.446468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.435 [2024-12-05 12:14:08.446500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.435 qpair failed and we were unable to recover it. 00:30:34.435 [2024-12-05 12:14:08.446695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.435 [2024-12-05 12:14:08.446743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.435 qpair failed and we were unable to recover it. 00:30:34.435 [2024-12-05 12:14:08.446914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.435 [2024-12-05 12:14:08.446947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.435 qpair failed and we were unable to recover it. 00:30:34.435 [2024-12-05 12:14:08.447128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.435 [2024-12-05 12:14:08.447159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.435 qpair failed and we were unable to recover it. 00:30:34.435 [2024-12-05 12:14:08.447347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.435 [2024-12-05 12:14:08.447390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.435 qpair failed and we were unable to recover it. 
00:30:34.435 [2024-12-05 12:14:08.447575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.435 [2024-12-05 12:14:08.447608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.435 qpair failed and we were unable to recover it. 00:30:34.435 [2024-12-05 12:14:08.447805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.435 [2024-12-05 12:14:08.447836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.435 qpair failed and we were unable to recover it. 00:30:34.435 [2024-12-05 12:14:08.448103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.435 [2024-12-05 12:14:08.448134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.435 qpair failed and we were unable to recover it. 00:30:34.435 [2024-12-05 12:14:08.448421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.435 [2024-12-05 12:14:08.448454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.435 qpair failed and we were unable to recover it. 00:30:34.435 [2024-12-05 12:14:08.448636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.435 [2024-12-05 12:14:08.448668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.435 qpair failed and we were unable to recover it. 
00:30:34.435 [2024-12-05 12:14:08.448920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.435 [2024-12-05 12:14:08.448952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.435 qpair failed and we were unable to recover it. 00:30:34.435 [2024-12-05 12:14:08.449247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.435 [2024-12-05 12:14:08.449278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.435 qpair failed and we were unable to recover it. 00:30:34.435 [2024-12-05 12:14:08.449550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.435 [2024-12-05 12:14:08.449583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.435 qpair failed and we were unable to recover it. 00:30:34.435 [2024-12-05 12:14:08.449779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.435 [2024-12-05 12:14:08.449812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.435 qpair failed and we were unable to recover it. 00:30:34.435 [2024-12-05 12:14:08.450001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.435 [2024-12-05 12:14:08.450031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.435 qpair failed and we were unable to recover it. 
00:30:34.435 [2024-12-05 12:14:08.450184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.435 [2024-12-05 12:14:08.450216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.435 qpair failed and we were unable to recover it. 00:30:34.435 [2024-12-05 12:14:08.450409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.435 [2024-12-05 12:14:08.450442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.435 qpair failed and we were unable to recover it. 00:30:34.435 [2024-12-05 12:14:08.450730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.435 [2024-12-05 12:14:08.450762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.435 qpair failed and we were unable to recover it. 00:30:34.435 [2024-12-05 12:14:08.450888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.435 [2024-12-05 12:14:08.450919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.435 qpair failed and we were unable to recover it. 00:30:34.435 [2024-12-05 12:14:08.451177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.435 [2024-12-05 12:14:08.451209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.435 qpair failed and we were unable to recover it. 
00:30:34.435 [2024-12-05 12:14:08.451430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.435 [2024-12-05 12:14:08.451464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.435 qpair failed and we were unable to recover it. 00:30:34.436 [2024-12-05 12:14:08.451607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.436 [2024-12-05 12:14:08.451638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.436 qpair failed and we were unable to recover it. 00:30:34.436 [2024-12-05 12:14:08.451754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.436 [2024-12-05 12:14:08.451786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.436 qpair failed and we were unable to recover it. 00:30:34.436 [2024-12-05 12:14:08.452054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.436 [2024-12-05 12:14:08.452085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.436 qpair failed and we were unable to recover it. 00:30:34.436 [2024-12-05 12:14:08.452289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.436 [2024-12-05 12:14:08.452320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.436 qpair failed and we were unable to recover it. 
00:30:34.436 [2024-12-05 12:14:08.452571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.436 [2024-12-05 12:14:08.452604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.436 qpair failed and we were unable to recover it. 00:30:34.436 [2024-12-05 12:14:08.452866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.436 [2024-12-05 12:14:08.452898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.436 qpair failed and we were unable to recover it. 00:30:34.436 [2024-12-05 12:14:08.453110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.436 [2024-12-05 12:14:08.453141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.436 qpair failed and we were unable to recover it. 00:30:34.436 [2024-12-05 12:14:08.453413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.436 [2024-12-05 12:14:08.453447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.436 qpair failed and we were unable to recover it. 00:30:34.436 [2024-12-05 12:14:08.453656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.436 [2024-12-05 12:14:08.453689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.436 qpair failed and we were unable to recover it. 
00:30:34.436 [2024-12-05 12:14:08.453959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.436 [2024-12-05 12:14:08.453990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.436 qpair failed and we were unable to recover it. 00:30:34.436 [2024-12-05 12:14:08.454175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.436 [2024-12-05 12:14:08.454206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.436 qpair failed and we were unable to recover it. 00:30:34.436 [2024-12-05 12:14:08.454499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.436 [2024-12-05 12:14:08.454532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.436 qpair failed and we were unable to recover it. 00:30:34.436 [2024-12-05 12:14:08.454725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.436 [2024-12-05 12:14:08.454756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.436 qpair failed and we were unable to recover it. 00:30:34.436 [2024-12-05 12:14:08.454882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.436 [2024-12-05 12:14:08.454914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.436 qpair failed and we were unable to recover it. 
00:30:34.436 [2024-12-05 12:14:08.455120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.436 [2024-12-05 12:14:08.455151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.436 qpair failed and we were unable to recover it. 00:30:34.436 [2024-12-05 12:14:08.455402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.436 [2024-12-05 12:14:08.455435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.436 qpair failed and we were unable to recover it. 00:30:34.436 [2024-12-05 12:14:08.455624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.436 [2024-12-05 12:14:08.455656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.436 qpair failed and we were unable to recover it. 00:30:34.436 [2024-12-05 12:14:08.455898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.436 [2024-12-05 12:14:08.455930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.436 qpair failed and we were unable to recover it. 00:30:34.436 [2024-12-05 12:14:08.456223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.436 [2024-12-05 12:14:08.456255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.436 qpair failed and we were unable to recover it. 
00:30:34.439 [2024-12-05 12:14:08.483907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.439 [2024-12-05 12:14:08.483939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.439 qpair failed and we were unable to recover it. 00:30:34.439 [2024-12-05 12:14:08.484223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.439 [2024-12-05 12:14:08.484257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.439 qpair failed and we were unable to recover it. 00:30:34.439 [2024-12-05 12:14:08.484395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.439 [2024-12-05 12:14:08.484428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.439 qpair failed and we were unable to recover it. 00:30:34.439 [2024-12-05 12:14:08.484637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.439 [2024-12-05 12:14:08.484668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.439 qpair failed and we were unable to recover it. 00:30:34.439 [2024-12-05 12:14:08.484935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.439 [2024-12-05 12:14:08.484969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.439 qpair failed and we were unable to recover it. 
00:30:34.439 [2024-12-05 12:14:08.485186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.439 [2024-12-05 12:14:08.485220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.439 qpair failed and we were unable to recover it. 00:30:34.439 [2024-12-05 12:14:08.485407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.439 [2024-12-05 12:14:08.485441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.439 qpair failed and we were unable to recover it. 00:30:34.439 [2024-12-05 12:14:08.485588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.439 [2024-12-05 12:14:08.485620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.439 qpair failed and we were unable to recover it. 00:30:34.439 [2024-12-05 12:14:08.485865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.439 [2024-12-05 12:14:08.485896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.439 qpair failed and we were unable to recover it. 00:30:34.439 [2024-12-05 12:14:08.486146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.439 [2024-12-05 12:14:08.486178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.439 qpair failed and we were unable to recover it. 
00:30:34.439 [2024-12-05 12:14:08.486363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.439 [2024-12-05 12:14:08.486404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.439 qpair failed and we were unable to recover it. 00:30:34.439 [2024-12-05 12:14:08.486687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.439 [2024-12-05 12:14:08.486725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.439 qpair failed and we were unable to recover it. 00:30:34.439 [2024-12-05 12:14:08.486956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.439 [2024-12-05 12:14:08.486989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.439 qpair failed and we were unable to recover it. 00:30:34.439 [2024-12-05 12:14:08.487181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.439 [2024-12-05 12:14:08.487215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.439 qpair failed and we were unable to recover it. 00:30:34.439 [2024-12-05 12:14:08.487417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.439 [2024-12-05 12:14:08.487450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.439 qpair failed and we were unable to recover it. 
00:30:34.439 [2024-12-05 12:14:08.487625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.439 [2024-12-05 12:14:08.487655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.439 qpair failed and we were unable to recover it. 00:30:34.439 [2024-12-05 12:14:08.487870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.439 [2024-12-05 12:14:08.487900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.439 qpair failed and we were unable to recover it. 00:30:34.439 [2024-12-05 12:14:08.488113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.439 [2024-12-05 12:14:08.488146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.439 qpair failed and we were unable to recover it. 00:30:34.439 [2024-12-05 12:14:08.488343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.440 [2024-12-05 12:14:08.488384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.440 qpair failed and we were unable to recover it. 00:30:34.440 [2024-12-05 12:14:08.488635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.440 [2024-12-05 12:14:08.488666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.440 qpair failed and we were unable to recover it. 
00:30:34.440 [2024-12-05 12:14:08.488853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.440 [2024-12-05 12:14:08.488884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.440 qpair failed and we were unable to recover it. 00:30:34.440 [2024-12-05 12:14:08.489168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.440 [2024-12-05 12:14:08.489200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.440 qpair failed and we were unable to recover it. 00:30:34.440 [2024-12-05 12:14:08.489397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.440 [2024-12-05 12:14:08.489430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.440 qpair failed and we were unable to recover it. 00:30:34.440 [2024-12-05 12:14:08.489619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.440 [2024-12-05 12:14:08.489651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.440 qpair failed and we were unable to recover it. 00:30:34.440 [2024-12-05 12:14:08.489913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.440 [2024-12-05 12:14:08.489945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.440 qpair failed and we were unable to recover it. 
00:30:34.440 [2024-12-05 12:14:08.490243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.440 [2024-12-05 12:14:08.490275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.440 qpair failed and we were unable to recover it. 00:30:34.440 [2024-12-05 12:14:08.490479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.440 [2024-12-05 12:14:08.490510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.440 qpair failed and we were unable to recover it. 00:30:34.440 [2024-12-05 12:14:08.490792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.440 [2024-12-05 12:14:08.490824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.440 qpair failed and we were unable to recover it. 00:30:34.440 [2024-12-05 12:14:08.490932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.440 [2024-12-05 12:14:08.490964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.440 qpair failed and we were unable to recover it. 00:30:34.440 [2024-12-05 12:14:08.491144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.440 [2024-12-05 12:14:08.491176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.440 qpair failed and we were unable to recover it. 
00:30:34.440 [2024-12-05 12:14:08.491405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.440 [2024-12-05 12:14:08.491439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.440 qpair failed and we were unable to recover it. 00:30:34.440 [2024-12-05 12:14:08.491569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.440 [2024-12-05 12:14:08.491601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.440 qpair failed and we were unable to recover it. 00:30:34.440 [2024-12-05 12:14:08.491845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.440 [2024-12-05 12:14:08.491877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.440 qpair failed and we were unable to recover it. 00:30:34.440 [2024-12-05 12:14:08.492152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.440 [2024-12-05 12:14:08.492182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.440 qpair failed and we were unable to recover it. 00:30:34.440 [2024-12-05 12:14:08.492357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.440 [2024-12-05 12:14:08.492396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.440 qpair failed and we were unable to recover it. 
00:30:34.440 [2024-12-05 12:14:08.492586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.440 [2024-12-05 12:14:08.492617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.440 qpair failed and we were unable to recover it. 00:30:34.440 [2024-12-05 12:14:08.492807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.440 [2024-12-05 12:14:08.492838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.440 qpair failed and we were unable to recover it. 00:30:34.440 [2024-12-05 12:14:08.492971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.440 [2024-12-05 12:14:08.493002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.440 qpair failed and we were unable to recover it. 00:30:34.440 [2024-12-05 12:14:08.493216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.440 [2024-12-05 12:14:08.493248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.440 qpair failed and we were unable to recover it. 00:30:34.440 [2024-12-05 12:14:08.493538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.440 [2024-12-05 12:14:08.493571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.440 qpair failed and we were unable to recover it. 
00:30:34.440 [2024-12-05 12:14:08.493706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.440 [2024-12-05 12:14:08.493738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.440 qpair failed and we were unable to recover it. 00:30:34.440 [2024-12-05 12:14:08.493856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.440 [2024-12-05 12:14:08.493887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.440 qpair failed and we were unable to recover it. 00:30:34.440 [2024-12-05 12:14:08.494090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.440 [2024-12-05 12:14:08.494120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.440 qpair failed and we were unable to recover it. 00:30:34.440 [2024-12-05 12:14:08.494385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.440 [2024-12-05 12:14:08.494418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.440 qpair failed and we were unable to recover it. 00:30:34.440 [2024-12-05 12:14:08.494604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.440 [2024-12-05 12:14:08.494634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.440 qpair failed and we were unable to recover it. 
00:30:34.440 [2024-12-05 12:14:08.494738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.440 [2024-12-05 12:14:08.494769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.440 qpair failed and we were unable to recover it. 00:30:34.440 [2024-12-05 12:14:08.495029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.440 [2024-12-05 12:14:08.495060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.440 qpair failed and we were unable to recover it. 00:30:34.440 [2024-12-05 12:14:08.495183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.440 [2024-12-05 12:14:08.495213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.440 qpair failed and we were unable to recover it. 00:30:34.440 [2024-12-05 12:14:08.495404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.440 [2024-12-05 12:14:08.495435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.440 qpair failed and we were unable to recover it. 00:30:34.440 [2024-12-05 12:14:08.495565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.440 [2024-12-05 12:14:08.495595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.440 qpair failed and we were unable to recover it. 
00:30:34.440 [2024-12-05 12:14:08.495868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.440 [2024-12-05 12:14:08.495901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.440 qpair failed and we were unable to recover it. 00:30:34.440 [2024-12-05 12:14:08.496171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.440 [2024-12-05 12:14:08.496213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.440 qpair failed and we were unable to recover it. 00:30:34.440 [2024-12-05 12:14:08.496319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.440 [2024-12-05 12:14:08.496351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.440 qpair failed and we were unable to recover it. 00:30:34.440 [2024-12-05 12:14:08.496553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.440 [2024-12-05 12:14:08.496586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.440 qpair failed and we were unable to recover it. 00:30:34.440 [2024-12-05 12:14:08.496850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.440 [2024-12-05 12:14:08.496881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.440 qpair failed and we were unable to recover it. 
00:30:34.440 [2024-12-05 12:14:08.497086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.440 [2024-12-05 12:14:08.497116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.440 qpair failed and we were unable to recover it. 00:30:34.440 [2024-12-05 12:14:08.497331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.441 [2024-12-05 12:14:08.497364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.441 qpair failed and we were unable to recover it. 00:30:34.441 [2024-12-05 12:14:08.497594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.441 [2024-12-05 12:14:08.497626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.441 qpair failed and we were unable to recover it. 00:30:34.441 [2024-12-05 12:14:08.497844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.441 [2024-12-05 12:14:08.497876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.441 qpair failed and we were unable to recover it. 00:30:34.441 [2024-12-05 12:14:08.498167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.441 [2024-12-05 12:14:08.498199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.441 qpair failed and we were unable to recover it. 
00:30:34.441 [2024-12-05 12:14:08.498396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.441 [2024-12-05 12:14:08.498429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.441 qpair failed and we were unable to recover it. 00:30:34.441 [2024-12-05 12:14:08.498606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.441 [2024-12-05 12:14:08.498638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.441 qpair failed and we were unable to recover it. 00:30:34.441 [2024-12-05 12:14:08.498767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.441 [2024-12-05 12:14:08.498798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.441 qpair failed and we were unable to recover it. 00:30:34.441 [2024-12-05 12:14:08.499063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.441 [2024-12-05 12:14:08.499094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.441 qpair failed and we were unable to recover it. 00:30:34.441 [2024-12-05 12:14:08.499357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.441 [2024-12-05 12:14:08.499397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.441 qpair failed and we were unable to recover it. 
00:30:34.441 [2024-12-05 12:14:08.499623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.441 [2024-12-05 12:14:08.499655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.441 qpair failed and we were unable to recover it. 00:30:34.441 [2024-12-05 12:14:08.499850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.441 [2024-12-05 12:14:08.499880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.441 qpair failed and we were unable to recover it. 00:30:34.441 [2024-12-05 12:14:08.500061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.441 [2024-12-05 12:14:08.500092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.441 qpair failed and we were unable to recover it. 00:30:34.441 [2024-12-05 12:14:08.500218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.441 [2024-12-05 12:14:08.500250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.441 qpair failed and we were unable to recover it. 00:30:34.441 [2024-12-05 12:14:08.500446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.441 [2024-12-05 12:14:08.500488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.441 qpair failed and we were unable to recover it. 
00:30:34.441 [2024-12-05 12:14:08.500663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.441 [2024-12-05 12:14:08.500695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.441 qpair failed and we were unable to recover it. 00:30:34.441 [2024-12-05 12:14:08.500871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.441 [2024-12-05 12:14:08.500902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.441 qpair failed and we were unable to recover it. 00:30:34.441 [2024-12-05 12:14:08.501029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.441 [2024-12-05 12:14:08.501058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.441 qpair failed and we were unable to recover it. 00:30:34.441 [2024-12-05 12:14:08.501323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.441 [2024-12-05 12:14:08.501355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.441 qpair failed and we were unable to recover it. 00:30:34.441 [2024-12-05 12:14:08.501619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.441 [2024-12-05 12:14:08.501651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.441 qpair failed and we were unable to recover it. 
00:30:34.441 [2024-12-05 12:14:08.501800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.441 [2024-12-05 12:14:08.501831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.441 qpair failed and we were unable to recover it.
00:30:34.441 [2024-12-05 12:14:08.502076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.441 [2024-12-05 12:14:08.502107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.441 qpair failed and we were unable to recover it.
00:30:34.441 [2024-12-05 12:14:08.502219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.441 [2024-12-05 12:14:08.502250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.441 qpair failed and we were unable to recover it.
00:30:34.441 [2024-12-05 12:14:08.502467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.441 [2024-12-05 12:14:08.502500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.441 qpair failed and we were unable to recover it.
00:30:34.441 [2024-12-05 12:14:08.502767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.441 [2024-12-05 12:14:08.502799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.441 qpair failed and we were unable to recover it.
00:30:34.441 [2024-12-05 12:14:08.503078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.441 [2024-12-05 12:14:08.503109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.441 qpair failed and we were unable to recover it.
00:30:34.441 [2024-12-05 12:14:08.503284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.441 [2024-12-05 12:14:08.503316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.441 qpair failed and we were unable to recover it.
00:30:34.441 [2024-12-05 12:14:08.503597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.441 [2024-12-05 12:14:08.503629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.441 qpair failed and we were unable to recover it.
00:30:34.441 [2024-12-05 12:14:08.503825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.441 [2024-12-05 12:14:08.503858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.441 qpair failed and we were unable to recover it.
00:30:34.441 [2024-12-05 12:14:08.503995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.441 [2024-12-05 12:14:08.504025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.441 qpair failed and we were unable to recover it.
00:30:34.441 [2024-12-05 12:14:08.504294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.441 [2024-12-05 12:14:08.504326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.441 qpair failed and we were unable to recover it.
00:30:34.441 [2024-12-05 12:14:08.504584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.441 [2024-12-05 12:14:08.504617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.441 qpair failed and we were unable to recover it.
00:30:34.441 [2024-12-05 12:14:08.504725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.441 [2024-12-05 12:14:08.504757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.441 qpair failed and we were unable to recover it.
00:30:34.441 [2024-12-05 12:14:08.504894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.441 [2024-12-05 12:14:08.504924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.441 qpair failed and we were unable to recover it.
00:30:34.441 [2024-12-05 12:14:08.505149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.441 [2024-12-05 12:14:08.505179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.441 qpair failed and we were unable to recover it.
00:30:34.441 [2024-12-05 12:14:08.505309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.441 [2024-12-05 12:14:08.505338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.441 qpair failed and we were unable to recover it.
00:30:34.441 [2024-12-05 12:14:08.505596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.441 [2024-12-05 12:14:08.505634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.441 qpair failed and we were unable to recover it.
00:30:34.441 [2024-12-05 12:14:08.505770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.441 [2024-12-05 12:14:08.505801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.441 qpair failed and we were unable to recover it.
00:30:34.441 [2024-12-05 12:14:08.506060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.441 [2024-12-05 12:14:08.506092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.442 qpair failed and we were unable to recover it.
00:30:34.442 [2024-12-05 12:14:08.506359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.442 [2024-12-05 12:14:08.506402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.442 qpair failed and we were unable to recover it.
00:30:34.442 [2024-12-05 12:14:08.506595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.442 [2024-12-05 12:14:08.506627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.442 qpair failed and we were unable to recover it.
00:30:34.442 [2024-12-05 12:14:08.506875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.442 [2024-12-05 12:14:08.506907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.442 qpair failed and we were unable to recover it.
00:30:34.442 [2024-12-05 12:14:08.507118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.442 [2024-12-05 12:14:08.507167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.442 qpair failed and we were unable to recover it.
00:30:34.442 [2024-12-05 12:14:08.507434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.442 [2024-12-05 12:14:08.507468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.442 qpair failed and we were unable to recover it.
00:30:34.442 [2024-12-05 12:14:08.507577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.442 [2024-12-05 12:14:08.507608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.442 qpair failed and we were unable to recover it.
00:30:34.442 [2024-12-05 12:14:08.507882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.442 [2024-12-05 12:14:08.507914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.442 qpair failed and we were unable to recover it.
00:30:34.442 [2024-12-05 12:14:08.508108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.442 [2024-12-05 12:14:08.508140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.442 qpair failed and we were unable to recover it.
00:30:34.442 [2024-12-05 12:14:08.508408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.442 [2024-12-05 12:14:08.508440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.442 qpair failed and we were unable to recover it.
00:30:34.442 [2024-12-05 12:14:08.508582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.442 [2024-12-05 12:14:08.508613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.442 qpair failed and we were unable to recover it.
00:30:34.442 [2024-12-05 12:14:08.508854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.442 [2024-12-05 12:14:08.508885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.442 qpair failed and we were unable to recover it.
00:30:34.442 [2024-12-05 12:14:08.509206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.442 [2024-12-05 12:14:08.509238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.442 qpair failed and we were unable to recover it.
00:30:34.442 [2024-12-05 12:14:08.509492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.442 [2024-12-05 12:14:08.509524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.442 qpair failed and we were unable to recover it.
00:30:34.442 [2024-12-05 12:14:08.509824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.442 [2024-12-05 12:14:08.509856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.442 qpair failed and we were unable to recover it.
00:30:34.442 [2024-12-05 12:14:08.510126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.442 [2024-12-05 12:14:08.510157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.442 qpair failed and we were unable to recover it.
00:30:34.442 [2024-12-05 12:14:08.510435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.442 [2024-12-05 12:14:08.510467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.442 qpair failed and we were unable to recover it.
00:30:34.442 [2024-12-05 12:14:08.510646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.442 [2024-12-05 12:14:08.510676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.442 qpair failed and we were unable to recover it.
00:30:34.442 [2024-12-05 12:14:08.510951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.442 [2024-12-05 12:14:08.510982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.442 qpair failed and we were unable to recover it.
00:30:34.442 [2024-12-05 12:14:08.511249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.442 [2024-12-05 12:14:08.511281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.442 qpair failed and we were unable to recover it.
00:30:34.442 [2024-12-05 12:14:08.511499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.442 [2024-12-05 12:14:08.511530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.442 qpair failed and we were unable to recover it.
00:30:34.442 [2024-12-05 12:14:08.511797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.442 [2024-12-05 12:14:08.511829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.442 qpair failed and we were unable to recover it.
00:30:34.442 [2024-12-05 12:14:08.512093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.442 [2024-12-05 12:14:08.512125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.442 qpair failed and we were unable to recover it.
00:30:34.442 [2024-12-05 12:14:08.512321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.442 [2024-12-05 12:14:08.512351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.442 qpair failed and we were unable to recover it.
00:30:34.442 [2024-12-05 12:14:08.512557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.442 [2024-12-05 12:14:08.512588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.442 qpair failed and we were unable to recover it.
00:30:34.442 [2024-12-05 12:14:08.512967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.442 [2024-12-05 12:14:08.513055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.442 qpair failed and we were unable to recover it.
00:30:34.442 [2024-12-05 12:14:08.513334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.442 [2024-12-05 12:14:08.513384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.442 qpair failed and we were unable to recover it.
00:30:34.442 [2024-12-05 12:14:08.513621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.442 [2024-12-05 12:14:08.513655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.442 qpair failed and we were unable to recover it.
00:30:34.442 [2024-12-05 12:14:08.513954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.442 [2024-12-05 12:14:08.513986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.442 qpair failed and we were unable to recover it.
00:30:34.442 [2024-12-05 12:14:08.514182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.442 [2024-12-05 12:14:08.514213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.442 qpair failed and we were unable to recover it.
00:30:34.442 [2024-12-05 12:14:08.514504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.442 [2024-12-05 12:14:08.514538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.442 qpair failed and we were unable to recover it.
00:30:34.442 [2024-12-05 12:14:08.514811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.442 [2024-12-05 12:14:08.514843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.442 qpair failed and we were unable to recover it.
00:30:34.442 [2024-12-05 12:14:08.515103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.442 [2024-12-05 12:14:08.515135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.442 qpair failed and we were unable to recover it.
00:30:34.442 [2024-12-05 12:14:08.515335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.442 [2024-12-05 12:14:08.515377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.442 qpair failed and we were unable to recover it.
00:30:34.442 [2024-12-05 12:14:08.515561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.442 [2024-12-05 12:14:08.515593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.442 qpair failed and we were unable to recover it.
00:30:34.442 [2024-12-05 12:14:08.515837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.442 [2024-12-05 12:14:08.515868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.442 qpair failed and we were unable to recover it.
00:30:34.442 [2024-12-05 12:14:08.516048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.442 [2024-12-05 12:14:08.516080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.442 qpair failed and we were unable to recover it.
00:30:34.442 [2024-12-05 12:14:08.516326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.442 [2024-12-05 12:14:08.516358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.442 qpair failed and we were unable to recover it.
00:30:34.442 [2024-12-05 12:14:08.516612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.442 [2024-12-05 12:14:08.516660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.442 qpair failed and we were unable to recover it.
00:30:34.443 [2024-12-05 12:14:08.516854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.443 [2024-12-05 12:14:08.516885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.443 qpair failed and we were unable to recover it.
00:30:34.443 [2024-12-05 12:14:08.517157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.443 [2024-12-05 12:14:08.517189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.443 qpair failed and we were unable to recover it.
00:30:34.443 [2024-12-05 12:14:08.517452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.443 [2024-12-05 12:14:08.517485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.443 qpair failed and we were unable to recover it.
00:30:34.443 [2024-12-05 12:14:08.517756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.443 [2024-12-05 12:14:08.517788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.443 qpair failed and we were unable to recover it.
00:30:34.443 [2024-12-05 12:14:08.518096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.443 [2024-12-05 12:14:08.518128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.443 qpair failed and we were unable to recover it.
00:30:34.443 [2024-12-05 12:14:08.518309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.443 [2024-12-05 12:14:08.518340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.443 qpair failed and we were unable to recover it.
00:30:34.443 [2024-12-05 12:14:08.518551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.443 [2024-12-05 12:14:08.518584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.443 qpair failed and we were unable to recover it.
00:30:34.443 [2024-12-05 12:14:08.518804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.443 [2024-12-05 12:14:08.518837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.443 qpair failed and we were unable to recover it.
00:30:34.443 [2024-12-05 12:14:08.519039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.443 [2024-12-05 12:14:08.519070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.443 qpair failed and we were unable to recover it.
00:30:34.443 [2024-12-05 12:14:08.519288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.443 [2024-12-05 12:14:08.519319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.443 qpair failed and we were unable to recover it.
00:30:34.443 [2024-12-05 12:14:08.519578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.443 [2024-12-05 12:14:08.519611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.443 qpair failed and we were unable to recover it.
00:30:34.443 [2024-12-05 12:14:08.519810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.443 [2024-12-05 12:14:08.519842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.443 qpair failed and we were unable to recover it.
00:30:34.443 [2024-12-05 12:14:08.520045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.443 [2024-12-05 12:14:08.520077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.443 qpair failed and we were unable to recover it.
00:30:34.443 [2024-12-05 12:14:08.520355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.443 [2024-12-05 12:14:08.520398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.443 qpair failed and we were unable to recover it.
00:30:34.443 [2024-12-05 12:14:08.520647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.443 [2024-12-05 12:14:08.520680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.443 qpair failed and we were unable to recover it.
00:30:34.443 [2024-12-05 12:14:08.520901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.443 [2024-12-05 12:14:08.520932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.443 qpair failed and we were unable to recover it.
00:30:34.443 [2024-12-05 12:14:08.521193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.443 [2024-12-05 12:14:08.521225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.443 qpair failed and we were unable to recover it.
00:30:34.443 [2024-12-05 12:14:08.521415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.443 [2024-12-05 12:14:08.521448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.443 qpair failed and we were unable to recover it.
00:30:34.443 [2024-12-05 12:14:08.521631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.443 [2024-12-05 12:14:08.521663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.443 qpair failed and we were unable to recover it.
00:30:34.443 [2024-12-05 12:14:08.521876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.443 [2024-12-05 12:14:08.521908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.443 qpair failed and we were unable to recover it.
00:30:34.443 [2024-12-05 12:14:08.522110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.443 [2024-12-05 12:14:08.522142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.443 qpair failed and we were unable to recover it.
00:30:34.443 [2024-12-05 12:14:08.522322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.443 [2024-12-05 12:14:08.522354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.443 qpair failed and we were unable to recover it.
00:30:34.443 [2024-12-05 12:14:08.522563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.443 [2024-12-05 12:14:08.522595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.443 qpair failed and we were unable to recover it.
00:30:34.443 [2024-12-05 12:14:08.522812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.443 [2024-12-05 12:14:08.522844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.443 qpair failed and we were unable to recover it.
00:30:34.443 [2024-12-05 12:14:08.522978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.443 [2024-12-05 12:14:08.523012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.443 qpair failed and we were unable to recover it.
00:30:34.443 [2024-12-05 12:14:08.523282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.443 [2024-12-05 12:14:08.523315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.443 qpair failed and we were unable to recover it.
00:30:34.443 [2024-12-05 12:14:08.523577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.443 [2024-12-05 12:14:08.523613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.443 qpair failed and we were unable to recover it.
00:30:34.443 [2024-12-05 12:14:08.523793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.443 [2024-12-05 12:14:08.523825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.443 qpair failed and we were unable to recover it.
00:30:34.443 [2024-12-05 12:14:08.524145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.443 [2024-12-05 12:14:08.524176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.443 qpair failed and we were unable to recover it.
00:30:34.443 [2024-12-05 12:14:08.524432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.443 [2024-12-05 12:14:08.524465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.443 qpair failed and we were unable to recover it.
00:30:34.443 [2024-12-05 12:14:08.524760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.443 [2024-12-05 12:14:08.524792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.443 qpair failed and we were unable to recover it.
00:30:34.443 [2024-12-05 12:14:08.525100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.443 [2024-12-05 12:14:08.525131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.443 qpair failed and we were unable to recover it.
00:30:34.443 [2024-12-05 12:14:08.525341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.443 [2024-12-05 12:14:08.525383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.443 qpair failed and we were unable to recover it.
00:30:34.443 [2024-12-05 12:14:08.525524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.443 [2024-12-05 12:14:08.525555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.443 qpair failed and we were unable to recover it.
00:30:34.444 [2024-12-05 12:14:08.525749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.444 [2024-12-05 12:14:08.525782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.444 qpair failed and we were unable to recover it.
00:30:34.444 [2024-12-05 12:14:08.525982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.444 [2024-12-05 12:14:08.526013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.444 qpair failed and we were unable to recover it.
00:30:34.444 [2024-12-05 12:14:08.526126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.444 [2024-12-05 12:14:08.526158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.444 qpair failed and we were unable to recover it.
00:30:34.444 [2024-12-05 12:14:08.526406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.444 [2024-12-05 12:14:08.526439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.444 qpair failed and we were unable to recover it.
00:30:34.444 [2024-12-05 12:14:08.526562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.444 [2024-12-05 12:14:08.526595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.444 qpair failed and we were unable to recover it.
00:30:34.444 [2024-12-05 12:14:08.526753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.444 [2024-12-05 12:14:08.526791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.444 qpair failed and we were unable to recover it.
00:30:34.444 [2024-12-05 12:14:08.526891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.444 [2024-12-05 12:14:08.526924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.444 qpair failed and we were unable to recover it.
00:30:34.444 [2024-12-05 12:14:08.527074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.444 [2024-12-05 12:14:08.527107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.444 qpair failed and we were unable to recover it.
00:30:34.444 [2024-12-05 12:14:08.527308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.444 [2024-12-05 12:14:08.527340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.444 qpair failed and we were unable to recover it.
00:30:34.444 [2024-12-05 12:14:08.527667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.444 [2024-12-05 12:14:08.527742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.444 qpair failed and we were unable to recover it.
00:30:34.444 [2024-12-05 12:14:08.527926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.444 [2024-12-05 12:14:08.527963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.444 qpair failed and we were unable to recover it.
00:30:34.444 [2024-12-05 12:14:08.528216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.444 [2024-12-05 12:14:08.528249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.444 qpair failed and we were unable to recover it.
00:30:34.444 [2024-12-05 12:14:08.528396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.444 [2024-12-05 12:14:08.528431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.444 qpair failed and we were unable to recover it.
00:30:34.444 [2024-12-05 12:14:08.528555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.444 [2024-12-05 12:14:08.528586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.444 qpair failed and we were unable to recover it.
00:30:34.444 [2024-12-05 12:14:08.528779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.444 [2024-12-05 12:14:08.528811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.444 qpair failed and we were unable to recover it.
00:30:34.444 [2024-12-05 12:14:08.528944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.444 [2024-12-05 12:14:08.528976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.444 qpair failed and we were unable to recover it.
00:30:34.444 [2024-12-05 12:14:08.529192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.444 [2024-12-05 12:14:08.529226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.444 qpair failed and we were unable to recover it.
00:30:34.444 [2024-12-05 12:14:08.529425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.444 [2024-12-05 12:14:08.529458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.444 qpair failed and we were unable to recover it.
00:30:34.444 [2024-12-05 12:14:08.529718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.444 [2024-12-05 12:14:08.529750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.444 qpair failed and we were unable to recover it.
00:30:34.444 [2024-12-05 12:14:08.529947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.444 [2024-12-05 12:14:08.529979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.444 qpair failed and we were unable to recover it.
00:30:34.444 [2024-12-05 12:14:08.530178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.444 [2024-12-05 12:14:08.530210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.444 qpair failed and we were unable to recover it.
00:30:34.444 [2024-12-05 12:14:08.530463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.444 [2024-12-05 12:14:08.530496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.444 qpair failed and we were unable to recover it.
00:30:34.444 [2024-12-05 12:14:08.530695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.444 [2024-12-05 12:14:08.530728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.444 qpair failed and we were unable to recover it.
00:30:34.444 [2024-12-05 12:14:08.530912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.444 [2024-12-05 12:14:08.530944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:34.444 qpair failed and we were unable to recover it.
00:30:34.444 [2024-12-05 12:14:08.531198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.444 [2024-12-05 12:14:08.531229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.444 qpair failed and we were unable to recover it. 00:30:34.444 [2024-12-05 12:14:08.531365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.444 [2024-12-05 12:14:08.531409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.444 qpair failed and we were unable to recover it. 00:30:34.444 [2024-12-05 12:14:08.531533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.444 [2024-12-05 12:14:08.531565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.444 qpair failed and we were unable to recover it. 00:30:34.444 [2024-12-05 12:14:08.531838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.444 [2024-12-05 12:14:08.531870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.444 qpair failed and we were unable to recover it. 00:30:34.444 [2024-12-05 12:14:08.532058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.444 [2024-12-05 12:14:08.532089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.444 qpair failed and we were unable to recover it. 
00:30:34.444 [2024-12-05 12:14:08.532202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.444 [2024-12-05 12:14:08.532235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.444 qpair failed and we were unable to recover it. 00:30:34.444 [2024-12-05 12:14:08.532384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.444 [2024-12-05 12:14:08.532416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.444 qpair failed and we were unable to recover it. 00:30:34.444 [2024-12-05 12:14:08.532598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.444 [2024-12-05 12:14:08.532631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.444 qpair failed and we were unable to recover it. 00:30:34.444 [2024-12-05 12:14:08.532805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.444 [2024-12-05 12:14:08.532882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.444 qpair failed and we were unable to recover it. 00:30:34.444 [2024-12-05 12:14:08.533038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.444 [2024-12-05 12:14:08.533074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.444 qpair failed and we were unable to recover it. 
00:30:34.444 [2024-12-05 12:14:08.533362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.444 [2024-12-05 12:14:08.533412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.444 qpair failed and we were unable to recover it.
00:30:34.444 [2024-12-05 12:14:08.533614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.444 [2024-12-05 12:14:08.533646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.444 qpair failed and we were unable to recover it.
00:30:34.444 [2024-12-05 12:14:08.533844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.444 [2024-12-05 12:14:08.533875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.444 qpair failed and we were unable to recover it.
00:30:34.444 [2024-12-05 12:14:08.534075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.444 [2024-12-05 12:14:08.534107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.444 qpair failed and we were unable to recover it.
00:30:34.445 [2024-12-05 12:14:08.534218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.445 [2024-12-05 12:14:08.534249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.445 qpair failed and we were unable to recover it.
00:30:34.445 [2024-12-05 12:14:08.534458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.445 [2024-12-05 12:14:08.534492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.445 qpair failed and we were unable to recover it.
00:30:34.445 [2024-12-05 12:14:08.534634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.445 [2024-12-05 12:14:08.534667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.445 qpair failed and we were unable to recover it.
00:30:34.445 [2024-12-05 12:14:08.534856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.445 [2024-12-05 12:14:08.534887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.445 qpair failed and we were unable to recover it.
00:30:34.445 [2024-12-05 12:14:08.535075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.445 [2024-12-05 12:14:08.535107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.445 qpair failed and we were unable to recover it.
00:30:34.445 [2024-12-05 12:14:08.535233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.445 [2024-12-05 12:14:08.535264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.445 qpair failed and we were unable to recover it.
00:30:34.445 [2024-12-05 12:14:08.535383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.445 [2024-12-05 12:14:08.535417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.445 qpair failed and we were unable to recover it.
00:30:34.445 [2024-12-05 12:14:08.535689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.445 [2024-12-05 12:14:08.535721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.445 qpair failed and we were unable to recover it.
00:30:34.445 [2024-12-05 12:14:08.535929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.445 [2024-12-05 12:14:08.535963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.445 qpair failed and we were unable to recover it.
00:30:34.445 [2024-12-05 12:14:08.536172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.445 [2024-12-05 12:14:08.536204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.445 qpair failed and we were unable to recover it.
00:30:34.445 [2024-12-05 12:14:08.536344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.445 [2024-12-05 12:14:08.536386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.445 qpair failed and we were unable to recover it.
00:30:34.445 [2024-12-05 12:14:08.536577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.445 [2024-12-05 12:14:08.536609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.445 qpair failed and we were unable to recover it.
00:30:34.445 [2024-12-05 12:14:08.536749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.445 [2024-12-05 12:14:08.536780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.445 qpair failed and we were unable to recover it.
00:30:34.445 [2024-12-05 12:14:08.536979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.445 [2024-12-05 12:14:08.537011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.445 qpair failed and we were unable to recover it.
00:30:34.445 [2024-12-05 12:14:08.537204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.445 [2024-12-05 12:14:08.537236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.445 qpair failed and we were unable to recover it.
00:30:34.445 [2024-12-05 12:14:08.537449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.445 [2024-12-05 12:14:08.537482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.445 qpair failed and we were unable to recover it.
00:30:34.445 [2024-12-05 12:14:08.537665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.445 [2024-12-05 12:14:08.537698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.445 qpair failed and we were unable to recover it.
00:30:34.445 [2024-12-05 12:14:08.537836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.445 [2024-12-05 12:14:08.537866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.445 qpair failed and we were unable to recover it.
00:30:34.445 [2024-12-05 12:14:08.538063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.445 [2024-12-05 12:14:08.538095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.445 qpair failed and we were unable to recover it.
00:30:34.445 [2024-12-05 12:14:08.538387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.445 [2024-12-05 12:14:08.538422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.445 qpair failed and we were unable to recover it.
00:30:34.445 [2024-12-05 12:14:08.538561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.445 [2024-12-05 12:14:08.538593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.445 qpair failed and we were unable to recover it.
00:30:34.445 [2024-12-05 12:14:08.538787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.445 [2024-12-05 12:14:08.538829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.445 qpair failed and we were unable to recover it.
00:30:34.445 [2024-12-05 12:14:08.539082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.445 [2024-12-05 12:14:08.539114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.445 qpair failed and we were unable to recover it.
00:30:34.445 [2024-12-05 12:14:08.539242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.445 [2024-12-05 12:14:08.539274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.445 qpair failed and we were unable to recover it.
00:30:34.445 [2024-12-05 12:14:08.539495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.445 [2024-12-05 12:14:08.539529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.445 qpair failed and we were unable to recover it.
00:30:34.445 [2024-12-05 12:14:08.539718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.445 [2024-12-05 12:14:08.539750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.445 qpair failed and we were unable to recover it.
00:30:34.445 [2024-12-05 12:14:08.539881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.445 [2024-12-05 12:14:08.539912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.445 qpair failed and we were unable to recover it.
00:30:34.445 [2024-12-05 12:14:08.540132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.445 [2024-12-05 12:14:08.540166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.445 qpair failed and we were unable to recover it.
00:30:34.445 [2024-12-05 12:14:08.540352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.445 [2024-12-05 12:14:08.540395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.445 qpair failed and we were unable to recover it.
00:30:34.445 [2024-12-05 12:14:08.540668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.445 [2024-12-05 12:14:08.540702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.445 qpair failed and we were unable to recover it.
00:30:34.445 [2024-12-05 12:14:08.540813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.445 [2024-12-05 12:14:08.540845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.445 qpair failed and we were unable to recover it.
00:30:34.445 [2024-12-05 12:14:08.541097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.445 [2024-12-05 12:14:08.541128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.445 qpair failed and we were unable to recover it.
00:30:34.445 [2024-12-05 12:14:08.541328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.445 [2024-12-05 12:14:08.541360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.445 qpair failed and we were unable to recover it.
00:30:34.445 [2024-12-05 12:14:08.541549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.445 [2024-12-05 12:14:08.541582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.445 qpair failed and we were unable to recover it.
00:30:34.445 [2024-12-05 12:14:08.541856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.445 [2024-12-05 12:14:08.541888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.445 qpair failed and we were unable to recover it.
00:30:34.445 [2024-12-05 12:14:08.542109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.445 [2024-12-05 12:14:08.542141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.445 qpair failed and we were unable to recover it.
00:30:34.445 [2024-12-05 12:14:08.542391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.445 [2024-12-05 12:14:08.542425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.445 qpair failed and we were unable to recover it.
00:30:34.445 [2024-12-05 12:14:08.542694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.445 [2024-12-05 12:14:08.542726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.445 qpair failed and we were unable to recover it.
00:30:34.445 [2024-12-05 12:14:08.542920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.445 [2024-12-05 12:14:08.542952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.446 qpair failed and we were unable to recover it.
00:30:34.446 [2024-12-05 12:14:08.543156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.446 [2024-12-05 12:14:08.543187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.446 qpair failed and we were unable to recover it.
00:30:34.446 [2024-12-05 12:14:08.543320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.446 [2024-12-05 12:14:08.543351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.446 qpair failed and we were unable to recover it.
00:30:34.446 [2024-12-05 12:14:08.543579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.446 [2024-12-05 12:14:08.543611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.446 qpair failed and we were unable to recover it.
00:30:34.446 [2024-12-05 12:14:08.543753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.446 [2024-12-05 12:14:08.543783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.446 qpair failed and we were unable to recover it.
00:30:34.446 [2024-12-05 12:14:08.543889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.446 [2024-12-05 12:14:08.543922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.446 qpair failed and we were unable to recover it.
00:30:34.446 [2024-12-05 12:14:08.544171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.446 [2024-12-05 12:14:08.544203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.446 qpair failed and we were unable to recover it.
00:30:34.446 [2024-12-05 12:14:08.544334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.446 [2024-12-05 12:14:08.544364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.446 qpair failed and we were unable to recover it.
00:30:34.446 [2024-12-05 12:14:08.544595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.446 [2024-12-05 12:14:08.544625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.446 qpair failed and we were unable to recover it.
00:30:34.446 [2024-12-05 12:14:08.544818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.446 [2024-12-05 12:14:08.544851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.446 qpair failed and we were unable to recover it.
00:30:34.446 [2024-12-05 12:14:08.545001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.446 [2024-12-05 12:14:08.545033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.446 qpair failed and we were unable to recover it.
00:30:34.446 [2024-12-05 12:14:08.545228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.446 [2024-12-05 12:14:08.545260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.446 qpair failed and we were unable to recover it.
00:30:34.446 [2024-12-05 12:14:08.545391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.446 [2024-12-05 12:14:08.545424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.446 qpair failed and we were unable to recover it.
00:30:34.446 [2024-12-05 12:14:08.545624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.446 [2024-12-05 12:14:08.545655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.446 qpair failed and we were unable to recover it.
00:30:34.446 [2024-12-05 12:14:08.545844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.446 [2024-12-05 12:14:08.545875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.446 qpair failed and we were unable to recover it.
00:30:34.446 [2024-12-05 12:14:08.546099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.446 [2024-12-05 12:14:08.546131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.446 qpair failed and we were unable to recover it.
00:30:34.446 [2024-12-05 12:14:08.546431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.446 [2024-12-05 12:14:08.546464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.446 qpair failed and we were unable to recover it.
00:30:34.446 [2024-12-05 12:14:08.546667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.446 [2024-12-05 12:14:08.546699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.446 qpair failed and we were unable to recover it.
00:30:34.446 [2024-12-05 12:14:08.546896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.446 [2024-12-05 12:14:08.546930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.446 qpair failed and we were unable to recover it.
00:30:34.446 [2024-12-05 12:14:08.547072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.446 [2024-12-05 12:14:08.547105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.446 qpair failed and we were unable to recover it.
00:30:34.446 [2024-12-05 12:14:08.547228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.446 [2024-12-05 12:14:08.547259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.446 qpair failed and we were unable to recover it.
00:30:34.446 [2024-12-05 12:14:08.547450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.446 [2024-12-05 12:14:08.547482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.446 qpair failed and we were unable to recover it.
00:30:34.446 [2024-12-05 12:14:08.547619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.446 [2024-12-05 12:14:08.547652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.446 qpair failed and we were unable to recover it.
00:30:34.446 [2024-12-05 12:14:08.547834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.446 [2024-12-05 12:14:08.547866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.446 qpair failed and we were unable to recover it.
00:30:34.446 [2024-12-05 12:14:08.548055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.446 [2024-12-05 12:14:08.548094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.446 qpair failed and we were unable to recover it.
00:30:34.446 [2024-12-05 12:14:08.548278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.446 [2024-12-05 12:14:08.548309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.446 qpair failed and we were unable to recover it.
00:30:34.446 [2024-12-05 12:14:08.548533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.446 [2024-12-05 12:14:08.548566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.446 qpair failed and we were unable to recover it.
00:30:34.446 [2024-12-05 12:14:08.548759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.446 [2024-12-05 12:14:08.548790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.446 qpair failed and we were unable to recover it.
00:30:34.446 [2024-12-05 12:14:08.548969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.446 [2024-12-05 12:14:08.549001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.446 qpair failed and we were unable to recover it.
00:30:34.446 [2024-12-05 12:14:08.549133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.446 [2024-12-05 12:14:08.549165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.446 qpair failed and we were unable to recover it.
00:30:34.446 [2024-12-05 12:14:08.549402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.446 [2024-12-05 12:14:08.549436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.446 qpair failed and we were unable to recover it.
00:30:34.446 [2024-12-05 12:14:08.549557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.446 [2024-12-05 12:14:08.549587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.446 qpair failed and we were unable to recover it.
00:30:34.446 [2024-12-05 12:14:08.549830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.446 [2024-12-05 12:14:08.549861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.446 qpair failed and we were unable to recover it.
00:30:34.446 [2024-12-05 12:14:08.550060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.446 [2024-12-05 12:14:08.550090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.446 qpair failed and we were unable to recover it.
00:30:34.446 [2024-12-05 12:14:08.550274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.446 [2024-12-05 12:14:08.550305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.446 qpair failed and we were unable to recover it.
00:30:34.446 [2024-12-05 12:14:08.550509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.446 [2024-12-05 12:14:08.550542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.446 qpair failed and we were unable to recover it.
00:30:34.446 [2024-12-05 12:14:08.550802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.446 [2024-12-05 12:14:08.550835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.446 qpair failed and we were unable to recover it.
00:30:34.446 [2024-12-05 12:14:08.551031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.446 [2024-12-05 12:14:08.551063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.446 qpair failed and we were unable to recover it.
00:30:34.446 [2024-12-05 12:14:08.551250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.446 [2024-12-05 12:14:08.551282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.446 qpair failed and we were unable to recover it.
00:30:34.446 [2024-12-05 12:14:08.551532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.447 [2024-12-05 12:14:08.551567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.447 qpair failed and we were unable to recover it.
00:30:34.447 [2024-12-05 12:14:08.551701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.447 [2024-12-05 12:14:08.551733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.447 qpair failed and we were unable to recover it.
00:30:34.447 [2024-12-05 12:14:08.551842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.447 [2024-12-05 12:14:08.551874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.447 qpair failed and we were unable to recover it.
00:30:34.447 [2024-12-05 12:14:08.552086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.447 [2024-12-05 12:14:08.552118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.447 qpair failed and we were unable to recover it.
00:30:34.447 [2024-12-05 12:14:08.552234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.447 [2024-12-05 12:14:08.552267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.447 qpair failed and we were unable to recover it. 00:30:34.447 [2024-12-05 12:14:08.552408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.447 [2024-12-05 12:14:08.552441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.447 qpair failed and we were unable to recover it. 00:30:34.447 [2024-12-05 12:14:08.552691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.447 [2024-12-05 12:14:08.552725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.447 qpair failed and we were unable to recover it. 00:30:34.447 [2024-12-05 12:14:08.552973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.447 [2024-12-05 12:14:08.553003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.447 qpair failed and we were unable to recover it. 00:30:34.447 [2024-12-05 12:14:08.553118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.447 [2024-12-05 12:14:08.553149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.447 qpair failed and we were unable to recover it. 
00:30:34.447 [2024-12-05 12:14:08.553396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.447 [2024-12-05 12:14:08.553430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.447 qpair failed and we were unable to recover it. 00:30:34.447 [2024-12-05 12:14:08.553571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.447 [2024-12-05 12:14:08.553602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.447 qpair failed and we were unable to recover it. 00:30:34.447 [2024-12-05 12:14:08.553722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.447 [2024-12-05 12:14:08.553754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.447 qpair failed and we were unable to recover it. 00:30:34.447 [2024-12-05 12:14:08.553958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.447 [2024-12-05 12:14:08.553996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.447 qpair failed and we were unable to recover it. 00:30:34.447 [2024-12-05 12:14:08.554131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.447 [2024-12-05 12:14:08.554163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.447 qpair failed and we were unable to recover it. 
00:30:34.447 [2024-12-05 12:14:08.554411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.447 [2024-12-05 12:14:08.554444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.447 qpair failed and we were unable to recover it. 00:30:34.447 [2024-12-05 12:14:08.554640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.447 [2024-12-05 12:14:08.554673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.447 qpair failed and we were unable to recover it. 00:30:34.447 [2024-12-05 12:14:08.554897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.447 [2024-12-05 12:14:08.554928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.447 qpair failed and we were unable to recover it. 00:30:34.447 [2024-12-05 12:14:08.555064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.447 [2024-12-05 12:14:08.555096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.447 qpair failed and we were unable to recover it. 00:30:34.447 [2024-12-05 12:14:08.555292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.447 [2024-12-05 12:14:08.555324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.447 qpair failed and we were unable to recover it. 
00:30:34.447 [2024-12-05 12:14:08.555545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.447 [2024-12-05 12:14:08.555579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.447 qpair failed and we were unable to recover it. 00:30:34.447 [2024-12-05 12:14:08.555803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.447 [2024-12-05 12:14:08.555836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.447 qpair failed and we were unable to recover it. 00:30:34.447 [2024-12-05 12:14:08.556021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.447 [2024-12-05 12:14:08.556052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.447 qpair failed and we were unable to recover it. 00:30:34.447 [2024-12-05 12:14:08.556250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.447 [2024-12-05 12:14:08.556284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.447 qpair failed and we were unable to recover it. 00:30:34.447 [2024-12-05 12:14:08.556407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.447 [2024-12-05 12:14:08.556440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.447 qpair failed and we were unable to recover it. 
00:30:34.447 [2024-12-05 12:14:08.556570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.447 [2024-12-05 12:14:08.556601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.447 qpair failed and we were unable to recover it. 00:30:34.447 [2024-12-05 12:14:08.556780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.447 [2024-12-05 12:14:08.556812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.447 qpair failed and we were unable to recover it. 00:30:34.447 [2024-12-05 12:14:08.556980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.447 [2024-12-05 12:14:08.557071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.447 qpair failed and we were unable to recover it. 00:30:34.447 [2024-12-05 12:14:08.557356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.447 [2024-12-05 12:14:08.557411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.447 qpair failed and we were unable to recover it. 00:30:34.447 [2024-12-05 12:14:08.557602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.447 [2024-12-05 12:14:08.557635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.447 qpair failed and we were unable to recover it. 
00:30:34.447 [2024-12-05 12:14:08.557811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.447 [2024-12-05 12:14:08.557843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.447 qpair failed and we were unable to recover it. 00:30:34.447 [2024-12-05 12:14:08.558057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.447 [2024-12-05 12:14:08.558091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.447 qpair failed and we were unable to recover it. 00:30:34.447 [2024-12-05 12:14:08.558237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.447 [2024-12-05 12:14:08.558269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.447 qpair failed and we were unable to recover it. 00:30:34.447 [2024-12-05 12:14:08.558476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.447 [2024-12-05 12:14:08.558509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.447 qpair failed and we were unable to recover it. 00:30:34.447 [2024-12-05 12:14:08.558705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.447 [2024-12-05 12:14:08.558737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.447 qpair failed and we were unable to recover it. 
00:30:34.447 [2024-12-05 12:14:08.558927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.447 [2024-12-05 12:14:08.558960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.447 qpair failed and we were unable to recover it. 00:30:34.447 [2024-12-05 12:14:08.559092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.447 [2024-12-05 12:14:08.559123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.447 qpair failed and we were unable to recover it. 00:30:34.447 [2024-12-05 12:14:08.559322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.447 [2024-12-05 12:14:08.559354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.447 qpair failed and we were unable to recover it. 00:30:34.447 [2024-12-05 12:14:08.559495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.447 [2024-12-05 12:14:08.559527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.447 qpair failed and we were unable to recover it. 00:30:34.447 [2024-12-05 12:14:08.559670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.447 [2024-12-05 12:14:08.559701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.448 qpair failed and we were unable to recover it. 
00:30:34.448 [2024-12-05 12:14:08.559818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.448 [2024-12-05 12:14:08.559858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.448 qpair failed and we were unable to recover it. 00:30:34.448 [2024-12-05 12:14:08.560090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.448 [2024-12-05 12:14:08.560122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.448 qpair failed and we were unable to recover it. 00:30:34.448 [2024-12-05 12:14:08.560248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.448 [2024-12-05 12:14:08.560279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.448 qpair failed and we were unable to recover it. 00:30:34.448 [2024-12-05 12:14:08.560553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.448 [2024-12-05 12:14:08.560586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.448 qpair failed and we were unable to recover it. 00:30:34.448 [2024-12-05 12:14:08.560771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.448 [2024-12-05 12:14:08.560802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.448 qpair failed and we were unable to recover it. 
00:30:34.448 [2024-12-05 12:14:08.560926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.448 [2024-12-05 12:14:08.560958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.448 qpair failed and we were unable to recover it. 00:30:34.448 [2024-12-05 12:14:08.561173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.448 [2024-12-05 12:14:08.561205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.448 qpair failed and we were unable to recover it. 00:30:34.448 [2024-12-05 12:14:08.561413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.448 [2024-12-05 12:14:08.561446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.448 qpair failed and we were unable to recover it. 00:30:34.448 [2024-12-05 12:14:08.561570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.448 [2024-12-05 12:14:08.561606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.448 qpair failed and we were unable to recover it. 00:30:34.448 [2024-12-05 12:14:08.561737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.448 [2024-12-05 12:14:08.561769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.448 qpair failed and we were unable to recover it. 
00:30:34.448 [2024-12-05 12:14:08.561895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.448 [2024-12-05 12:14:08.561926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.448 qpair failed and we were unable to recover it. 00:30:34.448 [2024-12-05 12:14:08.562175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.448 [2024-12-05 12:14:08.562206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.448 qpair failed and we were unable to recover it. 00:30:34.448 [2024-12-05 12:14:08.562402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.448 [2024-12-05 12:14:08.562436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.448 qpair failed and we were unable to recover it. 00:30:34.448 [2024-12-05 12:14:08.562670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.448 [2024-12-05 12:14:08.562701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.448 qpair failed and we were unable to recover it. 00:30:34.448 [2024-12-05 12:14:08.562895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.448 [2024-12-05 12:14:08.562928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.448 qpair failed and we were unable to recover it. 
00:30:34.448 [2024-12-05 12:14:08.563102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.448 [2024-12-05 12:14:08.563133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.448 qpair failed and we were unable to recover it. 00:30:34.448 [2024-12-05 12:14:08.563240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.448 [2024-12-05 12:14:08.563272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.448 qpair failed and we were unable to recover it. 00:30:34.448 [2024-12-05 12:14:08.563536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.448 [2024-12-05 12:14:08.563569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.448 qpair failed and we were unable to recover it. 00:30:34.448 [2024-12-05 12:14:08.563685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.448 [2024-12-05 12:14:08.563718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.448 qpair failed and we were unable to recover it. 00:30:34.448 [2024-12-05 12:14:08.563838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.448 [2024-12-05 12:14:08.563869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.448 qpair failed and we were unable to recover it. 
00:30:34.448 [2024-12-05 12:14:08.563979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.448 [2024-12-05 12:14:08.564011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.448 qpair failed and we were unable to recover it. 00:30:34.448 [2024-12-05 12:14:08.564138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.448 [2024-12-05 12:14:08.564171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.448 qpair failed and we were unable to recover it. 00:30:34.448 [2024-12-05 12:14:08.564304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.448 [2024-12-05 12:14:08.564336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.448 qpair failed and we were unable to recover it. 00:30:34.448 [2024-12-05 12:14:08.564456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.448 [2024-12-05 12:14:08.564489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.448 qpair failed and we were unable to recover it. 00:30:34.448 [2024-12-05 12:14:08.564601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.448 [2024-12-05 12:14:08.564632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.448 qpair failed and we were unable to recover it. 
00:30:34.448 [2024-12-05 12:14:08.564767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.448 [2024-12-05 12:14:08.564799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.448 qpair failed and we were unable to recover it. 00:30:34.448 [2024-12-05 12:14:08.564924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.448 [2024-12-05 12:14:08.564955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.448 qpair failed and we were unable to recover it. 00:30:34.448 [2024-12-05 12:14:08.565186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.448 [2024-12-05 12:14:08.565259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.448 qpair failed and we were unable to recover it. 00:30:34.448 [2024-12-05 12:14:08.565491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.448 [2024-12-05 12:14:08.565529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.448 qpair failed and we were unable to recover it. 00:30:34.448 [2024-12-05 12:14:08.565708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.448 [2024-12-05 12:14:08.565742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.448 qpair failed and we were unable to recover it. 
00:30:34.448 [2024-12-05 12:14:08.565866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.448 [2024-12-05 12:14:08.565898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.448 qpair failed and we were unable to recover it. 00:30:34.448 [2024-12-05 12:14:08.566020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.448 [2024-12-05 12:14:08.566051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.448 qpair failed and we were unable to recover it. 00:30:34.448 [2024-12-05 12:14:08.566298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.448 [2024-12-05 12:14:08.566330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.448 qpair failed and we were unable to recover it. 00:30:34.448 [2024-12-05 12:14:08.566519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.448 [2024-12-05 12:14:08.566552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.448 qpair failed and we were unable to recover it. 00:30:34.449 [2024-12-05 12:14:08.566757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.449 [2024-12-05 12:14:08.566789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.449 qpair failed and we were unable to recover it. 
00:30:34.449 [2024-12-05 12:14:08.567062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.449 [2024-12-05 12:14:08.567093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.449 qpair failed and we were unable to recover it. 00:30:34.449 [2024-12-05 12:14:08.567292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.449 [2024-12-05 12:14:08.567324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.449 qpair failed and we were unable to recover it. 00:30:34.449 [2024-12-05 12:14:08.567532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.449 [2024-12-05 12:14:08.567565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.449 qpair failed and we were unable to recover it. 00:30:34.449 [2024-12-05 12:14:08.567696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.449 [2024-12-05 12:14:08.567729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.449 qpair failed and we were unable to recover it. 00:30:34.449 [2024-12-05 12:14:08.567911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.449 [2024-12-05 12:14:08.567942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.449 qpair failed and we were unable to recover it. 
00:30:34.449 [2024-12-05 12:14:08.568118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.449 [2024-12-05 12:14:08.568159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.449 qpair failed and we were unable to recover it.
00:30:34.449 [2024-12-05 12:14:08.568348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.449 [2024-12-05 12:14:08.568396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.449 qpair failed and we were unable to recover it.
00:30:34.449 [2024-12-05 12:14:08.568516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.449 [2024-12-05 12:14:08.568548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.449 qpair failed and we were unable to recover it.
00:30:34.449 [2024-12-05 12:14:08.568659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.449 [2024-12-05 12:14:08.568691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.449 qpair failed and we were unable to recover it.
00:30:34.449 [2024-12-05 12:14:08.568798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.449 [2024-12-05 12:14:08.568831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.449 qpair failed and we were unable to recover it.
00:30:34.449 [2024-12-05 12:14:08.569105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.449 [2024-12-05 12:14:08.569137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.449 qpair failed and we were unable to recover it.
00:30:34.449 [2024-12-05 12:14:08.569329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.449 [2024-12-05 12:14:08.569360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.449 qpair failed and we were unable to recover it.
00:30:34.449 [2024-12-05 12:14:08.569568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.449 [2024-12-05 12:14:08.569599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.449 qpair failed and we were unable to recover it.
00:30:34.449 [2024-12-05 12:14:08.569779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.449 [2024-12-05 12:14:08.569811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.449 qpair failed and we were unable to recover it.
00:30:34.449 [2024-12-05 12:14:08.569930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.449 [2024-12-05 12:14:08.569962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.449 qpair failed and we were unable to recover it.
00:30:34.449 [2024-12-05 12:14:08.570159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.449 [2024-12-05 12:14:08.570193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.449 qpair failed and we were unable to recover it.
00:30:34.449 [2024-12-05 12:14:08.570331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.449 [2024-12-05 12:14:08.570362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.449 qpair failed and we were unable to recover it.
00:30:34.449 [2024-12-05 12:14:08.570543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.449 [2024-12-05 12:14:08.570576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.449 qpair failed and we were unable to recover it.
00:30:34.449 [2024-12-05 12:14:08.570762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.449 [2024-12-05 12:14:08.570793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.449 qpair failed and we were unable to recover it.
00:30:34.449 [2024-12-05 12:14:08.570980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.449 [2024-12-05 12:14:08.571011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.449 qpair failed and we were unable to recover it.
00:30:34.449 [2024-12-05 12:14:08.571255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.449 [2024-12-05 12:14:08.571287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.449 qpair failed and we were unable to recover it.
00:30:34.449 [2024-12-05 12:14:08.571532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.449 [2024-12-05 12:14:08.571564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.449 qpair failed and we were unable to recover it.
00:30:34.449 [2024-12-05 12:14:08.571695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.449 [2024-12-05 12:14:08.571730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.449 qpair failed and we were unable to recover it.
00:30:34.449 [2024-12-05 12:14:08.571916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.449 [2024-12-05 12:14:08.571947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.449 qpair failed and we were unable to recover it.
00:30:34.449 [2024-12-05 12:14:08.572081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.449 [2024-12-05 12:14:08.572112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.449 qpair failed and we were unable to recover it.
00:30:34.449 [2024-12-05 12:14:08.572306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.449 [2024-12-05 12:14:08.572339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.449 qpair failed and we were unable to recover it.
00:30:34.449 [2024-12-05 12:14:08.572557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.449 [2024-12-05 12:14:08.572590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.449 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.572740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.572772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.572987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.573020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.573127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.573158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.573284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.573316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.573454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.573486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.573778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.573810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.573987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.574018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.574149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.574180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.574318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.574349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.574509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.574542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.574737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.574770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.574948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.574979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.575228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.575260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.575557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.575591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.575725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.575757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.576025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.576057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.576180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.576212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.576393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.576427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.576569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.576606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.576876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.576907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.577023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.577055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.577243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.577275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.577461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.577493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.577615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.577646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.577834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.577868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.578053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.578084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.578268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.578300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.578505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.578538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.578813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.578845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.579096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.579128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.579339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.579381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.579500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.579531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.579788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.579820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.580006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.580038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.580173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.580205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.580392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.580424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.580620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.580651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.580896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.580928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.581115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.581146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.581414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.581446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.581633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.581665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.581795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.581826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.582033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.582064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.582192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.582225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.582417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.582450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.582770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.582843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.583128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.583164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.450 [2024-12-05 12:14:08.583351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.450 [2024-12-05 12:14:08.583395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.450 qpair failed and we were unable to recover it.
00:30:34.451 [2024-12-05 12:14:08.583667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.451 [2024-12-05 12:14:08.583699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.451 qpair failed and we were unable to recover it.
00:30:34.451 [2024-12-05 12:14:08.583826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.451 [2024-12-05 12:14:08.583858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.451 qpair failed and we were unable to recover it.
00:30:34.451 [2024-12-05 12:14:08.584045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.451 [2024-12-05 12:14:08.584077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.451 qpair failed and we were unable to recover it.
00:30:34.451 [2024-12-05 12:14:08.584260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.451 [2024-12-05 12:14:08.584292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.451 qpair failed and we were unable to recover it.
00:30:34.451 [2024-12-05 12:14:08.584465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.451 [2024-12-05 12:14:08.584499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.451 qpair failed and we were unable to recover it.
00:30:34.451 [2024-12-05 12:14:08.584691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.451 [2024-12-05 12:14:08.584722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.451 qpair failed and we were unable to recover it.
00:30:34.451 [2024-12-05 12:14:08.584913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.451 [2024-12-05 12:14:08.584944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.451 qpair failed and we were unable to recover it.
00:30:34.451 [2024-12-05 12:14:08.585121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.451 [2024-12-05 12:14:08.585153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.451 qpair failed and we were unable to recover it.
00:30:34.451 [2024-12-05 12:14:08.585271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.451 [2024-12-05 12:14:08.585302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.451 qpair failed and we were unable to recover it.
00:30:34.451 [2024-12-05 12:14:08.585480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.451 [2024-12-05 12:14:08.585512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.451 qpair failed and we were unable to recover it.
00:30:34.451 [2024-12-05 12:14:08.585632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.451 [2024-12-05 12:14:08.585662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.451 qpair failed and we were unable to recover it.
00:30:34.451 [2024-12-05 12:14:08.585861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.451 [2024-12-05 12:14:08.585893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.451 qpair failed and we were unable to recover it.
00:30:34.451 [2024-12-05 12:14:08.586037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.451 [2024-12-05 12:14:08.586069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.451 qpair failed and we were unable to recover it.
00:30:34.451 [2024-12-05 12:14:08.586247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.451 [2024-12-05 12:14:08.586278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.451 qpair failed and we were unable to recover it.
00:30:34.451 [2024-12-05 12:14:08.586481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.451 [2024-12-05 12:14:08.586514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.451 qpair failed and we were unable to recover it.
00:30:34.451 [2024-12-05 12:14:08.586635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.451 [2024-12-05 12:14:08.586668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.451 qpair failed and we were unable to recover it.
00:30:34.451 [2024-12-05 12:14:08.586793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.451 [2024-12-05 12:14:08.586824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.451 qpair failed and we were unable to recover it.
00:30:34.451 [2024-12-05 12:14:08.587007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.451 [2024-12-05 12:14:08.587040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.451 qpair failed and we were unable to recover it.
00:30:34.451 [2024-12-05 12:14:08.587239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.451 [2024-12-05 12:14:08.587270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.451 qpair failed and we were unable to recover it.
00:30:34.451 [2024-12-05 12:14:08.587538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.451 [2024-12-05 12:14:08.587571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.451 qpair failed and we were unable to recover it.
00:30:34.451 [2024-12-05 12:14:08.587678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.451 [2024-12-05 12:14:08.587710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.451 qpair failed and we were unable to recover it.
00:30:34.451 [2024-12-05 12:14:08.587925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.451 [2024-12-05 12:14:08.587956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.451 qpair failed and we were unable to recover it.
00:30:34.451 [2024-12-05 12:14:08.588233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.451 [2024-12-05 12:14:08.588265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.451 qpair failed and we were unable to recover it.
00:30:34.451 [2024-12-05 12:14:08.588485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.451 [2024-12-05 12:14:08.588517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.451 qpair failed and we were unable to recover it.
00:30:34.451 [2024-12-05 12:14:08.588649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.451 [2024-12-05 12:14:08.588686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.451 qpair failed and we were unable to recover it.
00:30:34.451 [2024-12-05 12:14:08.588815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.451 [2024-12-05 12:14:08.588847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.451 qpair failed and we were unable to recover it.
00:30:34.451 [2024-12-05 12:14:08.589020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.451 [2024-12-05 12:14:08.589053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.451 qpair failed and we were unable to recover it.
00:30:34.451 [2024-12-05 12:14:08.589224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.451 [2024-12-05 12:14:08.589254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.451 qpair failed and we were unable to recover it.
00:30:34.451 [2024-12-05 12:14:08.589391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.451 [2024-12-05 12:14:08.589424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.451 qpair failed and we were unable to recover it.
00:30:34.451 [2024-12-05 12:14:08.589557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.451 [2024-12-05 12:14:08.589590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.451 qpair failed and we were unable to recover it.
00:30:34.451 [2024-12-05 12:14:08.589772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.451 [2024-12-05 12:14:08.589804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.451 qpair failed and we were unable to recover it.
00:30:34.451 [2024-12-05 12:14:08.589924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.451 [2024-12-05 12:14:08.589954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.451 qpair failed and we were unable to recover it.
00:30:34.451 [2024-12-05 12:14:08.590086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.451 [2024-12-05 12:14:08.590119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.451 qpair failed and we were unable to recover it.
00:30:34.451 [2024-12-05 12:14:08.590308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.451 [2024-12-05 12:14:08.590340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.451 qpair failed and we were unable to recover it.
00:30:34.451 [2024-12-05 12:14:08.590538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.451 [2024-12-05 12:14:08.590569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.451 qpair failed and we were unable to recover it.
00:30:34.451 [2024-12-05 12:14:08.590777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.451 [2024-12-05 12:14:08.590808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.451 qpair failed and we were unable to recover it.
00:30:34.451 [2024-12-05 12:14:08.590990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.451 [2024-12-05 12:14:08.591021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.451 qpair failed and we were unable to recover it.
00:30:34.451 [2024-12-05 12:14:08.591131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.451 [2024-12-05 12:14:08.591162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.452 qpair failed and we were unable to recover it.
00:30:34.452 [2024-12-05 12:14:08.591298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.452 [2024-12-05 12:14:08.591329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.452 qpair failed and we were unable to recover it.
00:30:34.452 [2024-12-05 12:14:08.591611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.452 [2024-12-05 12:14:08.591645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.452 qpair failed and we were unable to recover it.
00:30:34.452 [2024-12-05 12:14:08.591771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.452 [2024-12-05 12:14:08.591803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.452 qpair failed and we were unable to recover it.
00:30:34.452 [2024-12-05 12:14:08.591928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.452 [2024-12-05 12:14:08.591959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.452 qpair failed and we were unable to recover it.
00:30:34.452 [2024-12-05 12:14:08.592080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.452 [2024-12-05 12:14:08.592112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.452 qpair failed and we were unable to recover it.
00:30:34.452 [2024-12-05 12:14:08.592284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.452 [2024-12-05 12:14:08.592317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.452 qpair failed and we were unable to recover it.
00:30:34.452 [2024-12-05 12:14:08.592531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.452 [2024-12-05 12:14:08.592563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.452 qpair failed and we were unable to recover it.
00:30:34.452 [2024-12-05 12:14:08.592850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.452 [2024-12-05 12:14:08.592882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.452 qpair failed and we were unable to recover it. 00:30:34.452 [2024-12-05 12:14:08.593071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.452 [2024-12-05 12:14:08.593102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.452 qpair failed and we were unable to recover it. 00:30:34.452 [2024-12-05 12:14:08.593222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.452 [2024-12-05 12:14:08.593254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.452 qpair failed and we were unable to recover it. 00:30:34.452 [2024-12-05 12:14:08.593438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.452 [2024-12-05 12:14:08.593471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.452 qpair failed and we were unable to recover it. 00:30:34.452 [2024-12-05 12:14:08.593658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.452 [2024-12-05 12:14:08.593690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.452 qpair failed and we were unable to recover it. 
00:30:34.452 [2024-12-05 12:14:08.593878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.452 [2024-12-05 12:14:08.593912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.452 qpair failed and we were unable to recover it. 00:30:34.452 [2024-12-05 12:14:08.594025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.452 [2024-12-05 12:14:08.594056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.452 qpair failed and we were unable to recover it. 00:30:34.452 [2024-12-05 12:14:08.594249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.452 [2024-12-05 12:14:08.594281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.452 qpair failed and we were unable to recover it. 00:30:34.452 [2024-12-05 12:14:08.594396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.452 [2024-12-05 12:14:08.594429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.452 qpair failed and we were unable to recover it. 00:30:34.452 [2024-12-05 12:14:08.594672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.452 [2024-12-05 12:14:08.594704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.452 qpair failed and we were unable to recover it. 
00:30:34.452 [2024-12-05 12:14:08.594849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.452 [2024-12-05 12:14:08.594882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.452 qpair failed and we were unable to recover it. 00:30:34.452 [2024-12-05 12:14:08.595011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.452 [2024-12-05 12:14:08.595043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.452 qpair failed and we were unable to recover it. 00:30:34.452 [2024-12-05 12:14:08.595291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.452 [2024-12-05 12:14:08.595322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.452 qpair failed and we were unable to recover it. 00:30:34.452 [2024-12-05 12:14:08.595461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.452 [2024-12-05 12:14:08.595494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.452 qpair failed and we were unable to recover it. 00:30:34.452 [2024-12-05 12:14:08.595806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.452 [2024-12-05 12:14:08.595839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.452 qpair failed and we were unable to recover it. 
00:30:34.452 [2024-12-05 12:14:08.596015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.452 [2024-12-05 12:14:08.596046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.452 qpair failed and we were unable to recover it. 00:30:34.452 [2024-12-05 12:14:08.596338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.452 [2024-12-05 12:14:08.596378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.452 qpair failed and we were unable to recover it. 00:30:34.452 [2024-12-05 12:14:08.596633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.452 [2024-12-05 12:14:08.596665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.452 qpair failed and we were unable to recover it. 00:30:34.452 [2024-12-05 12:14:08.596841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.452 [2024-12-05 12:14:08.596872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.452 qpair failed and we were unable to recover it. 00:30:34.452 [2024-12-05 12:14:08.597061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.452 [2024-12-05 12:14:08.597094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.452 qpair failed and we were unable to recover it. 
00:30:34.452 [2024-12-05 12:14:08.597289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.452 [2024-12-05 12:14:08.597323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.452 qpair failed and we were unable to recover it. 00:30:34.452 [2024-12-05 12:14:08.597471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.452 [2024-12-05 12:14:08.597504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.452 qpair failed and we were unable to recover it. 00:30:34.452 [2024-12-05 12:14:08.597750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.452 [2024-12-05 12:14:08.597782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.452 qpair failed and we were unable to recover it. 00:30:34.452 [2024-12-05 12:14:08.597969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.452 [2024-12-05 12:14:08.598001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.452 qpair failed and we were unable to recover it. 00:30:34.452 [2024-12-05 12:14:08.598118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.452 [2024-12-05 12:14:08.598149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.452 qpair failed and we were unable to recover it. 
00:30:34.452 [2024-12-05 12:14:08.598260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.452 [2024-12-05 12:14:08.598291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.452 qpair failed and we were unable to recover it. 00:30:34.452 [2024-12-05 12:14:08.598425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.452 [2024-12-05 12:14:08.598458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.452 qpair failed and we were unable to recover it. 00:30:34.452 [2024-12-05 12:14:08.598639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.452 [2024-12-05 12:14:08.598672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.453 qpair failed and we were unable to recover it. 00:30:34.453 [2024-12-05 12:14:08.598799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.453 [2024-12-05 12:14:08.598830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.453 qpair failed and we were unable to recover it. 00:30:34.453 [2024-12-05 12:14:08.599005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.453 [2024-12-05 12:14:08.599037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.453 qpair failed and we were unable to recover it. 
00:30:34.453 [2024-12-05 12:14:08.599143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.453 [2024-12-05 12:14:08.599175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.453 qpair failed and we were unable to recover it. 00:30:34.453 [2024-12-05 12:14:08.599361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.453 [2024-12-05 12:14:08.599411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.453 qpair failed and we were unable to recover it. 00:30:34.453 [2024-12-05 12:14:08.599524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.453 [2024-12-05 12:14:08.599556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.453 qpair failed and we were unable to recover it. 00:30:34.453 [2024-12-05 12:14:08.599665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.453 [2024-12-05 12:14:08.599697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.453 qpair failed and we were unable to recover it. 00:30:34.453 [2024-12-05 12:14:08.599815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.453 [2024-12-05 12:14:08.599846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.453 qpair failed and we were unable to recover it. 
00:30:34.453 [2024-12-05 12:14:08.599953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.453 [2024-12-05 12:14:08.599985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.453 qpair failed and we were unable to recover it. 00:30:34.453 [2024-12-05 12:14:08.600113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.453 [2024-12-05 12:14:08.600144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.453 qpair failed and we were unable to recover it. 00:30:34.453 [2024-12-05 12:14:08.600268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.453 [2024-12-05 12:14:08.600299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.453 qpair failed and we were unable to recover it. 00:30:34.453 [2024-12-05 12:14:08.600482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.453 [2024-12-05 12:14:08.600515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.453 qpair failed and we were unable to recover it. 00:30:34.453 [2024-12-05 12:14:08.600759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.453 [2024-12-05 12:14:08.600790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.453 qpair failed and we were unable to recover it. 
00:30:34.453 [2024-12-05 12:14:08.600909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.453 [2024-12-05 12:14:08.600940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.453 qpair failed and we were unable to recover it. 00:30:34.453 [2024-12-05 12:14:08.601060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.453 [2024-12-05 12:14:08.601092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.453 qpair failed and we were unable to recover it. 00:30:34.453 [2024-12-05 12:14:08.601282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.453 [2024-12-05 12:14:08.601314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.453 qpair failed and we were unable to recover it. 00:30:34.453 [2024-12-05 12:14:08.601507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.453 [2024-12-05 12:14:08.601540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.453 qpair failed and we were unable to recover it. 00:30:34.453 [2024-12-05 12:14:08.601661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.453 [2024-12-05 12:14:08.601693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.453 qpair failed and we were unable to recover it. 
00:30:34.453 [2024-12-05 12:14:08.601829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.453 [2024-12-05 12:14:08.601861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.453 qpair failed and we were unable to recover it. 00:30:34.453 [2024-12-05 12:14:08.601968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.453 [2024-12-05 12:14:08.602000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.453 qpair failed and we were unable to recover it. 00:30:34.453 [2024-12-05 12:14:08.602176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.453 [2024-12-05 12:14:08.602219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.453 qpair failed and we were unable to recover it. 00:30:34.453 [2024-12-05 12:14:08.602397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.453 [2024-12-05 12:14:08.602430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.453 qpair failed and we were unable to recover it. 00:30:34.453 [2024-12-05 12:14:08.602534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.453 [2024-12-05 12:14:08.602565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.453 qpair failed and we were unable to recover it. 
00:30:34.453 [2024-12-05 12:14:08.602742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.453 [2024-12-05 12:14:08.602775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.453 qpair failed and we were unable to recover it. 00:30:34.453 [2024-12-05 12:14:08.602886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.453 [2024-12-05 12:14:08.602918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.453 qpair failed and we were unable to recover it. 00:30:34.453 [2024-12-05 12:14:08.603042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.453 [2024-12-05 12:14:08.603074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.453 qpair failed and we were unable to recover it. 00:30:34.453 [2024-12-05 12:14:08.603195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.453 [2024-12-05 12:14:08.603226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.453 qpair failed and we were unable to recover it. 00:30:34.453 [2024-12-05 12:14:08.603376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.453 [2024-12-05 12:14:08.603410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.453 qpair failed and we were unable to recover it. 
00:30:34.453 [2024-12-05 12:14:08.603533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.453 [2024-12-05 12:14:08.603565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.453 qpair failed and we were unable to recover it. 00:30:34.453 [2024-12-05 12:14:08.603673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.453 [2024-12-05 12:14:08.603705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.453 qpair failed and we were unable to recover it. 00:30:34.453 [2024-12-05 12:14:08.603817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.453 [2024-12-05 12:14:08.603848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.453 qpair failed and we were unable to recover it. 00:30:34.453 [2024-12-05 12:14:08.604098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.453 [2024-12-05 12:14:08.604130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.453 qpair failed and we were unable to recover it. 00:30:34.453 [2024-12-05 12:14:08.604269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.453 [2024-12-05 12:14:08.604301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.453 qpair failed and we were unable to recover it. 
00:30:34.453 [2024-12-05 12:14:08.604409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.453 [2024-12-05 12:14:08.604440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.453 qpair failed and we were unable to recover it. 00:30:34.453 [2024-12-05 12:14:08.604660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.453 [2024-12-05 12:14:08.604692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.453 qpair failed and we were unable to recover it. 00:30:34.453 [2024-12-05 12:14:08.604960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.453 [2024-12-05 12:14:08.604990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.453 qpair failed and we were unable to recover it. 00:30:34.453 [2024-12-05 12:14:08.605104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.453 [2024-12-05 12:14:08.605136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.453 qpair failed and we were unable to recover it. 00:30:34.453 [2024-12-05 12:14:08.605254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.453 [2024-12-05 12:14:08.605286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.453 qpair failed and we were unable to recover it. 
00:30:34.454 [2024-12-05 12:14:08.605411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.454 [2024-12-05 12:14:08.605442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.454 qpair failed and we were unable to recover it. 00:30:34.454 [2024-12-05 12:14:08.605571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.454 [2024-12-05 12:14:08.605602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.454 qpair failed and we were unable to recover it. 00:30:34.454 [2024-12-05 12:14:08.605706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.454 [2024-12-05 12:14:08.605736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.454 qpair failed and we were unable to recover it. 00:30:34.454 [2024-12-05 12:14:08.605922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.454 [2024-12-05 12:14:08.605954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.454 qpair failed and we were unable to recover it. 00:30:34.454 [2024-12-05 12:14:08.606071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.454 [2024-12-05 12:14:08.606103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.454 qpair failed and we were unable to recover it. 
00:30:34.454 [2024-12-05 12:14:08.606239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.454 [2024-12-05 12:14:08.606272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.454 qpair failed and we were unable to recover it. 00:30:34.454 [2024-12-05 12:14:08.606443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.454 [2024-12-05 12:14:08.606476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.454 qpair failed and we were unable to recover it. 00:30:34.454 [2024-12-05 12:14:08.606603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.454 [2024-12-05 12:14:08.606635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.454 qpair failed and we were unable to recover it. 00:30:34.454 [2024-12-05 12:14:08.606858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.454 [2024-12-05 12:14:08.606890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.454 qpair failed and we were unable to recover it. 00:30:34.454 [2024-12-05 12:14:08.607003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.454 [2024-12-05 12:14:08.607034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.454 qpair failed and we were unable to recover it. 
00:30:34.454 [2024-12-05 12:14:08.607303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.454 [2024-12-05 12:14:08.607336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.454 qpair failed and we were unable to recover it. 00:30:34.454 [2024-12-05 12:14:08.607472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.454 [2024-12-05 12:14:08.607506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.454 qpair failed and we were unable to recover it. 00:30:34.454 [2024-12-05 12:14:08.607623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.454 [2024-12-05 12:14:08.607654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.454 qpair failed and we were unable to recover it. 00:30:34.454 [2024-12-05 12:14:08.607766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.454 [2024-12-05 12:14:08.607798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.454 qpair failed and we were unable to recover it. 00:30:34.454 [2024-12-05 12:14:08.607993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.454 [2024-12-05 12:14:08.608025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.454 qpair failed and we were unable to recover it. 
00:30:34.454 [2024-12-05 12:14:08.608222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.454 [2024-12-05 12:14:08.608254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.454 qpair failed and we were unable to recover it. 00:30:34.454 [2024-12-05 12:14:08.608381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.454 [2024-12-05 12:14:08.608414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.454 qpair failed and we were unable to recover it. 00:30:34.454 [2024-12-05 12:14:08.608630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.454 [2024-12-05 12:14:08.608661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.454 qpair failed and we were unable to recover it. 00:30:34.454 [2024-12-05 12:14:08.608847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.454 [2024-12-05 12:14:08.608879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.454 qpair failed and we were unable to recover it. 00:30:34.454 [2024-12-05 12:14:08.609009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.454 [2024-12-05 12:14:08.609040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.454 qpair failed and we were unable to recover it. 
00:30:34.454 [2024-12-05 12:14:08.609158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.454 [2024-12-05 12:14:08.609189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.454 qpair failed and we were unable to recover it. 00:30:34.454 [2024-12-05 12:14:08.609384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.454 [2024-12-05 12:14:08.609417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.454 qpair failed and we were unable to recover it. 00:30:34.454 [2024-12-05 12:14:08.609537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.454 [2024-12-05 12:14:08.609569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.454 qpair failed and we were unable to recover it. 00:30:34.454 [2024-12-05 12:14:08.609756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.454 [2024-12-05 12:14:08.609794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.454 qpair failed and we were unable to recover it. 00:30:34.454 [2024-12-05 12:14:08.609970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.454 [2024-12-05 12:14:08.610002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.737 qpair failed and we were unable to recover it. 
00:30:34.737 [2024-12-05 12:14:08.610120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.737 [2024-12-05 12:14:08.610151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.737 qpair failed and we were unable to recover it. 00:30:34.737 [2024-12-05 12:14:08.610393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.737 [2024-12-05 12:14:08.610425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.737 qpair failed and we were unable to recover it. 00:30:34.737 [2024-12-05 12:14:08.610544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.737 [2024-12-05 12:14:08.610573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.737 qpair failed and we were unable to recover it. 00:30:34.737 [2024-12-05 12:14:08.610745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.737 [2024-12-05 12:14:08.610777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.737 qpair failed and we were unable to recover it. 00:30:34.737 [2024-12-05 12:14:08.610881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.738 [2024-12-05 12:14:08.610912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.738 qpair failed and we were unable to recover it. 
00:30:34.738 [2024-12-05 12:14:08.611030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.738 [2024-12-05 12:14:08.611062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.738 qpair failed and we were unable to recover it. 00:30:34.738 [2024-12-05 12:14:08.611193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.738 [2024-12-05 12:14:08.611225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.738 qpair failed and we were unable to recover it. 00:30:34.738 [2024-12-05 12:14:08.611336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.738 [2024-12-05 12:14:08.611374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.738 qpair failed and we were unable to recover it. 00:30:34.738 [2024-12-05 12:14:08.611481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.738 [2024-12-05 12:14:08.611513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.738 qpair failed and we were unable to recover it. 00:30:34.738 [2024-12-05 12:14:08.611615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.738 [2024-12-05 12:14:08.611645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.738 qpair failed and we were unable to recover it. 
00:30:34.738 [2024-12-05 12:14:08.611763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.738 [2024-12-05 12:14:08.611794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.738 qpair failed and we were unable to recover it. 00:30:34.738 [2024-12-05 12:14:08.611900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.738 [2024-12-05 12:14:08.611931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.738 qpair failed and we were unable to recover it. 00:30:34.738 [2024-12-05 12:14:08.612060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.738 [2024-12-05 12:14:08.612092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.738 qpair failed and we were unable to recover it. 00:30:34.738 [2024-12-05 12:14:08.612216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.738 [2024-12-05 12:14:08.612247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.738 qpair failed and we were unable to recover it. 00:30:34.738 [2024-12-05 12:14:08.612408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.738 [2024-12-05 12:14:08.612440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.738 qpair failed and we were unable to recover it. 
00:30:34.738 [2024-12-05 12:14:08.612622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.738 [2024-12-05 12:14:08.612654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.738 qpair failed and we were unable to recover it. 00:30:34.738 [2024-12-05 12:14:08.612780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.738 [2024-12-05 12:14:08.612811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.738 qpair failed and we were unable to recover it. 00:30:34.738 [2024-12-05 12:14:08.612983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.738 [2024-12-05 12:14:08.613013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.738 qpair failed and we were unable to recover it. 00:30:34.738 [2024-12-05 12:14:08.613184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.738 [2024-12-05 12:14:08.613215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.738 qpair failed and we were unable to recover it. 00:30:34.738 [2024-12-05 12:14:08.613403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.738 [2024-12-05 12:14:08.613435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.738 qpair failed and we were unable to recover it. 
00:30:34.738 [2024-12-05 12:14:08.613630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.738 [2024-12-05 12:14:08.613663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.738 qpair failed and we were unable to recover it. 00:30:34.738 [2024-12-05 12:14:08.613838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.738 [2024-12-05 12:14:08.613870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.738 qpair failed and we were unable to recover it. 00:30:34.738 [2024-12-05 12:14:08.613990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.738 [2024-12-05 12:14:08.614022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.738 qpair failed and we were unable to recover it. 00:30:34.738 [2024-12-05 12:14:08.614234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.738 [2024-12-05 12:14:08.614265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.738 qpair failed and we were unable to recover it. 00:30:34.738 [2024-12-05 12:14:08.614393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.738 [2024-12-05 12:14:08.614425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.738 qpair failed and we were unable to recover it. 
00:30:34.738 [2024-12-05 12:14:08.614543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.738 [2024-12-05 12:14:08.614581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.738 qpair failed and we were unable to recover it. 00:30:34.738 [2024-12-05 12:14:08.614845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.738 [2024-12-05 12:14:08.614877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.738 qpair failed and we were unable to recover it. 00:30:34.738 [2024-12-05 12:14:08.615000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.738 [2024-12-05 12:14:08.615032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.738 qpair failed and we were unable to recover it. 00:30:34.738 [2024-12-05 12:14:08.615233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.738 [2024-12-05 12:14:08.615264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.738 qpair failed and we were unable to recover it. 00:30:34.738 [2024-12-05 12:14:08.615435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.738 [2024-12-05 12:14:08.615468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.738 qpair failed and we were unable to recover it. 
00:30:34.738 [2024-12-05 12:14:08.615586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.738 [2024-12-05 12:14:08.615618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.738 qpair failed and we were unable to recover it. 00:30:34.738 [2024-12-05 12:14:08.615737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.738 [2024-12-05 12:14:08.615768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.738 qpair failed and we were unable to recover it. 00:30:34.738 [2024-12-05 12:14:08.615873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.738 [2024-12-05 12:14:08.615905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.738 qpair failed and we were unable to recover it. 00:30:34.738 [2024-12-05 12:14:08.616025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.738 [2024-12-05 12:14:08.616056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.738 qpair failed and we were unable to recover it. 00:30:34.739 [2024-12-05 12:14:08.616177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.739 [2024-12-05 12:14:08.616208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.739 qpair failed and we were unable to recover it. 
00:30:34.739 [2024-12-05 12:14:08.616391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.739 [2024-12-05 12:14:08.616424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.739 qpair failed and we were unable to recover it. 00:30:34.739 [2024-12-05 12:14:08.616541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.739 [2024-12-05 12:14:08.616572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.739 qpair failed and we were unable to recover it. 00:30:34.739 [2024-12-05 12:14:08.616753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.739 [2024-12-05 12:14:08.616784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.739 qpair failed and we were unable to recover it. 00:30:34.739 [2024-12-05 12:14:08.616957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.739 [2024-12-05 12:14:08.616990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.739 qpair failed and we were unable to recover it. 00:30:34.739 [2024-12-05 12:14:08.617134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.739 [2024-12-05 12:14:08.617165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.739 qpair failed and we were unable to recover it. 
00:30:34.739 [2024-12-05 12:14:08.617266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.739 [2024-12-05 12:14:08.617298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.739 qpair failed and we were unable to recover it. 00:30:34.739 [2024-12-05 12:14:08.617467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.739 [2024-12-05 12:14:08.617501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.739 qpair failed and we were unable to recover it. 00:30:34.739 [2024-12-05 12:14:08.617682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.739 [2024-12-05 12:14:08.617713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.739 qpair failed and we were unable to recover it. 00:30:34.739 [2024-12-05 12:14:08.617838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.739 [2024-12-05 12:14:08.617870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.739 qpair failed and we were unable to recover it. 00:30:34.739 [2024-12-05 12:14:08.617984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.739 [2024-12-05 12:14:08.618015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.739 qpair failed and we were unable to recover it. 
00:30:34.739 [2024-12-05 12:14:08.618261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.739 [2024-12-05 12:14:08.618293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.739 qpair failed and we were unable to recover it. 00:30:34.739 [2024-12-05 12:14:08.618399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.739 [2024-12-05 12:14:08.618432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.739 qpair failed and we were unable to recover it. 00:30:34.739 [2024-12-05 12:14:08.618531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.739 [2024-12-05 12:14:08.618562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.739 qpair failed and we were unable to recover it. 00:30:34.739 [2024-12-05 12:14:08.618671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.739 [2024-12-05 12:14:08.618702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.739 qpair failed and we were unable to recover it. 00:30:34.739 [2024-12-05 12:14:08.618826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.739 [2024-12-05 12:14:08.618857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.739 qpair failed and we were unable to recover it. 
00:30:34.739 [2024-12-05 12:14:08.619031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.739 [2024-12-05 12:14:08.619063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.739 qpair failed and we were unable to recover it. 00:30:34.739 [2024-12-05 12:14:08.619246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.739 [2024-12-05 12:14:08.619278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.739 qpair failed and we were unable to recover it. 00:30:34.739 [2024-12-05 12:14:08.619409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.739 [2024-12-05 12:14:08.619441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.739 qpair failed and we were unable to recover it. 00:30:34.739 [2024-12-05 12:14:08.619553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.739 [2024-12-05 12:14:08.619584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.739 qpair failed and we were unable to recover it. 00:30:34.739 [2024-12-05 12:14:08.619697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.739 [2024-12-05 12:14:08.619728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.739 qpair failed and we were unable to recover it. 
00:30:34.739 [2024-12-05 12:14:08.619848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.739 [2024-12-05 12:14:08.619879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.739 qpair failed and we were unable to recover it. 00:30:34.739 [2024-12-05 12:14:08.620023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.739 [2024-12-05 12:14:08.620054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.739 qpair failed and we were unable to recover it. 00:30:34.739 [2024-12-05 12:14:08.620230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.739 [2024-12-05 12:14:08.620261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.739 qpair failed and we were unable to recover it. 00:30:34.739 [2024-12-05 12:14:08.620405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.739 [2024-12-05 12:14:08.620439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.739 qpair failed and we were unable to recover it. 00:30:34.739 [2024-12-05 12:14:08.620567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.739 [2024-12-05 12:14:08.620599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.739 qpair failed and we were unable to recover it. 
00:30:34.739 [2024-12-05 12:14:08.620705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.739 [2024-12-05 12:14:08.620736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.739 qpair failed and we were unable to recover it. 00:30:34.739 [2024-12-05 12:14:08.620841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.739 [2024-12-05 12:14:08.620872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.739 qpair failed and we were unable to recover it. 00:30:34.739 [2024-12-05 12:14:08.621050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.739 [2024-12-05 12:14:08.621082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.739 qpair failed and we were unable to recover it. 00:30:34.739 [2024-12-05 12:14:08.621310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.739 [2024-12-05 12:14:08.621342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.739 qpair failed and we were unable to recover it. 00:30:34.739 [2024-12-05 12:14:08.621613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.739 [2024-12-05 12:14:08.621645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.739 qpair failed and we were unable to recover it. 
00:30:34.739 [2024-12-05 12:14:08.621749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.739 [2024-12-05 12:14:08.621780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.739 qpair failed and we were unable to recover it. 00:30:34.740 [2024-12-05 12:14:08.621894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.740 [2024-12-05 12:14:08.621931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.740 qpair failed and we were unable to recover it. 00:30:34.740 [2024-12-05 12:14:08.622125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.740 [2024-12-05 12:14:08.622156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.740 qpair failed and we were unable to recover it. 00:30:34.740 [2024-12-05 12:14:08.622336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.740 [2024-12-05 12:14:08.622377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.740 qpair failed and we were unable to recover it. 00:30:34.740 [2024-12-05 12:14:08.622565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.740 [2024-12-05 12:14:08.622597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.740 qpair failed and we were unable to recover it. 
00:30:34.740 [2024-12-05 12:14:08.622806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.740 [2024-12-05 12:14:08.622838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.740 qpair failed and we were unable to recover it. 00:30:34.740 [2024-12-05 12:14:08.622937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.740 [2024-12-05 12:14:08.622969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.740 qpair failed and we were unable to recover it. 00:30:34.740 [2024-12-05 12:14:08.623212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.740 [2024-12-05 12:14:08.623242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.740 qpair failed and we were unable to recover it. 00:30:34.740 [2024-12-05 12:14:08.623351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.740 [2024-12-05 12:14:08.623404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.740 qpair failed and we were unable to recover it. 00:30:34.740 [2024-12-05 12:14:08.623530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.740 [2024-12-05 12:14:08.623562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.740 qpair failed and we were unable to recover it. 
00:30:34.740 [2024-12-05 12:14:08.623672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.740 [2024-12-05 12:14:08.623703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.740 qpair failed and we were unable to recover it. 00:30:34.740 [2024-12-05 12:14:08.623895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.740 [2024-12-05 12:14:08.623927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.740 qpair failed and we were unable to recover it. 00:30:34.740 [2024-12-05 12:14:08.624060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.740 [2024-12-05 12:14:08.624091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.740 qpair failed and we were unable to recover it. 00:30:34.740 [2024-12-05 12:14:08.624214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.740 [2024-12-05 12:14:08.624245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.740 qpair failed and we were unable to recover it. 00:30:34.740 [2024-12-05 12:14:08.624424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.740 [2024-12-05 12:14:08.624457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.740 qpair failed and we were unable to recover it. 
00:30:34.740 [2024-12-05 12:14:08.624586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.740 [2024-12-05 12:14:08.624619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.740 qpair failed and we were unable to recover it. 00:30:34.740 [2024-12-05 12:14:08.624733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.740 [2024-12-05 12:14:08.624764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.740 qpair failed and we were unable to recover it. 00:30:34.740 [2024-12-05 12:14:08.624869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.740 [2024-12-05 12:14:08.624900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.740 qpair failed and we were unable to recover it. 00:30:34.740 [2024-12-05 12:14:08.625080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.740 [2024-12-05 12:14:08.625112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.740 qpair failed and we were unable to recover it. 00:30:34.740 [2024-12-05 12:14:08.625335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.740 [2024-12-05 12:14:08.625376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.740 qpair failed and we were unable to recover it. 
00:30:34.740 [2024-12-05 12:14:08.625493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.740 [2024-12-05 12:14:08.625524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.740 qpair failed and we were unable to recover it. 00:30:34.740 [2024-12-05 12:14:08.625714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.740 [2024-12-05 12:14:08.625745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.740 qpair failed and we were unable to recover it. 00:30:34.740 [2024-12-05 12:14:08.625855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.740 [2024-12-05 12:14:08.625886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.740 qpair failed and we were unable to recover it. 00:30:34.740 [2024-12-05 12:14:08.626010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.740 [2024-12-05 12:14:08.626042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.740 qpair failed and we were unable to recover it. 00:30:34.740 [2024-12-05 12:14:08.626234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.740 [2024-12-05 12:14:08.626265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.740 qpair failed and we were unable to recover it. 
00:30:34.740 [2024-12-05 12:14:08.626388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.740 [2024-12-05 12:14:08.626431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.740 qpair failed and we were unable to recover it. 00:30:34.740 [2024-12-05 12:14:08.626601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.740 [2024-12-05 12:14:08.626632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.740 qpair failed and we were unable to recover it. 00:30:34.740 [2024-12-05 12:14:08.626735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.740 [2024-12-05 12:14:08.626766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.740 qpair failed and we were unable to recover it. 00:30:34.740 [2024-12-05 12:14:08.626959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.740 [2024-12-05 12:14:08.626997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.740 qpair failed and we were unable to recover it. 00:30:34.740 [2024-12-05 12:14:08.627133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.740 [2024-12-05 12:14:08.627165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.740 qpair failed and we were unable to recover it. 
00:30:34.740 [2024-12-05 12:14:08.627356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.740 [2024-12-05 12:14:08.627395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.740 qpair failed and we were unable to recover it. 00:30:34.740 [2024-12-05 12:14:08.627594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.740 [2024-12-05 12:14:08.627626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.740 qpair failed and we were unable to recover it. 00:30:34.740 [2024-12-05 12:14:08.627832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.740 [2024-12-05 12:14:08.627863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.741 qpair failed and we were unable to recover it. 00:30:34.741 [2024-12-05 12:14:08.628054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.741 [2024-12-05 12:14:08.628085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.741 qpair failed and we were unable to recover it. 00:30:34.741 [2024-12-05 12:14:08.628205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.741 [2024-12-05 12:14:08.628237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.741 qpair failed and we were unable to recover it. 
00:30:34.741 [2024-12-05 12:14:08.628444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.741 [2024-12-05 12:14:08.628478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.741 qpair failed and we were unable to recover it. 00:30:34.741 [2024-12-05 12:14:08.628667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.741 [2024-12-05 12:14:08.628698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.741 qpair failed and we were unable to recover it. 00:30:34.741 [2024-12-05 12:14:08.628819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.741 [2024-12-05 12:14:08.628851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.741 qpair failed and we were unable to recover it. 00:30:34.741 [2024-12-05 12:14:08.628964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.741 [2024-12-05 12:14:08.628995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.741 qpair failed and we were unable to recover it. 00:30:34.741 [2024-12-05 12:14:08.629176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.741 [2024-12-05 12:14:08.629207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.741 qpair failed and we were unable to recover it. 
00:30:34.741 [2024-12-05 12:14:08.629396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.741 [2024-12-05 12:14:08.629429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.741 qpair failed and we were unable to recover it. 00:30:34.741 [2024-12-05 12:14:08.629538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.741 [2024-12-05 12:14:08.629570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.741 qpair failed and we were unable to recover it. 00:30:34.741 [2024-12-05 12:14:08.629735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.741 [2024-12-05 12:14:08.629806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.741 qpair failed and we were unable to recover it. 00:30:34.741 [2024-12-05 12:14:08.629933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.741 [2024-12-05 12:14:08.629968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.741 qpair failed and we were unable to recover it. 00:30:34.741 [2024-12-05 12:14:08.630163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.741 [2024-12-05 12:14:08.630195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.741 qpair failed and we were unable to recover it. 
00:30:34.741 [2024-12-05 12:14:08.630391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.741 [2024-12-05 12:14:08.630423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.741 qpair failed and we were unable to recover it. 00:30:34.741 [2024-12-05 12:14:08.630552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.741 [2024-12-05 12:14:08.630582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.741 qpair failed and we were unable to recover it. 00:30:34.741 [2024-12-05 12:14:08.630751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.741 [2024-12-05 12:14:08.630783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.741 qpair failed and we were unable to recover it. 00:30:34.741 [2024-12-05 12:14:08.630966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.741 [2024-12-05 12:14:08.630996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.741 qpair failed and we were unable to recover it. 00:30:34.741 [2024-12-05 12:14:08.631134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.741 [2024-12-05 12:14:08.631164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.741 qpair failed and we were unable to recover it. 
00:30:34.741 [2024-12-05 12:14:08.631273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.741 [2024-12-05 12:14:08.631305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.741 qpair failed and we were unable to recover it. 00:30:34.741 [2024-12-05 12:14:08.631423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.741 [2024-12-05 12:14:08.631454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.741 qpair failed and we were unable to recover it. 00:30:34.741 [2024-12-05 12:14:08.631629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.741 [2024-12-05 12:14:08.631660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.741 qpair failed and we were unable to recover it. 00:30:34.741 [2024-12-05 12:14:08.631851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.741 [2024-12-05 12:14:08.631882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.741 qpair failed and we were unable to recover it. 00:30:34.741 [2024-12-05 12:14:08.632002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.741 [2024-12-05 12:14:08.632032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.741 qpair failed and we were unable to recover it. 
00:30:34.741 [2024-12-05 12:14:08.632290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.741 [2024-12-05 12:14:08.632329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.741 qpair failed and we were unable to recover it. 00:30:34.741 [2024-12-05 12:14:08.632459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.741 [2024-12-05 12:14:08.632497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.741 qpair failed and we were unable to recover it. 00:30:34.741 [2024-12-05 12:14:08.632609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.741 [2024-12-05 12:14:08.632640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.741 qpair failed and we were unable to recover it. 00:30:34.741 [2024-12-05 12:14:08.632766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.741 [2024-12-05 12:14:08.632796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.741 qpair failed and we were unable to recover it. 00:30:34.741 [2024-12-05 12:14:08.632983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.741 [2024-12-05 12:14:08.633014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.741 qpair failed and we were unable to recover it. 
00:30:34.741 [2024-12-05 12:14:08.633183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.741 [2024-12-05 12:14:08.633214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.741 qpair failed and we were unable to recover it. 00:30:34.741 [2024-12-05 12:14:08.633343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.741 [2024-12-05 12:14:08.633386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.741 qpair failed and we were unable to recover it. 00:30:34.741 [2024-12-05 12:14:08.633496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.741 [2024-12-05 12:14:08.633527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.741 qpair failed and we were unable to recover it. 00:30:34.741 [2024-12-05 12:14:08.633668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.742 [2024-12-05 12:14:08.633699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.742 qpair failed and we were unable to recover it. 00:30:34.742 [2024-12-05 12:14:08.633894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.742 [2024-12-05 12:14:08.633926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.742 qpair failed and we were unable to recover it. 
00:30:34.742 [2024-12-05 12:14:08.634106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.742 [2024-12-05 12:14:08.634137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.742 qpair failed and we were unable to recover it. 00:30:34.742 [2024-12-05 12:14:08.634252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.742 [2024-12-05 12:14:08.634283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.742 qpair failed and we were unable to recover it. 00:30:34.742 [2024-12-05 12:14:08.634466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.742 [2024-12-05 12:14:08.634497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.742 qpair failed and we were unable to recover it. 00:30:34.742 [2024-12-05 12:14:08.634676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.742 [2024-12-05 12:14:08.634707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.742 qpair failed and we were unable to recover it. 00:30:34.742 [2024-12-05 12:14:08.634818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.742 [2024-12-05 12:14:08.634850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.742 qpair failed and we were unable to recover it. 
00:30:34.742 [2024-12-05 12:14:08.635021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.742 [2024-12-05 12:14:08.635051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.742 qpair failed and we were unable to recover it. 00:30:34.742 [2024-12-05 12:14:08.635166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.742 [2024-12-05 12:14:08.635196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.742 qpair failed and we were unable to recover it. 00:30:34.742 [2024-12-05 12:14:08.635385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.742 [2024-12-05 12:14:08.635418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.742 qpair failed and we were unable to recover it. 00:30:34.742 [2024-12-05 12:14:08.635535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.742 [2024-12-05 12:14:08.635565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.742 qpair failed and we were unable to recover it. 00:30:34.742 [2024-12-05 12:14:08.635752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.742 [2024-12-05 12:14:08.635783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.742 qpair failed and we were unable to recover it. 
00:30:34.742 [2024-12-05 12:14:08.635911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.742 [2024-12-05 12:14:08.635942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.742 qpair failed and we were unable to recover it. 00:30:34.742 [2024-12-05 12:14:08.636121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.742 [2024-12-05 12:14:08.636152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.742 qpair failed and we were unable to recover it. 00:30:34.742 [2024-12-05 12:14:08.636263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.742 [2024-12-05 12:14:08.636293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.742 qpair failed and we were unable to recover it. 00:30:34.742 [2024-12-05 12:14:08.636485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.742 [2024-12-05 12:14:08.636517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.742 qpair failed and we were unable to recover it. 00:30:34.742 [2024-12-05 12:14:08.636624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.742 [2024-12-05 12:14:08.636654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.742 qpair failed and we were unable to recover it. 
00:30:34.742 [2024-12-05 12:14:08.636823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.742 [2024-12-05 12:14:08.636854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.742 qpair failed and we were unable to recover it. 00:30:34.742 [2024-12-05 12:14:08.636983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.742 [2024-12-05 12:14:08.637013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.742 qpair failed and we were unable to recover it. 00:30:34.742 [2024-12-05 12:14:08.637196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.742 [2024-12-05 12:14:08.637231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.742 qpair failed and we were unable to recover it. 00:30:34.742 [2024-12-05 12:14:08.637347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.742 [2024-12-05 12:14:08.637388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.742 qpair failed and we were unable to recover it. 00:30:34.742 [2024-12-05 12:14:08.637567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.742 [2024-12-05 12:14:08.637600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.742 qpair failed and we were unable to recover it. 
00:30:34.742 [2024-12-05 12:14:08.637784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.742 [2024-12-05 12:14:08.637815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.742 qpair failed and we were unable to recover it. 00:30:34.742 [2024-12-05 12:14:08.637916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.742 [2024-12-05 12:14:08.637948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.742 qpair failed and we were unable to recover it. 00:30:34.742 [2024-12-05 12:14:08.638137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.742 [2024-12-05 12:14:08.638168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.742 qpair failed and we were unable to recover it. 00:30:34.742 [2024-12-05 12:14:08.638278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.742 [2024-12-05 12:14:08.638309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.742 qpair failed and we were unable to recover it. 00:30:34.742 [2024-12-05 12:14:08.638582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.742 [2024-12-05 12:14:08.638614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.742 qpair failed and we were unable to recover it. 
00:30:34.742 [2024-12-05 12:14:08.638718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.742 [2024-12-05 12:14:08.638750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.742 qpair failed and we were unable to recover it. 00:30:34.742 [2024-12-05 12:14:08.638860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.742 [2024-12-05 12:14:08.638891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.742 qpair failed and we were unable to recover it. 00:30:34.742 [2024-12-05 12:14:08.639008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.742 [2024-12-05 12:14:08.639040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.742 qpair failed and we were unable to recover it. 00:30:34.742 [2024-12-05 12:14:08.639214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.742 [2024-12-05 12:14:08.639245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.742 qpair failed and we were unable to recover it. 00:30:34.742 [2024-12-05 12:14:08.639363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.742 [2024-12-05 12:14:08.639403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.742 qpair failed and we were unable to recover it. 
00:30:34.743 [2024-12-05 12:14:08.639584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.743 [2024-12-05 12:14:08.639615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.743 qpair failed and we were unable to recover it. 00:30:34.743 [2024-12-05 12:14:08.639748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.743 [2024-12-05 12:14:08.639780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.743 qpair failed and we were unable to recover it. 00:30:34.743 [2024-12-05 12:14:08.639944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.743 [2024-12-05 12:14:08.639976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.743 qpair failed and we were unable to recover it. 00:30:34.743 [2024-12-05 12:14:08.640158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.743 [2024-12-05 12:14:08.640190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.743 qpair failed and we were unable to recover it. 00:30:34.743 [2024-12-05 12:14:08.640310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.743 [2024-12-05 12:14:08.640342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.743 qpair failed and we were unable to recover it. 
00:30:34.743 [2024-12-05 12:14:08.640460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.743 [2024-12-05 12:14:08.640493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.743 qpair failed and we were unable to recover it. 00:30:34.743 [2024-12-05 12:14:08.640601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.743 [2024-12-05 12:14:08.640633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.743 qpair failed and we were unable to recover it. 00:30:34.743 [2024-12-05 12:14:08.640737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.743 [2024-12-05 12:14:08.640768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.743 qpair failed and we were unable to recover it. 00:30:34.743 [2024-12-05 12:14:08.640895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.743 [2024-12-05 12:14:08.640926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.743 qpair failed and we were unable to recover it. 00:30:34.743 [2024-12-05 12:14:08.641029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.743 [2024-12-05 12:14:08.641060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.743 qpair failed and we were unable to recover it. 
00:30:34.743 [2024-12-05 12:14:08.641177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.743 [2024-12-05 12:14:08.641208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.743 qpair failed and we were unable to recover it. 00:30:34.743 [2024-12-05 12:14:08.641398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.743 [2024-12-05 12:14:08.641431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.743 qpair failed and we were unable to recover it. 00:30:34.743 [2024-12-05 12:14:08.641553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.743 [2024-12-05 12:14:08.641585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.743 qpair failed and we were unable to recover it. 00:30:34.743 [2024-12-05 12:14:08.641696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.743 [2024-12-05 12:14:08.641727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.743 qpair failed and we were unable to recover it. 00:30:34.743 [2024-12-05 12:14:08.641836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.743 [2024-12-05 12:14:08.641874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.743 qpair failed and we were unable to recover it. 
00:30:34.743 [2024-12-05 12:14:08.642044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.743 [2024-12-05 12:14:08.642075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.743 qpair failed and we were unable to recover it.
00:30:34.743-00:30:34.744 [... the connect()/qpair-failure pair above repeated 39 more times for tqpair=0xc4cbe0, timestamps 12:14:08.642204 through 12:14:08.649196 ...]
00:30:34.744 [2024-12-05 12:14:08.649384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.744 [2024-12-05 12:14:08.649456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.744 qpair failed and we were unable to recover it.
00:30:34.744-00:30:34.747 [... the same connect()/qpair-failure pair repeated 74 more times for tqpair=0x7fc424000b90, timestamps 12:14:08.649601 through 12:14:08.665228 ...]
00:30:34.747 [2024-12-05 12:14:08.665346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.747 [2024-12-05 12:14:08.665385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.747 qpair failed and we were unable to recover it. 00:30:34.747 [2024-12-05 12:14:08.665492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.747 [2024-12-05 12:14:08.665523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.747 qpair failed and we were unable to recover it. 00:30:34.747 [2024-12-05 12:14:08.665694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.747 [2024-12-05 12:14:08.665726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.747 qpair failed and we were unable to recover it. 00:30:34.747 [2024-12-05 12:14:08.665856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.747 [2024-12-05 12:14:08.665887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.747 qpair failed and we were unable to recover it. 00:30:34.747 [2024-12-05 12:14:08.666001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.747 [2024-12-05 12:14:08.666033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.747 qpair failed and we were unable to recover it. 
00:30:34.747 [2024-12-05 12:14:08.666192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.747 [2024-12-05 12:14:08.666264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.747 qpair failed and we were unable to recover it. 00:30:34.747 [2024-12-05 12:14:08.666419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.747 [2024-12-05 12:14:08.666456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.747 qpair failed and we were unable to recover it. 00:30:34.747 [2024-12-05 12:14:08.666580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.747 [2024-12-05 12:14:08.666613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.747 qpair failed and we were unable to recover it. 00:30:34.747 [2024-12-05 12:14:08.666720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.747 [2024-12-05 12:14:08.666751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.747 qpair failed and we were unable to recover it. 00:30:34.747 [2024-12-05 12:14:08.666857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.747 [2024-12-05 12:14:08.666889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.747 qpair failed and we were unable to recover it. 
00:30:34.747 [2024-12-05 12:14:08.666994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.747 [2024-12-05 12:14:08.667027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.747 qpair failed and we were unable to recover it. 00:30:34.747 [2024-12-05 12:14:08.667268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.747 [2024-12-05 12:14:08.667301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.747 qpair failed and we were unable to recover it. 00:30:34.747 [2024-12-05 12:14:08.667539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.747 [2024-12-05 12:14:08.667572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.747 qpair failed and we were unable to recover it. 00:30:34.747 [2024-12-05 12:14:08.667683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.747 [2024-12-05 12:14:08.667715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.747 qpair failed and we were unable to recover it. 00:30:34.747 [2024-12-05 12:14:08.667821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.747 [2024-12-05 12:14:08.667854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.747 qpair failed and we were unable to recover it. 
00:30:34.747 [2024-12-05 12:14:08.668028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.747 [2024-12-05 12:14:08.668060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.747 qpair failed and we were unable to recover it. 00:30:34.747 [2024-12-05 12:14:08.668173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.747 [2024-12-05 12:14:08.668205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.747 qpair failed and we were unable to recover it. 00:30:34.747 [2024-12-05 12:14:08.668338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.747 [2024-12-05 12:14:08.668397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.747 qpair failed and we were unable to recover it. 00:30:34.747 [2024-12-05 12:14:08.668520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.748 [2024-12-05 12:14:08.668554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.748 qpair failed and we were unable to recover it. 00:30:34.748 [2024-12-05 12:14:08.668754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.748 [2024-12-05 12:14:08.668786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.748 qpair failed and we were unable to recover it. 
00:30:34.748 [2024-12-05 12:14:08.668905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.748 [2024-12-05 12:14:08.668937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.748 qpair failed and we were unable to recover it. 00:30:34.748 [2024-12-05 12:14:08.669053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.748 [2024-12-05 12:14:08.669085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.748 qpair failed and we were unable to recover it. 00:30:34.748 [2024-12-05 12:14:08.669219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.748 [2024-12-05 12:14:08.669250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.748 qpair failed and we were unable to recover it. 00:30:34.748 [2024-12-05 12:14:08.669440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.748 [2024-12-05 12:14:08.669472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.748 qpair failed and we were unable to recover it. 00:30:34.748 [2024-12-05 12:14:08.669591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.748 [2024-12-05 12:14:08.669623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.748 qpair failed and we were unable to recover it. 
00:30:34.748 [2024-12-05 12:14:08.669749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.748 [2024-12-05 12:14:08.669781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.748 qpair failed and we were unable to recover it. 00:30:34.748 [2024-12-05 12:14:08.669914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.748 [2024-12-05 12:14:08.669946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.748 qpair failed and we were unable to recover it. 00:30:34.748 [2024-12-05 12:14:08.670128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.748 [2024-12-05 12:14:08.670159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.748 qpair failed and we were unable to recover it. 00:30:34.748 [2024-12-05 12:14:08.670276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.748 [2024-12-05 12:14:08.670308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.748 qpair failed and we were unable to recover it. 00:30:34.748 [2024-12-05 12:14:08.670450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.748 [2024-12-05 12:14:08.670485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.748 qpair failed and we were unable to recover it. 
00:30:34.748 [2024-12-05 12:14:08.670598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.748 [2024-12-05 12:14:08.670629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.748 qpair failed and we were unable to recover it. 00:30:34.748 [2024-12-05 12:14:08.670745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.748 [2024-12-05 12:14:08.670776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.748 qpair failed and we were unable to recover it. 00:30:34.748 [2024-12-05 12:14:08.670878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.748 [2024-12-05 12:14:08.670916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.748 qpair failed and we were unable to recover it. 00:30:34.748 [2024-12-05 12:14:08.671098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.748 [2024-12-05 12:14:08.671131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.748 qpair failed and we were unable to recover it. 00:30:34.748 [2024-12-05 12:14:08.671242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.748 [2024-12-05 12:14:08.671274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.748 qpair failed and we were unable to recover it. 
00:30:34.748 [2024-12-05 12:14:08.671400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.748 [2024-12-05 12:14:08.671433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.748 qpair failed and we were unable to recover it. 00:30:34.748 [2024-12-05 12:14:08.671540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.748 [2024-12-05 12:14:08.671571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.748 qpair failed and we were unable to recover it. 00:30:34.748 [2024-12-05 12:14:08.671750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.748 [2024-12-05 12:14:08.671786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.748 qpair failed and we were unable to recover it. 00:30:34.748 [2024-12-05 12:14:08.672032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.748 [2024-12-05 12:14:08.672067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.748 qpair failed and we were unable to recover it. 00:30:34.748 [2024-12-05 12:14:08.672330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.748 [2024-12-05 12:14:08.672362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.748 qpair failed and we were unable to recover it. 
00:30:34.748 [2024-12-05 12:14:08.672567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.748 [2024-12-05 12:14:08.672600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.748 qpair failed and we were unable to recover it. 00:30:34.748 [2024-12-05 12:14:08.672718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.748 [2024-12-05 12:14:08.672751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.748 qpair failed and we were unable to recover it. 00:30:34.748 [2024-12-05 12:14:08.672931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.748 [2024-12-05 12:14:08.672962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.748 qpair failed and we were unable to recover it. 00:30:34.748 [2024-12-05 12:14:08.673152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.748 [2024-12-05 12:14:08.673184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.748 qpair failed and we were unable to recover it. 00:30:34.748 [2024-12-05 12:14:08.673310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.748 [2024-12-05 12:14:08.673341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.748 qpair failed and we were unable to recover it. 
00:30:34.748 [2024-12-05 12:14:08.673560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.748 [2024-12-05 12:14:08.673593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.748 qpair failed and we were unable to recover it. 00:30:34.748 [2024-12-05 12:14:08.673791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.748 [2024-12-05 12:14:08.673823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.748 qpair failed and we were unable to recover it. 00:30:34.748 [2024-12-05 12:14:08.673991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.748 [2024-12-05 12:14:08.674024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.748 qpair failed and we were unable to recover it. 00:30:34.748 [2024-12-05 12:14:08.674145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.748 [2024-12-05 12:14:08.674177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.748 qpair failed and we were unable to recover it. 00:30:34.748 [2024-12-05 12:14:08.674287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.749 [2024-12-05 12:14:08.674319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.749 qpair failed and we were unable to recover it. 
00:30:34.749 [2024-12-05 12:14:08.674457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.749 [2024-12-05 12:14:08.674491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.749 qpair failed and we were unable to recover it. 00:30:34.749 [2024-12-05 12:14:08.674759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.749 [2024-12-05 12:14:08.674791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.749 qpair failed and we were unable to recover it. 00:30:34.749 [2024-12-05 12:14:08.675049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.749 [2024-12-05 12:14:08.675080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.749 qpair failed and we were unable to recover it. 00:30:34.749 [2024-12-05 12:14:08.675191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.749 [2024-12-05 12:14:08.675223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.749 qpair failed and we were unable to recover it. 00:30:34.749 [2024-12-05 12:14:08.675347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.749 [2024-12-05 12:14:08.675389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.749 qpair failed and we were unable to recover it. 
00:30:34.749 [2024-12-05 12:14:08.675509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.749 [2024-12-05 12:14:08.675541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.749 qpair failed and we were unable to recover it. 00:30:34.749 [2024-12-05 12:14:08.675666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.749 [2024-12-05 12:14:08.675697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.749 qpair failed and we were unable to recover it. 00:30:34.749 [2024-12-05 12:14:08.675882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.749 [2024-12-05 12:14:08.675915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.749 qpair failed and we were unable to recover it. 00:30:34.749 [2024-12-05 12:14:08.676098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.749 [2024-12-05 12:14:08.676129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.749 qpair failed and we were unable to recover it. 00:30:34.749 [2024-12-05 12:14:08.676399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.749 [2024-12-05 12:14:08.676432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.749 qpair failed and we were unable to recover it. 
00:30:34.749 [2024-12-05 12:14:08.676623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.749 [2024-12-05 12:14:08.676655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.749 qpair failed and we were unable to recover it. 00:30:34.749 [2024-12-05 12:14:08.676781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.749 [2024-12-05 12:14:08.676813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.749 qpair failed and we were unable to recover it. 00:30:34.749 [2024-12-05 12:14:08.676989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.749 [2024-12-05 12:14:08.677020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.749 qpair failed and we were unable to recover it. 00:30:34.749 [2024-12-05 12:14:08.677137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.749 [2024-12-05 12:14:08.677170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.749 qpair failed and we were unable to recover it. 00:30:34.749 [2024-12-05 12:14:08.677360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.749 [2024-12-05 12:14:08.677404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.749 qpair failed and we were unable to recover it. 
00:30:34.749 [2024-12-05 12:14:08.677661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.749 [2024-12-05 12:14:08.677694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.749 qpair failed and we were unable to recover it. 00:30:34.749 [2024-12-05 12:14:08.677872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.749 [2024-12-05 12:14:08.677904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.749 qpair failed and we were unable to recover it. 00:30:34.749 [2024-12-05 12:14:08.678085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.749 [2024-12-05 12:14:08.678119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.749 qpair failed and we were unable to recover it. 00:30:34.749 [2024-12-05 12:14:08.678360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.749 [2024-12-05 12:14:08.678403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.749 qpair failed and we were unable to recover it. 00:30:34.749 [2024-12-05 12:14:08.678582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.749 [2024-12-05 12:14:08.678614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.749 qpair failed and we were unable to recover it. 
00:30:34.749 [2024-12-05 12:14:08.678732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.749 [2024-12-05 12:14:08.678763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.749 qpair failed and we were unable to recover it. 00:30:34.749 [2024-12-05 12:14:08.678938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.749 [2024-12-05 12:14:08.678970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.749 qpair failed and we were unable to recover it. 00:30:34.749 [2024-12-05 12:14:08.679145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.749 [2024-12-05 12:14:08.679177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.749 qpair failed and we were unable to recover it. 00:30:34.749 [2024-12-05 12:14:08.679442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.749 [2024-12-05 12:14:08.679480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.749 qpair failed and we were unable to recover it. 00:30:34.749 [2024-12-05 12:14:08.679692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.749 [2024-12-05 12:14:08.679724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.749 qpair failed and we were unable to recover it. 
00:30:34.749 [2024-12-05 12:14:08.679908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.750 [2024-12-05 12:14:08.679939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.750 qpair failed and we were unable to recover it. 00:30:34.750 [2024-12-05 12:14:08.680068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.750 [2024-12-05 12:14:08.680100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.750 qpair failed and we were unable to recover it. 00:30:34.750 [2024-12-05 12:14:08.680376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.750 [2024-12-05 12:14:08.680409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.750 qpair failed and we were unable to recover it. 00:30:34.750 [2024-12-05 12:14:08.680519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.750 [2024-12-05 12:14:08.680550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.750 qpair failed and we were unable to recover it. 00:30:34.750 [2024-12-05 12:14:08.680721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.750 [2024-12-05 12:14:08.680752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.750 qpair failed and we were unable to recover it. 
00:30:34.750 [2024-12-05 12:14:08.680935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.750 [2024-12-05 12:14:08.680967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.750 qpair failed and we were unable to recover it. 00:30:34.750 [2024-12-05 12:14:08.681151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.750 [2024-12-05 12:14:08.681182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.750 qpair failed and we were unable to recover it. 00:30:34.750 [2024-12-05 12:14:08.681381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.750 [2024-12-05 12:14:08.681413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.750 qpair failed and we were unable to recover it. 00:30:34.750 [2024-12-05 12:14:08.681600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.750 [2024-12-05 12:14:08.681631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.750 qpair failed and we were unable to recover it. 00:30:34.750 [2024-12-05 12:14:08.681762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.750 [2024-12-05 12:14:08.681793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.750 qpair failed and we were unable to recover it. 
00:30:34.750 [2024-12-05 12:14:08.681989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.750 [2024-12-05 12:14:08.682021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.750 qpair failed and we were unable to recover it. 00:30:34.750 [2024-12-05 12:14:08.682159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.750 [2024-12-05 12:14:08.682193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.750 qpair failed and we were unable to recover it. 00:30:34.750 [2024-12-05 12:14:08.682404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.750 [2024-12-05 12:14:08.682438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.750 qpair failed and we were unable to recover it. 00:30:34.750 [2024-12-05 12:14:08.682739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.750 [2024-12-05 12:14:08.682773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.750 qpair failed and we were unable to recover it. 00:30:34.750 [2024-12-05 12:14:08.682896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.750 [2024-12-05 12:14:08.682928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.750 qpair failed and we were unable to recover it. 
00:30:34.750 [2024-12-05 12:14:08.683106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.750 [2024-12-05 12:14:08.683137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.750 qpair failed and we were unable to recover it. 00:30:34.750 [2024-12-05 12:14:08.683267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.750 [2024-12-05 12:14:08.683300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.750 qpair failed and we were unable to recover it. 00:30:34.750 [2024-12-05 12:14:08.683557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.750 [2024-12-05 12:14:08.683589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.750 qpair failed and we were unable to recover it. 00:30:34.750 [2024-12-05 12:14:08.683706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.750 [2024-12-05 12:14:08.683739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.750 qpair failed and we were unable to recover it. 00:30:34.750 [2024-12-05 12:14:08.683861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.750 [2024-12-05 12:14:08.683892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.750 qpair failed and we were unable to recover it. 
00:30:34.750 [2024-12-05 12:14:08.684006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.750 [2024-12-05 12:14:08.684038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.750 qpair failed and we were unable to recover it. 00:30:34.750 [2024-12-05 12:14:08.684147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.750 [2024-12-05 12:14:08.684178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.750 qpair failed and we were unable to recover it. 00:30:34.750 [2024-12-05 12:14:08.684391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.750 [2024-12-05 12:14:08.684426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.750 qpair failed and we were unable to recover it. 00:30:34.750 [2024-12-05 12:14:08.684607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.750 [2024-12-05 12:14:08.684640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.750 qpair failed and we were unable to recover it. 00:30:34.750 [2024-12-05 12:14:08.684822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.750 [2024-12-05 12:14:08.684854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.750 qpair failed and we were unable to recover it. 
00:30:34.750 [2024-12-05 12:14:08.685104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.750 [2024-12-05 12:14:08.685142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.750 qpair failed and we were unable to recover it. 00:30:34.750 [2024-12-05 12:14:08.685260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.750 [2024-12-05 12:14:08.685293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.750 qpair failed and we were unable to recover it. 00:30:34.750 [2024-12-05 12:14:08.685426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.750 [2024-12-05 12:14:08.685459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.750 qpair failed and we were unable to recover it. 00:30:34.750 [2024-12-05 12:14:08.685564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.750 [2024-12-05 12:14:08.685596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.750 qpair failed and we were unable to recover it. 00:30:34.750 [2024-12-05 12:14:08.685708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.750 [2024-12-05 12:14:08.685740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.750 qpair failed and we were unable to recover it. 
00:30:34.750 [2024-12-05 12:14:08.685933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.750 [2024-12-05 12:14:08.685964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.750 qpair failed and we were unable to recover it. 00:30:34.750 [2024-12-05 12:14:08.686073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.750 [2024-12-05 12:14:08.686104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.750 qpair failed and we were unable to recover it. 00:30:34.750 [2024-12-05 12:14:08.686299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.751 [2024-12-05 12:14:08.686331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.751 qpair failed and we were unable to recover it. 00:30:34.751 [2024-12-05 12:14:08.686451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.751 [2024-12-05 12:14:08.686484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.751 qpair failed and we were unable to recover it. 00:30:34.751 [2024-12-05 12:14:08.686700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.751 [2024-12-05 12:14:08.686731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.751 qpair failed and we were unable to recover it. 
00:30:34.751 [2024-12-05 12:14:08.686914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.751 [2024-12-05 12:14:08.686945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.751 qpair failed and we were unable to recover it. 00:30:34.751 [2024-12-05 12:14:08.687150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.751 [2024-12-05 12:14:08.687182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.751 qpair failed and we were unable to recover it. 00:30:34.751 [2024-12-05 12:14:08.687365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.751 [2024-12-05 12:14:08.687409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.751 qpair failed and we were unable to recover it. 00:30:34.751 [2024-12-05 12:14:08.687596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.751 [2024-12-05 12:14:08.687630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.751 qpair failed and we were unable to recover it. 00:30:34.751 [2024-12-05 12:14:08.687792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.751 [2024-12-05 12:14:08.687858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.751 qpair failed and we were unable to recover it. 
00:30:34.751 [2024-12-05 12:14:08.687989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.751 [2024-12-05 12:14:08.688025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.751 qpair failed and we were unable to recover it. 00:30:34.751 [2024-12-05 12:14:08.688200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.751 [2024-12-05 12:14:08.688238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.751 qpair failed and we were unable to recover it. 00:30:34.751 [2024-12-05 12:14:08.688418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.751 [2024-12-05 12:14:08.688453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.751 qpair failed and we were unable to recover it. 00:30:34.751 [2024-12-05 12:14:08.688567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.751 [2024-12-05 12:14:08.688606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.751 qpair failed and we were unable to recover it. 00:30:34.751 [2024-12-05 12:14:08.688786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.751 [2024-12-05 12:14:08.688817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.751 qpair failed and we were unable to recover it. 
00:30:34.751 [2024-12-05 12:14:08.688998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.751 [2024-12-05 12:14:08.689040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.751 qpair failed and we were unable to recover it. 00:30:34.751 [2024-12-05 12:14:08.689310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.751 [2024-12-05 12:14:08.689341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.751 qpair failed and we were unable to recover it. 00:30:34.751 [2024-12-05 12:14:08.689477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.751 [2024-12-05 12:14:08.689507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.751 qpair failed and we were unable to recover it. 00:30:34.751 [2024-12-05 12:14:08.689629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.751 [2024-12-05 12:14:08.689659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.751 qpair failed and we were unable to recover it. 00:30:34.751 [2024-12-05 12:14:08.689829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.751 [2024-12-05 12:14:08.689858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.751 qpair failed and we were unable to recover it. 
00:30:34.751 [2024-12-05 12:14:08.689970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.751 [2024-12-05 12:14:08.690000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.751 qpair failed and we were unable to recover it. 00:30:34.751 [2024-12-05 12:14:08.690138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.751 [2024-12-05 12:14:08.690171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.751 qpair failed and we were unable to recover it. 00:30:34.751 [2024-12-05 12:14:08.690353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.751 [2024-12-05 12:14:08.690396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.751 qpair failed and we were unable to recover it. 00:30:34.751 [2024-12-05 12:14:08.690582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.751 [2024-12-05 12:14:08.690612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.751 qpair failed and we were unable to recover it. 00:30:34.751 [2024-12-05 12:14:08.690733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.751 [2024-12-05 12:14:08.690763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.751 qpair failed and we were unable to recover it. 
00:30:34.751 [2024-12-05 12:14:08.690876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.751 [2024-12-05 12:14:08.690907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.751 qpair failed and we were unable to recover it. 00:30:34.751 [2024-12-05 12:14:08.691028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.751 [2024-12-05 12:14:08.691058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.751 qpair failed and we were unable to recover it. 00:30:34.751 [2024-12-05 12:14:08.691229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.751 [2024-12-05 12:14:08.691260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.751 qpair failed and we were unable to recover it. 00:30:34.751 [2024-12-05 12:14:08.691387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.751 [2024-12-05 12:14:08.691421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.751 qpair failed and we were unable to recover it. 00:30:34.751 [2024-12-05 12:14:08.691597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.751 [2024-12-05 12:14:08.691628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.751 qpair failed and we were unable to recover it. 
00:30:34.751 [2024-12-05 12:14:08.691754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.751 [2024-12-05 12:14:08.691786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.751 qpair failed and we were unable to recover it. 00:30:34.751 [2024-12-05 12:14:08.691978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.751 [2024-12-05 12:14:08.692010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.751 qpair failed and we were unable to recover it. 00:30:34.751 [2024-12-05 12:14:08.692199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.751 [2024-12-05 12:14:08.692230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.751 qpair failed and we were unable to recover it. 00:30:34.751 [2024-12-05 12:14:08.692498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.751 [2024-12-05 12:14:08.692530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.751 qpair failed and we were unable to recover it. 00:30:34.751 [2024-12-05 12:14:08.692632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.751 [2024-12-05 12:14:08.692662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.751 qpair failed and we were unable to recover it. 
00:30:34.752 [2024-12-05 12:14:08.692775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.752 [2024-12-05 12:14:08.692805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.752 qpair failed and we were unable to recover it. 00:30:34.752 [2024-12-05 12:14:08.692929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.752 [2024-12-05 12:14:08.692960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.752 qpair failed and we were unable to recover it. 00:30:34.752 [2024-12-05 12:14:08.693148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.752 [2024-12-05 12:14:08.693179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.752 qpair failed and we were unable to recover it. 00:30:34.752 [2024-12-05 12:14:08.693305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.752 [2024-12-05 12:14:08.693335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.752 qpair failed and we were unable to recover it. 00:30:34.752 [2024-12-05 12:14:08.693525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.752 [2024-12-05 12:14:08.693557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.752 qpair failed and we were unable to recover it. 
00:30:34.752 [2024-12-05 12:14:08.693666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.752 [2024-12-05 12:14:08.693695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.752 qpair failed and we were unable to recover it. 00:30:34.752 [2024-12-05 12:14:08.693821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.752 [2024-12-05 12:14:08.693852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.752 qpair failed and we were unable to recover it. 00:30:34.752 [2024-12-05 12:14:08.694094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.752 [2024-12-05 12:14:08.694126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.752 qpair failed and we were unable to recover it. 00:30:34.752 [2024-12-05 12:14:08.694388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.752 [2024-12-05 12:14:08.694421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.752 qpair failed and we were unable to recover it. 00:30:34.752 [2024-12-05 12:14:08.694547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.752 [2024-12-05 12:14:08.694580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.752 qpair failed and we were unable to recover it. 
00:30:34.752 [2024-12-05 12:14:08.694822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.752 [2024-12-05 12:14:08.694854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.752 qpair failed and we were unable to recover it. 00:30:34.752 [2024-12-05 12:14:08.695036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.752 [2024-12-05 12:14:08.695067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.752 qpair failed and we were unable to recover it. 00:30:34.752 [2024-12-05 12:14:08.695180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.752 [2024-12-05 12:14:08.695211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.752 qpair failed and we were unable to recover it. 00:30:34.752 [2024-12-05 12:14:08.695339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.752 [2024-12-05 12:14:08.695376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.752 qpair failed and we were unable to recover it. 00:30:34.752 [2024-12-05 12:14:08.695495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.752 [2024-12-05 12:14:08.695525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.752 qpair failed and we were unable to recover it. 
00:30:34.752 [2024-12-05 12:14:08.695719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.752 [2024-12-05 12:14:08.695750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.752 qpair failed and we were unable to recover it. 00:30:34.752 [2024-12-05 12:14:08.695935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.752 [2024-12-05 12:14:08.695966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.752 qpair failed and we were unable to recover it. 00:30:34.752 [2024-12-05 12:14:08.696088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.752 [2024-12-05 12:14:08.696119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.752 qpair failed and we were unable to recover it. 00:30:34.752 [2024-12-05 12:14:08.696341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.752 [2024-12-05 12:14:08.696400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.752 qpair failed and we were unable to recover it. 00:30:34.752 [2024-12-05 12:14:08.696603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.752 [2024-12-05 12:14:08.696635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.752 qpair failed and we were unable to recover it. 
00:30:34.752 [2024-12-05 12:14:08.696759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.752 [2024-12-05 12:14:08.696788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.752 qpair failed and we were unable to recover it. 00:30:34.752 [2024-12-05 12:14:08.696917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.752 [2024-12-05 12:14:08.696947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.752 qpair failed and we were unable to recover it. 00:30:34.752 [2024-12-05 12:14:08.697075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.752 [2024-12-05 12:14:08.697107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.752 qpair failed and we were unable to recover it. 00:30:34.752 [2024-12-05 12:14:08.697225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.752 [2024-12-05 12:14:08.697255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.752 qpair failed and we were unable to recover it. 00:30:34.752 [2024-12-05 12:14:08.697444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.752 [2024-12-05 12:14:08.697477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.752 qpair failed and we were unable to recover it. 
00:30:34.752 [2024-12-05 12:14:08.697594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.752 [2024-12-05 12:14:08.697627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.752 qpair failed and we were unable to recover it. 00:30:34.752 [2024-12-05 12:14:08.697741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.752 [2024-12-05 12:14:08.697774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.752 qpair failed and we were unable to recover it. 00:30:34.752 [2024-12-05 12:14:08.697883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.752 [2024-12-05 12:14:08.697914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.752 qpair failed and we were unable to recover it. 00:30:34.752 [2024-12-05 12:14:08.698151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.752 [2024-12-05 12:14:08.698188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.752 qpair failed and we were unable to recover it. 00:30:34.752 [2024-12-05 12:14:08.698307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.752 [2024-12-05 12:14:08.698336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.752 qpair failed and we were unable to recover it. 
00:30:34.752 [2024-12-05 12:14:08.698450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.752 [2024-12-05 12:14:08.698481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.752 qpair failed and we were unable to recover it. 00:30:34.753 [2024-12-05 12:14:08.698599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.753 [2024-12-05 12:14:08.698630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.753 qpair failed and we were unable to recover it. 00:30:34.753 [2024-12-05 12:14:08.698732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.753 [2024-12-05 12:14:08.698762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.753 qpair failed and we were unable to recover it. 00:30:34.753 [2024-12-05 12:14:08.698882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.753 [2024-12-05 12:14:08.698913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.753 qpair failed and we were unable to recover it. 00:30:34.753 [2024-12-05 12:14:08.699112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.753 [2024-12-05 12:14:08.699143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.753 qpair failed and we were unable to recover it. 
00:30:34.753 [2024-12-05 12:14:08.699317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.753 [2024-12-05 12:14:08.699349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.753 qpair failed and we were unable to recover it. 00:30:34.753 [2024-12-05 12:14:08.699502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.753 [2024-12-05 12:14:08.699534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.753 qpair failed and we were unable to recover it. 00:30:34.753 [2024-12-05 12:14:08.699649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.753 [2024-12-05 12:14:08.699679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.753 qpair failed and we were unable to recover it. 00:30:34.753 [2024-12-05 12:14:08.699852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.753 [2024-12-05 12:14:08.699884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.753 qpair failed and we were unable to recover it. 00:30:34.753 [2024-12-05 12:14:08.700058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.753 [2024-12-05 12:14:08.700091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.753 qpair failed and we were unable to recover it. 
00:30:34.753 [2024-12-05 12:14:08.700266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.753 [2024-12-05 12:14:08.700299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.753 qpair failed and we were unable to recover it.
00:30:34.753 [2024-12-05 12:14:08.700503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.753 [2024-12-05 12:14:08.700535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.753 qpair failed and we were unable to recover it.
00:30:34.753 [2024-12-05 12:14:08.700675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.753 [2024-12-05 12:14:08.700707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.753 qpair failed and we were unable to recover it.
00:30:34.753 [2024-12-05 12:14:08.700899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.753 [2024-12-05 12:14:08.700930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.753 qpair failed and we were unable to recover it.
00:30:34.753 [2024-12-05 12:14:08.701120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.753 [2024-12-05 12:14:08.701152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.753 qpair failed and we were unable to recover it.
00:30:34.753 [2024-12-05 12:14:08.701393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.753 [2024-12-05 12:14:08.701426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.753 qpair failed and we were unable to recover it.
00:30:34.753 [2024-12-05 12:14:08.701556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.753 [2024-12-05 12:14:08.701588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.753 qpair failed and we were unable to recover it.
00:30:34.753 [2024-12-05 12:14:08.701758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.753 [2024-12-05 12:14:08.701789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.753 qpair failed and we were unable to recover it.
00:30:34.753 [2024-12-05 12:14:08.701903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.753 [2024-12-05 12:14:08.701933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.753 qpair failed and we were unable to recover it.
00:30:34.753 [2024-12-05 12:14:08.702163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.753 [2024-12-05 12:14:08.702194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.753 qpair failed and we were unable to recover it.
00:30:34.753 [2024-12-05 12:14:08.702416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.753 [2024-12-05 12:14:08.702448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.753 qpair failed and we were unable to recover it.
00:30:34.753 [2024-12-05 12:14:08.702636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.753 [2024-12-05 12:14:08.702668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.753 qpair failed and we were unable to recover it.
00:30:34.753 [2024-12-05 12:14:08.702906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.753 [2024-12-05 12:14:08.702938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.753 qpair failed and we were unable to recover it.
00:30:34.753 [2024-12-05 12:14:08.703196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.753 [2024-12-05 12:14:08.703228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.753 qpair failed and we were unable to recover it.
00:30:34.753 [2024-12-05 12:14:08.703403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.753 [2024-12-05 12:14:08.703436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.753 qpair failed and we were unable to recover it.
00:30:34.753 [2024-12-05 12:14:08.703569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.753 [2024-12-05 12:14:08.703609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.753 qpair failed and we were unable to recover it.
00:30:34.753 [2024-12-05 12:14:08.703741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.753 [2024-12-05 12:14:08.703772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.753 qpair failed and we were unable to recover it.
00:30:34.753 [2024-12-05 12:14:08.703883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.753 [2024-12-05 12:14:08.703913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.753 qpair failed and we were unable to recover it.
00:30:34.753 [2024-12-05 12:14:08.704055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.753 [2024-12-05 12:14:08.704085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.753 qpair failed and we were unable to recover it.
00:30:34.753 [2024-12-05 12:14:08.704260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.753 [2024-12-05 12:14:08.704291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.753 qpair failed and we were unable to recover it.
00:30:34.753 [2024-12-05 12:14:08.704534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.753 [2024-12-05 12:14:08.704567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.753 qpair failed and we were unable to recover it.
00:30:34.753 [2024-12-05 12:14:08.704686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.753 [2024-12-05 12:14:08.704716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.753 qpair failed and we were unable to recover it.
00:30:34.753 [2024-12-05 12:14:08.704821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.753 [2024-12-05 12:14:08.704851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.753 qpair failed and we were unable to recover it.
00:30:34.753 [2024-12-05 12:14:08.704970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.754 [2024-12-05 12:14:08.705005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.754 qpair failed and we were unable to recover it.
00:30:34.754 [2024-12-05 12:14:08.705121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.754 [2024-12-05 12:14:08.705153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.754 qpair failed and we were unable to recover it.
00:30:34.754 [2024-12-05 12:14:08.705345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.754 [2024-12-05 12:14:08.705389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.754 qpair failed and we were unable to recover it.
00:30:34.754 [2024-12-05 12:14:08.705500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.754 [2024-12-05 12:14:08.705532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.754 qpair failed and we were unable to recover it.
00:30:34.754 [2024-12-05 12:14:08.705641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.754 [2024-12-05 12:14:08.705673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.754 qpair failed and we were unable to recover it.
00:30:34.754 [2024-12-05 12:14:08.705856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.754 [2024-12-05 12:14:08.705888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.754 qpair failed and we were unable to recover it.
00:30:34.754 [2024-12-05 12:14:08.706016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.754 [2024-12-05 12:14:08.706049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.754 qpair failed and we were unable to recover it.
00:30:34.754 [2024-12-05 12:14:08.706167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.754 [2024-12-05 12:14:08.706199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.754 qpair failed and we were unable to recover it.
00:30:34.754 [2024-12-05 12:14:08.706319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.754 [2024-12-05 12:14:08.706351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.754 qpair failed and we were unable to recover it.
00:30:34.754 [2024-12-05 12:14:08.706533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.754 [2024-12-05 12:14:08.706564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.754 qpair failed and we were unable to recover it.
00:30:34.754 [2024-12-05 12:14:08.706678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.754 [2024-12-05 12:14:08.706709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.754 qpair failed and we were unable to recover it.
00:30:34.754 [2024-12-05 12:14:08.706901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.754 [2024-12-05 12:14:08.706933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.754 qpair failed and we were unable to recover it.
00:30:34.754 [2024-12-05 12:14:08.707045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.754 [2024-12-05 12:14:08.707075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.754 qpair failed and we were unable to recover it.
00:30:34.754 [2024-12-05 12:14:08.707200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.754 [2024-12-05 12:14:08.707232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.754 qpair failed and we were unable to recover it.
00:30:34.754 [2024-12-05 12:14:08.707410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.754 [2024-12-05 12:14:08.707443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.754 qpair failed and we were unable to recover it.
00:30:34.754 [2024-12-05 12:14:08.707719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.754 [2024-12-05 12:14:08.707750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.754 qpair failed and we were unable to recover it.
00:30:34.754 [2024-12-05 12:14:08.707942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.754 [2024-12-05 12:14:08.707975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.754 qpair failed and we were unable to recover it.
00:30:34.754 [2024-12-05 12:14:08.708097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.754 [2024-12-05 12:14:08.708130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.754 qpair failed and we were unable to recover it.
00:30:34.754 [2024-12-05 12:14:08.708239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.754 [2024-12-05 12:14:08.708272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.754 qpair failed and we were unable to recover it.
00:30:34.754 [2024-12-05 12:14:08.708487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.754 [2024-12-05 12:14:08.708522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.754 qpair failed and we were unable to recover it.
00:30:34.754 [2024-12-05 12:14:08.708638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.754 [2024-12-05 12:14:08.708670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.754 qpair failed and we were unable to recover it.
00:30:34.754 [2024-12-05 12:14:08.708782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.754 [2024-12-05 12:14:08.708814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.754 qpair failed and we were unable to recover it.
00:30:34.754 [2024-12-05 12:14:08.708922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.754 [2024-12-05 12:14:08.708954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.754 qpair failed and we were unable to recover it.
00:30:34.754 [2024-12-05 12:14:08.709067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.754 [2024-12-05 12:14:08.709099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.754 qpair failed and we were unable to recover it.
00:30:34.754 [2024-12-05 12:14:08.709218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.754 [2024-12-05 12:14:08.709250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.754 qpair failed and we were unable to recover it.
00:30:34.754 [2024-12-05 12:14:08.709382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.754 [2024-12-05 12:14:08.709415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.754 qpair failed and we were unable to recover it.
00:30:34.754 [2024-12-05 12:14:08.709531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.754 [2024-12-05 12:14:08.709562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.754 qpair failed and we were unable to recover it.
00:30:34.754 [2024-12-05 12:14:08.709668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.754 [2024-12-05 12:14:08.709700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.755 qpair failed and we were unable to recover it.
00:30:34.755 [2024-12-05 12:14:08.709829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.755 [2024-12-05 12:14:08.709862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.755 qpair failed and we were unable to recover it.
00:30:34.755 [2024-12-05 12:14:08.709968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.755 [2024-12-05 12:14:08.710000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.755 qpair failed and we were unable to recover it.
00:30:34.755 [2024-12-05 12:14:08.710131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.755 [2024-12-05 12:14:08.710163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.755 qpair failed and we were unable to recover it.
00:30:34.755 [2024-12-05 12:14:08.710356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.755 [2024-12-05 12:14:08.710397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.755 qpair failed and we were unable to recover it.
00:30:34.755 [2024-12-05 12:14:08.710511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.755 [2024-12-05 12:14:08.710543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.755 qpair failed and we were unable to recover it.
00:30:34.755 [2024-12-05 12:14:08.710648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.755 [2024-12-05 12:14:08.710685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.755 qpair failed and we were unable to recover it.
00:30:34.755 [2024-12-05 12:14:08.710860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.755 [2024-12-05 12:14:08.710891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.755 qpair failed and we were unable to recover it.
00:30:34.755 [2024-12-05 12:14:08.711029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.755 [2024-12-05 12:14:08.711061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.755 qpair failed and we were unable to recover it.
00:30:34.755 [2024-12-05 12:14:08.711296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.755 [2024-12-05 12:14:08.711329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.755 qpair failed and we were unable to recover it.
00:30:34.755 [2024-12-05 12:14:08.711448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.755 [2024-12-05 12:14:08.711480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.755 qpair failed and we were unable to recover it.
00:30:34.755 [2024-12-05 12:14:08.711664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.755 [2024-12-05 12:14:08.711696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.755 qpair failed and we were unable to recover it.
00:30:34.755 [2024-12-05 12:14:08.711884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.755 [2024-12-05 12:14:08.711916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.755 qpair failed and we were unable to recover it.
00:30:34.755 [2024-12-05 12:14:08.712051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.755 [2024-12-05 12:14:08.712084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.755 qpair failed and we were unable to recover it.
00:30:34.755 [2024-12-05 12:14:08.712192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.755 [2024-12-05 12:14:08.712225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.755 qpair failed and we were unable to recover it.
00:30:34.755 [2024-12-05 12:14:08.712415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.755 [2024-12-05 12:14:08.712448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.755 qpair failed and we were unable to recover it.
00:30:34.755 [2024-12-05 12:14:08.712562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.755 [2024-12-05 12:14:08.712594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.755 qpair failed and we were unable to recover it.
00:30:34.755 [2024-12-05 12:14:08.712777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.755 [2024-12-05 12:14:08.712808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.755 qpair failed and we were unable to recover it.
00:30:34.755 [2024-12-05 12:14:08.713067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.755 [2024-12-05 12:14:08.713098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.755 qpair failed and we were unable to recover it.
00:30:34.755 [2024-12-05 12:14:08.713207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.755 [2024-12-05 12:14:08.713239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.755 qpair failed and we were unable to recover it.
00:30:34.755 [2024-12-05 12:14:08.713349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.755 [2024-12-05 12:14:08.713388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.755 qpair failed and we were unable to recover it.
00:30:34.755 [2024-12-05 12:14:08.713606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.755 [2024-12-05 12:14:08.713639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.755 qpair failed and we were unable to recover it.
00:30:34.755 [2024-12-05 12:14:08.713744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.755 [2024-12-05 12:14:08.713783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.755 qpair failed and we were unable to recover it.
00:30:34.755 [2024-12-05 12:14:08.713983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.755 [2024-12-05 12:14:08.714016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.755 qpair failed and we were unable to recover it.
00:30:34.755 [2024-12-05 12:14:08.714208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.755 [2024-12-05 12:14:08.714241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.755 qpair failed and we were unable to recover it.
00:30:34.755 [2024-12-05 12:14:08.714507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.755 [2024-12-05 12:14:08.714539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.755 qpair failed and we were unable to recover it.
00:30:34.755 [2024-12-05 12:14:08.714720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.755 [2024-12-05 12:14:08.714752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.755 qpair failed and we were unable to recover it.
00:30:34.755 [2024-12-05 12:14:08.714919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.755 [2024-12-05 12:14:08.714951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.755 qpair failed and we were unable to recover it.
00:30:34.755 [2024-12-05 12:14:08.715080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.755 [2024-12-05 12:14:08.715112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.755 qpair failed and we were unable to recover it.
00:30:34.755 [2024-12-05 12:14:08.715395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.755 [2024-12-05 12:14:08.715427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.755 qpair failed and we were unable to recover it.
00:30:34.755 [2024-12-05 12:14:08.715664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.755 [2024-12-05 12:14:08.715695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.755 qpair failed and we were unable to recover it.
00:30:34.755 [2024-12-05 12:14:08.715814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.756 [2024-12-05 12:14:08.715846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.756 qpair failed and we were unable to recover it.
00:30:34.756 [2024-12-05 12:14:08.715964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.756 [2024-12-05 12:14:08.715994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.756 qpair failed and we were unable to recover it.
00:30:34.756 [2024-12-05 12:14:08.716114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.756 [2024-12-05 12:14:08.716153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.756 qpair failed and we were unable to recover it.
00:30:34.756 [2024-12-05 12:14:08.716271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.756 [2024-12-05 12:14:08.716302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.756 qpair failed and we were unable to recover it.
00:30:34.756 [2024-12-05 12:14:08.716421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.756 [2024-12-05 12:14:08.716454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.756 qpair failed and we were unable to recover it.
00:30:34.756 [2024-12-05 12:14:08.716661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.756 [2024-12-05 12:14:08.716695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.756 qpair failed and we were unable to recover it.
00:30:34.756 [2024-12-05 12:14:08.716882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.756 [2024-12-05 12:14:08.716914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.756 qpair failed and we were unable to recover it.
00:30:34.756 [2024-12-05 12:14:08.717098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.756 [2024-12-05 12:14:08.717130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.756 qpair failed and we were unable to recover it.
00:30:34.756 [2024-12-05 12:14:08.717383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.756 [2024-12-05 12:14:08.717416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.756 qpair failed and we were unable to recover it.
00:30:34.756 [2024-12-05 12:14:08.717518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.756 [2024-12-05 12:14:08.717548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.756 qpair failed and we were unable to recover it.
00:30:34.756 [2024-12-05 12:14:08.717697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.756 [2024-12-05 12:14:08.717728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.756 qpair failed and we were unable to recover it.
00:30:34.756 [2024-12-05 12:14:08.717857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.756 [2024-12-05 12:14:08.717886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.756 qpair failed and we were unable to recover it.
00:30:34.756 [2024-12-05 12:14:08.718065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.756 [2024-12-05 12:14:08.718096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.756 qpair failed and we were unable to recover it.
00:30:34.756 [2024-12-05 12:14:08.718221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.756 [2024-12-05 12:14:08.718253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.756 qpair failed and we were unable to recover it.
00:30:34.756 [2024-12-05 12:14:08.718492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.756 [2024-12-05 12:14:08.718524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.756 qpair failed and we were unable to recover it. 00:30:34.756 [2024-12-05 12:14:08.718786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.756 [2024-12-05 12:14:08.718817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.756 qpair failed and we were unable to recover it. 00:30:34.756 [2024-12-05 12:14:08.718984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.756 [2024-12-05 12:14:08.719055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.756 qpair failed and we were unable to recover it. 00:30:34.756 [2024-12-05 12:14:08.719269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.756 [2024-12-05 12:14:08.719306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.756 qpair failed and we were unable to recover it. 00:30:34.756 [2024-12-05 12:14:08.719499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.756 [2024-12-05 12:14:08.719534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.756 qpair failed and we were unable to recover it. 
00:30:34.756 [2024-12-05 12:14:08.719650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.756 [2024-12-05 12:14:08.719682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.756 qpair failed and we were unable to recover it. 00:30:34.756 [2024-12-05 12:14:08.719869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.756 [2024-12-05 12:14:08.719901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.756 qpair failed and we were unable to recover it. 00:30:34.756 [2024-12-05 12:14:08.720040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.756 [2024-12-05 12:14:08.720071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.756 qpair failed and we were unable to recover it. 00:30:34.756 [2024-12-05 12:14:08.720310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.756 [2024-12-05 12:14:08.720342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.756 qpair failed and we were unable to recover it. 00:30:34.756 [2024-12-05 12:14:08.720526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.756 [2024-12-05 12:14:08.720559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.756 qpair failed and we were unable to recover it. 
00:30:34.756 [2024-12-05 12:14:08.720745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.756 [2024-12-05 12:14:08.720777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.756 qpair failed and we were unable to recover it. 00:30:34.756 [2024-12-05 12:14:08.721015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.756 [2024-12-05 12:14:08.721046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.756 qpair failed and we were unable to recover it. 00:30:34.756 [2024-12-05 12:14:08.721232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.756 [2024-12-05 12:14:08.721264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.756 qpair failed and we were unable to recover it. 00:30:34.756 [2024-12-05 12:14:08.721379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.756 [2024-12-05 12:14:08.721413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.756 qpair failed and we were unable to recover it. 00:30:34.756 [2024-12-05 12:14:08.721600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.756 [2024-12-05 12:14:08.721632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.756 qpair failed and we were unable to recover it. 
00:30:34.756 [2024-12-05 12:14:08.721764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.756 [2024-12-05 12:14:08.721806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.756 qpair failed and we were unable to recover it. 00:30:34.756 [2024-12-05 12:14:08.721992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.756 [2024-12-05 12:14:08.722023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.756 qpair failed and we were unable to recover it. 00:30:34.756 [2024-12-05 12:14:08.722144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.756 [2024-12-05 12:14:08.722175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.756 qpair failed and we were unable to recover it. 00:30:34.757 [2024-12-05 12:14:08.722351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.757 [2024-12-05 12:14:08.722394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.757 qpair failed and we were unable to recover it. 00:30:34.757 [2024-12-05 12:14:08.722521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.757 [2024-12-05 12:14:08.722550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.757 qpair failed and we were unable to recover it. 
00:30:34.757 [2024-12-05 12:14:08.722683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.757 [2024-12-05 12:14:08.722714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.757 qpair failed and we were unable to recover it. 00:30:34.757 [2024-12-05 12:14:08.722952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.757 [2024-12-05 12:14:08.722984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.757 qpair failed and we were unable to recover it. 00:30:34.757 [2024-12-05 12:14:08.723094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.757 [2024-12-05 12:14:08.723124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.757 qpair failed and we were unable to recover it. 00:30:34.757 [2024-12-05 12:14:08.723236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.757 [2024-12-05 12:14:08.723267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.757 qpair failed and we were unable to recover it. 00:30:34.757 [2024-12-05 12:14:08.723454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.757 [2024-12-05 12:14:08.723486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.757 qpair failed and we were unable to recover it. 
00:30:34.757 [2024-12-05 12:14:08.723592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.757 [2024-12-05 12:14:08.723623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.757 qpair failed and we were unable to recover it. 00:30:34.757 [2024-12-05 12:14:08.723751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.757 [2024-12-05 12:14:08.723783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.757 qpair failed and we were unable to recover it. 00:30:34.757 [2024-12-05 12:14:08.723962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.757 [2024-12-05 12:14:08.723993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.757 qpair failed and we were unable to recover it. 00:30:34.757 [2024-12-05 12:14:08.724231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.757 [2024-12-05 12:14:08.724263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.757 qpair failed and we were unable to recover it. 00:30:34.757 [2024-12-05 12:14:08.724386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.757 [2024-12-05 12:14:08.724421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.757 qpair failed and we were unable to recover it. 
00:30:34.757 [2024-12-05 12:14:08.725834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.757 [2024-12-05 12:14:08.725886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.757 qpair failed and we were unable to recover it. 00:30:34.757 [2024-12-05 12:14:08.726081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.757 [2024-12-05 12:14:08.726114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.757 qpair failed and we were unable to recover it. 00:30:34.757 [2024-12-05 12:14:08.726390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.757 [2024-12-05 12:14:08.726424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.757 qpair failed and we were unable to recover it. 00:30:34.757 [2024-12-05 12:14:08.726541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.757 [2024-12-05 12:14:08.726573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.757 qpair failed and we were unable to recover it. 00:30:34.757 [2024-12-05 12:14:08.726677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.757 [2024-12-05 12:14:08.726710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.757 qpair failed and we were unable to recover it. 
00:30:34.757 [2024-12-05 12:14:08.726969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.757 [2024-12-05 12:14:08.727001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.757 qpair failed and we were unable to recover it. 00:30:34.757 [2024-12-05 12:14:08.727179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.757 [2024-12-05 12:14:08.727209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.757 qpair failed and we were unable to recover it. 00:30:34.757 [2024-12-05 12:14:08.727336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.757 [2024-12-05 12:14:08.727395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.757 qpair failed and we were unable to recover it. 00:30:34.757 [2024-12-05 12:14:08.727576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.757 [2024-12-05 12:14:08.727608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.757 qpair failed and we were unable to recover it. 00:30:34.757 [2024-12-05 12:14:08.727746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.757 [2024-12-05 12:14:08.727778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.757 qpair failed and we were unable to recover it. 
00:30:34.757 [2024-12-05 12:14:08.727954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.757 [2024-12-05 12:14:08.727986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.757 qpair failed and we were unable to recover it. 00:30:34.757 [2024-12-05 12:14:08.728114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.757 [2024-12-05 12:14:08.728147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:34.757 qpair failed and we were unable to recover it. 00:30:34.757 [2024-12-05 12:14:08.728426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.757 [2024-12-05 12:14:08.728462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.757 qpair failed and we were unable to recover it. 00:30:34.757 [2024-12-05 12:14:08.728650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.757 [2024-12-05 12:14:08.728682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.757 qpair failed and we were unable to recover it. 00:30:34.757 [2024-12-05 12:14:08.728803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.757 [2024-12-05 12:14:08.728835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.757 qpair failed and we were unable to recover it. 
00:30:34.757 [2024-12-05 12:14:08.729119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.757 [2024-12-05 12:14:08.729160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.757 qpair failed and we were unable to recover it. 00:30:34.757 [2024-12-05 12:14:08.729319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.757 [2024-12-05 12:14:08.729350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.757 qpair failed and we were unable to recover it. 00:30:34.757 [2024-12-05 12:14:08.729484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.757 [2024-12-05 12:14:08.729516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.757 qpair failed and we were unable to recover it. 00:30:34.757 [2024-12-05 12:14:08.729644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.757 [2024-12-05 12:14:08.729677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.757 qpair failed and we were unable to recover it. 00:30:34.758 [2024-12-05 12:14:08.729799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.758 [2024-12-05 12:14:08.729829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.758 qpair failed and we were unable to recover it. 
00:30:34.758 [2024-12-05 12:14:08.729957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.758 [2024-12-05 12:14:08.729987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.758 qpair failed and we were unable to recover it. 00:30:34.758 [2024-12-05 12:14:08.730222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.758 [2024-12-05 12:14:08.730253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.758 qpair failed and we were unable to recover it. 00:30:34.758 [2024-12-05 12:14:08.730466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.758 [2024-12-05 12:14:08.730500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.758 qpair failed and we were unable to recover it. 00:30:34.758 [2024-12-05 12:14:08.730627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.758 [2024-12-05 12:14:08.730659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.758 qpair failed and we were unable to recover it. 00:30:34.758 [2024-12-05 12:14:08.730769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.758 [2024-12-05 12:14:08.730798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.758 qpair failed and we were unable to recover it. 
00:30:34.758 [2024-12-05 12:14:08.730925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.758 [2024-12-05 12:14:08.730957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.758 qpair failed and we were unable to recover it. 00:30:34.758 [2024-12-05 12:14:08.731223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.758 [2024-12-05 12:14:08.731255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.758 qpair failed and we were unable to recover it. 00:30:34.758 [2024-12-05 12:14:08.731428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.758 [2024-12-05 12:14:08.731461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.758 qpair failed and we were unable to recover it. 00:30:34.758 [2024-12-05 12:14:08.731636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.758 [2024-12-05 12:14:08.731669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.758 qpair failed and we were unable to recover it. 00:30:34.758 [2024-12-05 12:14:08.731791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.758 [2024-12-05 12:14:08.731821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.758 qpair failed and we were unable to recover it. 
00:30:34.758 [2024-12-05 12:14:08.731938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.758 [2024-12-05 12:14:08.731971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.758 qpair failed and we were unable to recover it. 00:30:34.758 [2024-12-05 12:14:08.732209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.758 [2024-12-05 12:14:08.732243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.758 qpair failed and we were unable to recover it. 00:30:34.758 [2024-12-05 12:14:08.732359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.758 [2024-12-05 12:14:08.732408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.758 qpair failed and we were unable to recover it. 00:30:34.758 [2024-12-05 12:14:08.732622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.758 [2024-12-05 12:14:08.732655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.758 qpair failed and we were unable to recover it. 00:30:34.758 [2024-12-05 12:14:08.732836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.758 [2024-12-05 12:14:08.732866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.758 qpair failed and we were unable to recover it. 
00:30:34.758 [2024-12-05 12:14:08.733127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.758 [2024-12-05 12:14:08.733159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.758 qpair failed and we were unable to recover it. 00:30:34.758 [2024-12-05 12:14:08.733279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.758 [2024-12-05 12:14:08.733310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.758 qpair failed and we were unable to recover it. 00:30:34.758 [2024-12-05 12:14:08.733428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.758 [2024-12-05 12:14:08.733464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.758 qpair failed and we were unable to recover it. 00:30:34.758 [2024-12-05 12:14:08.733666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.758 [2024-12-05 12:14:08.733696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.758 qpair failed and we were unable to recover it. 00:30:34.758 [2024-12-05 12:14:08.733812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.758 [2024-12-05 12:14:08.733849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.758 qpair failed and we were unable to recover it. 
00:30:34.758 [2024-12-05 12:14:08.733955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.758 [2024-12-05 12:14:08.733985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.758 qpair failed and we were unable to recover it. 00:30:34.758 [2024-12-05 12:14:08.734171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.758 [2024-12-05 12:14:08.734203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.758 qpair failed and we were unable to recover it. 00:30:34.758 [2024-12-05 12:14:08.734385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.758 [2024-12-05 12:14:08.734419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.758 qpair failed and we were unable to recover it. 00:30:34.758 [2024-12-05 12:14:08.734658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.758 [2024-12-05 12:14:08.734692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.758 qpair failed and we were unable to recover it. 00:30:34.758 [2024-12-05 12:14:08.734806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.758 [2024-12-05 12:14:08.734837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.758 qpair failed and we were unable to recover it. 
00:30:34.758 [2024-12-05 12:14:08.735074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.758 [2024-12-05 12:14:08.735106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.758 qpair failed and we were unable to recover it. 00:30:34.758 [2024-12-05 12:14:08.735285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.758 [2024-12-05 12:14:08.735316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.758 qpair failed and we were unable to recover it. 00:30:34.758 [2024-12-05 12:14:08.735439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.758 [2024-12-05 12:14:08.735472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.758 qpair failed and we were unable to recover it. 00:30:34.758 [2024-12-05 12:14:08.735653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.758 [2024-12-05 12:14:08.735685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.758 qpair failed and we were unable to recover it. 00:30:34.758 [2024-12-05 12:14:08.735920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.759 [2024-12-05 12:14:08.735951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.759 qpair failed and we were unable to recover it. 
00:30:34.759 [2024-12-05 12:14:08.736075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.759 [2024-12-05 12:14:08.736105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.759 qpair failed and we were unable to recover it. 00:30:34.759 [2024-12-05 12:14:08.736282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.759 [2024-12-05 12:14:08.736314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.759 qpair failed and we were unable to recover it. 00:30:34.759 [2024-12-05 12:14:08.736441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.759 [2024-12-05 12:14:08.736472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.759 qpair failed and we were unable to recover it. 00:30:34.759 [2024-12-05 12:14:08.736668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.759 [2024-12-05 12:14:08.736700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.759 qpair failed and we were unable to recover it. 00:30:34.759 [2024-12-05 12:14:08.736906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.759 [2024-12-05 12:14:08.736937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.759 qpair failed and we were unable to recover it. 
00:30:34.759 [2024-12-05 12:14:08.737113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.759 [2024-12-05 12:14:08.737145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.759 qpair failed and we were unable to recover it. 00:30:34.759 [2024-12-05 12:14:08.737256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.759 [2024-12-05 12:14:08.737287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.759 qpair failed and we were unable to recover it. 00:30:34.759 [2024-12-05 12:14:08.737407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.759 [2024-12-05 12:14:08.737440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.759 qpair failed and we were unable to recover it. 00:30:34.759 [2024-12-05 12:14:08.737562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.759 [2024-12-05 12:14:08.737594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.759 qpair failed and we were unable to recover it. 00:30:34.759 [2024-12-05 12:14:08.737762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.759 [2024-12-05 12:14:08.737793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.759 qpair failed and we were unable to recover it. 
00:30:34.759 [2024-12-05 12:14:08.737900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.759 [2024-12-05 12:14:08.737932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.759 qpair failed and we were unable to recover it. 00:30:34.759 [2024-12-05 12:14:08.738135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.759 [2024-12-05 12:14:08.738166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.759 qpair failed and we were unable to recover it. 00:30:34.759 [2024-12-05 12:14:08.738427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.759 [2024-12-05 12:14:08.738460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.759 qpair failed and we were unable to recover it. 00:30:34.759 [2024-12-05 12:14:08.738652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.759 [2024-12-05 12:14:08.738684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.759 qpair failed and we were unable to recover it. 00:30:34.759 [2024-12-05 12:14:08.738810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.759 [2024-12-05 12:14:08.738841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.759 qpair failed and we were unable to recover it. 
00:30:34.759 [2024-12-05 12:14:08.738956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.759 [2024-12-05 12:14:08.738987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.759 qpair failed and we were unable to recover it.
00:30:34.759 [2024-12-05 12:14:08.739110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.759 [2024-12-05 12:14:08.739141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.759 qpair failed and we were unable to recover it.
00:30:34.759 [2024-12-05 12:14:08.739324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.759 [2024-12-05 12:14:08.739356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.759 qpair failed and we were unable to recover it.
00:30:34.759 [2024-12-05 12:14:08.739471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.759 [2024-12-05 12:14:08.739502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.759 qpair failed and we were unable to recover it.
00:30:34.759 [2024-12-05 12:14:08.739684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.759 [2024-12-05 12:14:08.739716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.759 qpair failed and we were unable to recover it.
00:30:34.759 [2024-12-05 12:14:08.739917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.759 [2024-12-05 12:14:08.739948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.759 qpair failed and we were unable to recover it.
00:30:34.759 [2024-12-05 12:14:08.740140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.759 [2024-12-05 12:14:08.740171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.759 qpair failed and we were unable to recover it.
00:30:34.759 [2024-12-05 12:14:08.740379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.759 [2024-12-05 12:14:08.740413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.759 qpair failed and we were unable to recover it.
00:30:34.759 [2024-12-05 12:14:08.740539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.759 [2024-12-05 12:14:08.740571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.759 qpair failed and we were unable to recover it.
00:30:34.759 [2024-12-05 12:14:08.740769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.759 [2024-12-05 12:14:08.740801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.759 qpair failed and we were unable to recover it.
00:30:34.759 [2024-12-05 12:14:08.740987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.759 [2024-12-05 12:14:08.741020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.759 qpair failed and we were unable to recover it.
00:30:34.759 [2024-12-05 12:14:08.741146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.759 [2024-12-05 12:14:08.741177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.759 qpair failed and we were unable to recover it.
00:30:34.759 [2024-12-05 12:14:08.741299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.759 [2024-12-05 12:14:08.741332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.759 qpair failed and we were unable to recover it.
00:30:34.759 [2024-12-05 12:14:08.741613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.759 [2024-12-05 12:14:08.741646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.759 qpair failed and we were unable to recover it.
00:30:34.759 [2024-12-05 12:14:08.741939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.759 [2024-12-05 12:14:08.741972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.759 qpair failed and we were unable to recover it.
00:30:34.759 [2024-12-05 12:14:08.742081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.759 [2024-12-05 12:14:08.742113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.759 qpair failed and we were unable to recover it.
00:30:34.759 [2024-12-05 12:14:08.742249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.760 [2024-12-05 12:14:08.742281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.760 qpair failed and we were unable to recover it.
00:30:34.760 [2024-12-05 12:14:08.742388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.760 [2024-12-05 12:14:08.742421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.760 qpair failed and we were unable to recover it.
00:30:34.760 [2024-12-05 12:14:08.742607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.760 [2024-12-05 12:14:08.742640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.760 qpair failed and we were unable to recover it.
00:30:34.760 [2024-12-05 12:14:08.742747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.760 [2024-12-05 12:14:08.742779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.760 qpair failed and we were unable to recover it.
00:30:34.760 [2024-12-05 12:14:08.742897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.760 [2024-12-05 12:14:08.742930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.760 qpair failed and we were unable to recover it.
00:30:34.760 [2024-12-05 12:14:08.743051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.760 [2024-12-05 12:14:08.743083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.760 qpair failed and we were unable to recover it.
00:30:34.760 [2024-12-05 12:14:08.743259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.760 [2024-12-05 12:14:08.743292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.760 qpair failed and we were unable to recover it.
00:30:34.760 [2024-12-05 12:14:08.743399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.760 [2024-12-05 12:14:08.743432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.760 qpair failed and we were unable to recover it.
00:30:34.760 [2024-12-05 12:14:08.743631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.760 [2024-12-05 12:14:08.743664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.760 qpair failed and we were unable to recover it.
00:30:34.760 [2024-12-05 12:14:08.743767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.760 [2024-12-05 12:14:08.743799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.760 qpair failed and we were unable to recover it.
00:30:34.760 [2024-12-05 12:14:08.744049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.760 [2024-12-05 12:14:08.744082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.760 qpair failed and we were unable to recover it.
00:30:34.760 [2024-12-05 12:14:08.744274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.760 [2024-12-05 12:14:08.744307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.760 qpair failed and we were unable to recover it.
00:30:34.760 [2024-12-05 12:14:08.744434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.760 [2024-12-05 12:14:08.744467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.760 qpair failed and we were unable to recover it.
00:30:34.760 [2024-12-05 12:14:08.744599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.760 [2024-12-05 12:14:08.744632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.760 qpair failed and we were unable to recover it.
00:30:34.760 [2024-12-05 12:14:08.744814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.760 [2024-12-05 12:14:08.744845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.760 qpair failed and we were unable to recover it.
00:30:34.760 [2024-12-05 12:14:08.744960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.760 [2024-12-05 12:14:08.744992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.760 qpair failed and we were unable to recover it.
00:30:34.760 [2024-12-05 12:14:08.745099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.760 [2024-12-05 12:14:08.745133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.760 qpair failed and we were unable to recover it.
00:30:34.760 [2024-12-05 12:14:08.745329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.760 [2024-12-05 12:14:08.745360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.760 qpair failed and we were unable to recover it.
00:30:34.760 [2024-12-05 12:14:08.745480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.760 [2024-12-05 12:14:08.745513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.760 qpair failed and we were unable to recover it.
00:30:34.760 [2024-12-05 12:14:08.745756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.760 [2024-12-05 12:14:08.745788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.760 qpair failed and we were unable to recover it.
00:30:34.760 [2024-12-05 12:14:08.745919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.760 [2024-12-05 12:14:08.745951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.760 qpair failed and we were unable to recover it.
00:30:34.760 [2024-12-05 12:14:08.746080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.760 [2024-12-05 12:14:08.746113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.760 qpair failed and we were unable to recover it.
00:30:34.760 [2024-12-05 12:14:08.746388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.760 [2024-12-05 12:14:08.746422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.760 qpair failed and we were unable to recover it.
00:30:34.760 [2024-12-05 12:14:08.746619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.760 [2024-12-05 12:14:08.746651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.760 qpair failed and we were unable to recover it.
00:30:34.760 [2024-12-05 12:14:08.746918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.760 [2024-12-05 12:14:08.746950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.760 qpair failed and we were unable to recover it.
00:30:34.760 [2024-12-05 12:14:08.747086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.760 [2024-12-05 12:14:08.747118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.760 qpair failed and we were unable to recover it.
00:30:34.760 [2024-12-05 12:14:08.747253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.761 [2024-12-05 12:14:08.747291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.761 qpair failed and we were unable to recover it.
00:30:34.761 [2024-12-05 12:14:08.747561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.761 [2024-12-05 12:14:08.747594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.761 qpair failed and we were unable to recover it.
00:30:34.761 [2024-12-05 12:14:08.747777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.761 [2024-12-05 12:14:08.747808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.761 qpair failed and we were unable to recover it.
00:30:34.761 [2024-12-05 12:14:08.747996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.761 [2024-12-05 12:14:08.748029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.761 qpair failed and we were unable to recover it.
00:30:34.761 [2024-12-05 12:14:08.748148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.761 [2024-12-05 12:14:08.748181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.761 qpair failed and we were unable to recover it.
00:30:34.761 [2024-12-05 12:14:08.748396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.761 [2024-12-05 12:14:08.748430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.761 qpair failed and we were unable to recover it.
00:30:34.761 [2024-12-05 12:14:08.748623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.761 [2024-12-05 12:14:08.748654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.761 qpair failed and we were unable to recover it.
00:30:34.761 [2024-12-05 12:14:08.748768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.761 [2024-12-05 12:14:08.748801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.761 qpair failed and we were unable to recover it.
00:30:34.761 [2024-12-05 12:14:08.748996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.761 [2024-12-05 12:14:08.749028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.761 qpair failed and we were unable to recover it.
00:30:34.761 [2024-12-05 12:14:08.749149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.761 [2024-12-05 12:14:08.749180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.761 qpair failed and we were unable to recover it.
00:30:34.761 [2024-12-05 12:14:08.749446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.761 [2024-12-05 12:14:08.749479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.761 qpair failed and we were unable to recover it.
00:30:34.761 [2024-12-05 12:14:08.749675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.761 [2024-12-05 12:14:08.749706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.761 qpair failed and we were unable to recover it.
00:30:34.761 [2024-12-05 12:14:08.749880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.761 [2024-12-05 12:14:08.749912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.761 qpair failed and we were unable to recover it.
00:30:34.761 [2024-12-05 12:14:08.750021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.761 [2024-12-05 12:14:08.750053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.761 qpair failed and we were unable to recover it.
00:30:34.761 [2024-12-05 12:14:08.750301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.761 [2024-12-05 12:14:08.750333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.761 qpair failed and we were unable to recover it.
00:30:34.761 [2024-12-05 12:14:08.750522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.761 [2024-12-05 12:14:08.750554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.761 qpair failed and we were unable to recover it.
00:30:34.761 [2024-12-05 12:14:08.750724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.761 [2024-12-05 12:14:08.750756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.761 qpair failed and we were unable to recover it.
00:30:34.761 [2024-12-05 12:14:08.750864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.761 [2024-12-05 12:14:08.750896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.761 qpair failed and we were unable to recover it.
00:30:34.761 [2024-12-05 12:14:08.751028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.761 [2024-12-05 12:14:08.751060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.761 qpair failed and we were unable to recover it.
00:30:34.761 [2024-12-05 12:14:08.751188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.761 [2024-12-05 12:14:08.751219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.761 qpair failed and we were unable to recover it.
00:30:34.761 [2024-12-05 12:14:08.751391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.761 [2024-12-05 12:14:08.751424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.761 qpair failed and we were unable to recover it.
00:30:34.761 [2024-12-05 12:14:08.751540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.761 [2024-12-05 12:14:08.751572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.761 qpair failed and we were unable to recover it.
00:30:34.761 [2024-12-05 12:14:08.751761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.761 [2024-12-05 12:14:08.751792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.761 qpair failed and we were unable to recover it.
00:30:34.761 [2024-12-05 12:14:08.751980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.761 [2024-12-05 12:14:08.752011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.761 qpair failed and we were unable to recover it.
00:30:34.761 [2024-12-05 12:14:08.752121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.761 [2024-12-05 12:14:08.752154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.761 qpair failed and we were unable to recover it.
00:30:34.761 [2024-12-05 12:14:08.752323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.761 [2024-12-05 12:14:08.752355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.761 qpair failed and we were unable to recover it.
00:30:34.761 [2024-12-05 12:14:08.752486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.761 [2024-12-05 12:14:08.752518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.761 qpair failed and we were unable to recover it.
00:30:34.761 [2024-12-05 12:14:08.752713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.761 [2024-12-05 12:14:08.752745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.761 qpair failed and we were unable to recover it.
00:30:34.761 [2024-12-05 12:14:08.752885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.761 [2024-12-05 12:14:08.752917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.761 qpair failed and we were unable to recover it.
00:30:34.761 [2024-12-05 12:14:08.753091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.761 [2024-12-05 12:14:08.753122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.761 qpair failed and we were unable to recover it.
00:30:34.761 [2024-12-05 12:14:08.753239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.761 [2024-12-05 12:14:08.753272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.761 qpair failed and we were unable to recover it.
00:30:34.761 [2024-12-05 12:14:08.753410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.762 [2024-12-05 12:14:08.753443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.762 qpair failed and we were unable to recover it.
00:30:34.762 [2024-12-05 12:14:08.753633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.762 [2024-12-05 12:14:08.753665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.762 qpair failed and we were unable to recover it.
00:30:34.762 [2024-12-05 12:14:08.753866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.762 [2024-12-05 12:14:08.753898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.762 qpair failed and we were unable to recover it.
00:30:34.762 [2024-12-05 12:14:08.754072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.762 [2024-12-05 12:14:08.754104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.762 qpair failed and we were unable to recover it.
00:30:34.762 [2024-12-05 12:14:08.754288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.762 [2024-12-05 12:14:08.754321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.762 qpair failed and we were unable to recover it.
00:30:34.762 [2024-12-05 12:14:08.754504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.762 [2024-12-05 12:14:08.754536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.762 qpair failed and we were unable to recover it.
00:30:34.762 [2024-12-05 12:14:08.754707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.762 [2024-12-05 12:14:08.754739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.762 qpair failed and we were unable to recover it.
00:30:34.762 [2024-12-05 12:14:08.754874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.762 [2024-12-05 12:14:08.754905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.762 qpair failed and we were unable to recover it.
00:30:34.762 [2024-12-05 12:14:08.755094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.762 [2024-12-05 12:14:08.755126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.762 qpair failed and we were unable to recover it.
00:30:34.762 [2024-12-05 12:14:08.755322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.762 [2024-12-05 12:14:08.755354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.762 qpair failed and we were unable to recover it.
00:30:34.762 [2024-12-05 12:14:08.755560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.762 [2024-12-05 12:14:08.755599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.762 qpair failed and we were unable to recover it.
00:30:34.762 [2024-12-05 12:14:08.755734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.762 [2024-12-05 12:14:08.755766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.762 qpair failed and we were unable to recover it.
00:30:34.762 [2024-12-05 12:14:08.755971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.762 [2024-12-05 12:14:08.756004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.762 qpair failed and we were unable to recover it.
00:30:34.762 [2024-12-05 12:14:08.756139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.762 [2024-12-05 12:14:08.756171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.762 qpair failed and we were unable to recover it.
00:30:34.762 [2024-12-05 12:14:08.756282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.762 [2024-12-05 12:14:08.756314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.762 qpair failed and we were unable to recover it.
00:30:34.762 [2024-12-05 12:14:08.756458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.762 [2024-12-05 12:14:08.756491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.762 qpair failed and we were unable to recover it.
00:30:34.762 [2024-12-05 12:14:08.756693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.762 [2024-12-05 12:14:08.756726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.762 qpair failed and we were unable to recover it.
00:30:34.762 [2024-12-05 12:14:08.756989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.762 [2024-12-05 12:14:08.757022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.762 qpair failed and we were unable to recover it.
00:30:34.762 [2024-12-05 12:14:08.757258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.762 [2024-12-05 12:14:08.757291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.762 qpair failed and we were unable to recover it.
00:30:34.762 [2024-12-05 12:14:08.757424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.762 [2024-12-05 12:14:08.757457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.762 qpair failed and we were unable to recover it.
00:30:34.762 [2024-12-05 12:14:08.757699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.762 [2024-12-05 12:14:08.757731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.762 qpair failed and we were unable to recover it.
00:30:34.762 [2024-12-05 12:14:08.757846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.762 [2024-12-05 12:14:08.757878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.762 qpair failed and we were unable to recover it.
00:30:34.762 [2024-12-05 12:14:08.757999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.762 [2024-12-05 12:14:08.758031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.762 qpair failed and we were unable to recover it. 00:30:34.762 [2024-12-05 12:14:08.758317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.762 [2024-12-05 12:14:08.758348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.762 qpair failed and we were unable to recover it. 00:30:34.762 [2024-12-05 12:14:08.758562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.762 [2024-12-05 12:14:08.758596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.762 qpair failed and we were unable to recover it. 00:30:34.762 [2024-12-05 12:14:08.758775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.762 [2024-12-05 12:14:08.758807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.762 qpair failed and we were unable to recover it. 00:30:34.762 [2024-12-05 12:14:08.759002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.762 [2024-12-05 12:14:08.759034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.762 qpair failed and we were unable to recover it. 
00:30:34.762 [2024-12-05 12:14:08.759210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.762 [2024-12-05 12:14:08.759243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.762 qpair failed and we were unable to recover it. 00:30:34.762 [2024-12-05 12:14:08.759365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.762 [2024-12-05 12:14:08.759408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.762 qpair failed and we were unable to recover it. 00:30:34.762 [2024-12-05 12:14:08.759579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.762 [2024-12-05 12:14:08.759612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.762 qpair failed and we were unable to recover it. 00:30:34.762 [2024-12-05 12:14:08.759728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.762 [2024-12-05 12:14:08.759759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.762 qpair failed and we were unable to recover it. 00:30:34.762 [2024-12-05 12:14:08.759870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.762 [2024-12-05 12:14:08.759903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.762 qpair failed and we were unable to recover it. 
00:30:34.763 [2024-12-05 12:14:08.760196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.763 [2024-12-05 12:14:08.760227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.763 qpair failed and we were unable to recover it. 00:30:34.763 [2024-12-05 12:14:08.760352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.763 [2024-12-05 12:14:08.760390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.763 qpair failed and we were unable to recover it. 00:30:34.763 [2024-12-05 12:14:08.760500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.763 [2024-12-05 12:14:08.760532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.763 qpair failed and we were unable to recover it. 00:30:34.763 [2024-12-05 12:14:08.760665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.763 [2024-12-05 12:14:08.760696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.763 qpair failed and we were unable to recover it. 00:30:34.763 [2024-12-05 12:14:08.760882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.763 [2024-12-05 12:14:08.760914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.763 qpair failed and we were unable to recover it. 
00:30:34.763 [2024-12-05 12:14:08.761022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.763 [2024-12-05 12:14:08.761059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.763 qpair failed and we were unable to recover it. 00:30:34.763 [2024-12-05 12:14:08.761296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.763 [2024-12-05 12:14:08.761327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.763 qpair failed and we were unable to recover it. 00:30:34.763 [2024-12-05 12:14:08.761528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.763 [2024-12-05 12:14:08.761561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.763 qpair failed and we were unable to recover it. 00:30:34.763 [2024-12-05 12:14:08.761733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.763 [2024-12-05 12:14:08.761764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.763 qpair failed and we were unable to recover it. 00:30:34.763 [2024-12-05 12:14:08.761874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.763 [2024-12-05 12:14:08.761906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.763 qpair failed and we were unable to recover it. 
00:30:34.763 [2024-12-05 12:14:08.762148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.763 [2024-12-05 12:14:08.762180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.763 qpair failed and we were unable to recover it. 00:30:34.763 [2024-12-05 12:14:08.762316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.763 [2024-12-05 12:14:08.762347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.763 qpair failed and we were unable to recover it. 00:30:34.763 [2024-12-05 12:14:08.762618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.763 [2024-12-05 12:14:08.762651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.763 qpair failed and we were unable to recover it. 00:30:34.763 [2024-12-05 12:14:08.762844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.763 [2024-12-05 12:14:08.762876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.763 qpair failed and we were unable to recover it. 00:30:34.763 [2024-12-05 12:14:08.763117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.763 [2024-12-05 12:14:08.763148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.763 qpair failed and we were unable to recover it. 
00:30:34.763 [2024-12-05 12:14:08.763324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.763 [2024-12-05 12:14:08.763356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.763 qpair failed and we were unable to recover it. 00:30:34.763 [2024-12-05 12:14:08.763495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.763 [2024-12-05 12:14:08.763527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.763 qpair failed and we were unable to recover it. 00:30:34.763 [2024-12-05 12:14:08.763652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.763 [2024-12-05 12:14:08.763684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.763 qpair failed and we were unable to recover it. 00:30:34.763 [2024-12-05 12:14:08.763858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.763 [2024-12-05 12:14:08.763890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.763 qpair failed and we were unable to recover it. 00:30:34.763 [2024-12-05 12:14:08.764005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.763 [2024-12-05 12:14:08.764037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.763 qpair failed and we were unable to recover it. 
00:30:34.763 [2024-12-05 12:14:08.764145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.763 [2024-12-05 12:14:08.764177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.763 qpair failed and we were unable to recover it. 00:30:34.763 [2024-12-05 12:14:08.764289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.763 [2024-12-05 12:14:08.764321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.763 qpair failed and we were unable to recover it. 00:30:34.763 [2024-12-05 12:14:08.764443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.763 [2024-12-05 12:14:08.764477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.763 qpair failed and we were unable to recover it. 00:30:34.763 [2024-12-05 12:14:08.764590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.763 [2024-12-05 12:14:08.764622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.763 qpair failed and we were unable to recover it. 00:30:34.763 [2024-12-05 12:14:08.764796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.763 [2024-12-05 12:14:08.764829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.763 qpair failed and we were unable to recover it. 
00:30:34.763 [2024-12-05 12:14:08.764961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.763 [2024-12-05 12:14:08.764994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.763 qpair failed and we were unable to recover it. 00:30:34.763 [2024-12-05 12:14:08.765233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.763 [2024-12-05 12:14:08.765264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.763 qpair failed and we were unable to recover it. 00:30:34.763 [2024-12-05 12:14:08.765434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.763 [2024-12-05 12:14:08.765467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.763 qpair failed and we were unable to recover it. 00:30:34.763 [2024-12-05 12:14:08.765644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.763 [2024-12-05 12:14:08.765676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.763 qpair failed and we were unable to recover it. 00:30:34.763 [2024-12-05 12:14:08.765782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.763 [2024-12-05 12:14:08.765814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.763 qpair failed and we were unable to recover it. 
00:30:34.763 [2024-12-05 12:14:08.765935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.763 [2024-12-05 12:14:08.765967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.763 qpair failed and we were unable to recover it. 00:30:34.763 [2024-12-05 12:14:08.766104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.763 [2024-12-05 12:14:08.766135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.763 qpair failed and we were unable to recover it. 00:30:34.763 [2024-12-05 12:14:08.766247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.763 [2024-12-05 12:14:08.766279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.763 qpair failed and we were unable to recover it. 00:30:34.763 [2024-12-05 12:14:08.766463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.764 [2024-12-05 12:14:08.766497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.764 qpair failed and we were unable to recover it. 00:30:34.764 [2024-12-05 12:14:08.766608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.764 [2024-12-05 12:14:08.766640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.764 qpair failed and we were unable to recover it. 
00:30:34.764 [2024-12-05 12:14:08.766752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.764 [2024-12-05 12:14:08.766785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.764 qpair failed and we were unable to recover it. 00:30:34.764 [2024-12-05 12:14:08.766905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.764 [2024-12-05 12:14:08.766936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.764 qpair failed and we were unable to recover it. 00:30:34.764 [2024-12-05 12:14:08.767044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.764 [2024-12-05 12:14:08.767076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.764 qpair failed and we were unable to recover it. 00:30:34.764 [2024-12-05 12:14:08.767342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.764 [2024-12-05 12:14:08.767380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.764 qpair failed and we were unable to recover it. 00:30:34.764 [2024-12-05 12:14:08.767574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.764 [2024-12-05 12:14:08.767609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.764 qpair failed and we were unable to recover it. 
00:30:34.764 [2024-12-05 12:14:08.767747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.764 [2024-12-05 12:14:08.767779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.764 qpair failed and we were unable to recover it. 00:30:34.764 [2024-12-05 12:14:08.767968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.764 [2024-12-05 12:14:08.768000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.764 qpair failed and we were unable to recover it. 00:30:34.764 [2024-12-05 12:14:08.768173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.764 [2024-12-05 12:14:08.768206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.764 qpair failed and we were unable to recover it. 00:30:34.764 [2024-12-05 12:14:08.768314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.764 [2024-12-05 12:14:08.768347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.764 qpair failed and we were unable to recover it. 00:30:34.764 [2024-12-05 12:14:08.768533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.764 [2024-12-05 12:14:08.768566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.764 qpair failed and we were unable to recover it. 
00:30:34.764 [2024-12-05 12:14:08.768690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.764 [2024-12-05 12:14:08.768721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.764 qpair failed and we were unable to recover it. 00:30:34.764 [2024-12-05 12:14:08.768844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.764 [2024-12-05 12:14:08.768883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.764 qpair failed and we were unable to recover it. 00:30:34.764 [2024-12-05 12:14:08.769058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.764 [2024-12-05 12:14:08.769092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.764 qpair failed and we were unable to recover it. 00:30:34.764 [2024-12-05 12:14:08.769203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.764 [2024-12-05 12:14:08.769235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.764 qpair failed and we were unable to recover it. 00:30:34.764 [2024-12-05 12:14:08.769353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.764 [2024-12-05 12:14:08.769394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.764 qpair failed and we were unable to recover it. 
00:30:34.764 [2024-12-05 12:14:08.769634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.764 [2024-12-05 12:14:08.769666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.764 qpair failed and we were unable to recover it. 00:30:34.764 [2024-12-05 12:14:08.769796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.764 [2024-12-05 12:14:08.769829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.764 qpair failed and we were unable to recover it. 00:30:34.764 [2024-12-05 12:14:08.769960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.764 [2024-12-05 12:14:08.769993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.764 qpair failed and we were unable to recover it. 00:30:34.764 [2024-12-05 12:14:08.770165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.764 [2024-12-05 12:14:08.770198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.764 qpair failed and we were unable to recover it. 00:30:34.764 [2024-12-05 12:14:08.770320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.764 [2024-12-05 12:14:08.770352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.764 qpair failed and we were unable to recover it. 
00:30:34.764 [2024-12-05 12:14:08.770548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.764 [2024-12-05 12:14:08.770582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.764 qpair failed and we were unable to recover it. 00:30:34.764 [2024-12-05 12:14:08.770701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.764 [2024-12-05 12:14:08.770733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.764 qpair failed and we were unable to recover it. 00:30:34.764 [2024-12-05 12:14:08.770849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.764 [2024-12-05 12:14:08.770883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.764 qpair failed and we were unable to recover it. 00:30:34.764 [2024-12-05 12:14:08.771052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.764 [2024-12-05 12:14:08.771084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.764 qpair failed and we were unable to recover it. 00:30:34.764 [2024-12-05 12:14:08.771211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.764 [2024-12-05 12:14:08.771243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.764 qpair failed and we were unable to recover it. 
00:30:34.764 [2024-12-05 12:14:08.771361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.764 [2024-12-05 12:14:08.771403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.764 qpair failed and we were unable to recover it. 00:30:34.764 [2024-12-05 12:14:08.771509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.764 [2024-12-05 12:14:08.771543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.764 qpair failed and we were unable to recover it. 00:30:34.764 [2024-12-05 12:14:08.771715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.764 [2024-12-05 12:14:08.771747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.764 qpair failed and we were unable to recover it. 00:30:34.764 [2024-12-05 12:14:08.771853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.764 [2024-12-05 12:14:08.771884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.764 qpair failed and we were unable to recover it. 00:30:34.764 [2024-12-05 12:14:08.771989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.764 [2024-12-05 12:14:08.772022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.764 qpair failed and we were unable to recover it. 
00:30:34.764 [2024-12-05 12:14:08.772261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.765 [2024-12-05 12:14:08.772294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.765 qpair failed and we were unable to recover it. 00:30:34.765 [2024-12-05 12:14:08.772482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.765 [2024-12-05 12:14:08.772516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.765 qpair failed and we were unable to recover it. 00:30:34.765 [2024-12-05 12:14:08.772701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.765 [2024-12-05 12:14:08.772733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.765 qpair failed and we were unable to recover it. 00:30:34.765 [2024-12-05 12:14:08.772833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.765 [2024-12-05 12:14:08.772865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.765 qpair failed and we were unable to recover it. 00:30:34.765 [2024-12-05 12:14:08.772979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.765 [2024-12-05 12:14:08.773011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.765 qpair failed and we were unable to recover it. 
00:30:34.765 [2024-12-05 12:14:08.773117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.765 [2024-12-05 12:14:08.773148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.765 qpair failed and we were unable to recover it. 00:30:34.765 [2024-12-05 12:14:08.773277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.765 [2024-12-05 12:14:08.773308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.765 qpair failed and we were unable to recover it. 00:30:34.765 [2024-12-05 12:14:08.773533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.765 [2024-12-05 12:14:08.773566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.765 qpair failed and we were unable to recover it. 00:30:34.765 [2024-12-05 12:14:08.773688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.765 [2024-12-05 12:14:08.773726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.765 qpair failed and we were unable to recover it. 00:30:34.765 [2024-12-05 12:14:08.773906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.765 [2024-12-05 12:14:08.773938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.765 qpair failed and we were unable to recover it. 
00:30:34.765 [2024-12-05 12:14:08.774119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.765 [2024-12-05 12:14:08.774150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.765 qpair failed and we were unable to recover it.
[same three-line connect()/qpair failure repeated for tqpair=0xc4cbe0, timestamps through 12:14:08.781340]
00:30:34.766 [2024-12-05 12:14:08.781509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.766 [2024-12-05 12:14:08.781581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.766 qpair failed and we were unable to recover it.
[same failure repeated for tqpair=0x7fc42c000b90, timestamps through 12:14:08.793964]
00:30:34.768 [2024-12-05 12:14:08.794078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.768 [2024-12-05 12:14:08.794112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.768 qpair failed and we were unable to recover it.
[same failure repeated for tqpair=0xc4cbe0, timestamps through 12:14:08.795514]
00:30:34.769 [2024-12-05 12:14:08.795624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.769 [2024-12-05 12:14:08.795655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.769 qpair failed and we were unable to recover it. 00:30:34.769 [2024-12-05 12:14:08.795779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.769 [2024-12-05 12:14:08.795808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.769 qpair failed and we were unable to recover it. 00:30:34.769 [2024-12-05 12:14:08.796045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.769 [2024-12-05 12:14:08.796077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.769 qpair failed and we were unable to recover it. 00:30:34.769 [2024-12-05 12:14:08.796196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.769 [2024-12-05 12:14:08.796226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.769 qpair failed and we were unable to recover it. 00:30:34.769 [2024-12-05 12:14:08.796320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.769 [2024-12-05 12:14:08.796351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.769 qpair failed and we were unable to recover it. 
00:30:34.769 [2024-12-05 12:14:08.796600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.769 [2024-12-05 12:14:08.796631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.769 qpair failed and we were unable to recover it. 00:30:34.769 [2024-12-05 12:14:08.796734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.769 [2024-12-05 12:14:08.796763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.769 qpair failed and we were unable to recover it. 00:30:34.769 [2024-12-05 12:14:08.796939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.769 [2024-12-05 12:14:08.796970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.769 qpair failed and we were unable to recover it. 00:30:34.769 [2024-12-05 12:14:08.797207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.769 [2024-12-05 12:14:08.797244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.769 qpair failed and we were unable to recover it. 00:30:34.769 [2024-12-05 12:14:08.797420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.769 [2024-12-05 12:14:08.797452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.769 qpair failed and we were unable to recover it. 
00:30:34.769 [2024-12-05 12:14:08.797640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.769 [2024-12-05 12:14:08.797671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.769 qpair failed and we were unable to recover it. 00:30:34.769 [2024-12-05 12:14:08.797794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.769 [2024-12-05 12:14:08.797826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.769 qpair failed and we were unable to recover it. 00:30:34.769 [2024-12-05 12:14:08.797994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.769 [2024-12-05 12:14:08.798027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.769 qpair failed and we were unable to recover it. 00:30:34.769 [2024-12-05 12:14:08.798242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.769 [2024-12-05 12:14:08.798273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.769 qpair failed and we were unable to recover it. 00:30:34.769 [2024-12-05 12:14:08.798399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.769 [2024-12-05 12:14:08.798433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.769 qpair failed and we were unable to recover it. 
00:30:34.769 [2024-12-05 12:14:08.798538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.769 [2024-12-05 12:14:08.798570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.769 qpair failed and we were unable to recover it. 00:30:34.769 [2024-12-05 12:14:08.798762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.769 [2024-12-05 12:14:08.798795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.769 qpair failed and we were unable to recover it. 00:30:34.769 [2024-12-05 12:14:08.798901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.769 [2024-12-05 12:14:08.798933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.769 qpair failed and we were unable to recover it. 00:30:34.769 [2024-12-05 12:14:08.799115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.769 [2024-12-05 12:14:08.799156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.769 qpair failed and we were unable to recover it. 00:30:34.769 [2024-12-05 12:14:08.799288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.769 [2024-12-05 12:14:08.799318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.769 qpair failed and we were unable to recover it. 
00:30:34.769 [2024-12-05 12:14:08.799436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.769 [2024-12-05 12:14:08.799467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.769 qpair failed and we were unable to recover it. 00:30:34.769 [2024-12-05 12:14:08.799645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.769 [2024-12-05 12:14:08.799677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.769 qpair failed and we were unable to recover it. 00:30:34.769 [2024-12-05 12:14:08.799865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.769 [2024-12-05 12:14:08.799897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.769 qpair failed and we were unable to recover it. 00:30:34.769 [2024-12-05 12:14:08.800013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.770 [2024-12-05 12:14:08.800044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.770 qpair failed and we were unable to recover it. 00:30:34.770 [2024-12-05 12:14:08.800160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.770 [2024-12-05 12:14:08.800192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.770 qpair failed and we were unable to recover it. 
00:30:34.770 [2024-12-05 12:14:08.800305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.770 [2024-12-05 12:14:08.800338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.770 qpair failed and we were unable to recover it. 00:30:34.770 [2024-12-05 12:14:08.800576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.770 [2024-12-05 12:14:08.800609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.770 qpair failed and we were unable to recover it. 00:30:34.770 [2024-12-05 12:14:08.800789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.770 [2024-12-05 12:14:08.800822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.770 qpair failed and we were unable to recover it. 00:30:34.770 [2024-12-05 12:14:08.801087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.770 [2024-12-05 12:14:08.801120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.770 qpair failed and we were unable to recover it. 00:30:34.770 [2024-12-05 12:14:08.801257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.770 [2024-12-05 12:14:08.801289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.770 qpair failed and we were unable to recover it. 
00:30:34.770 [2024-12-05 12:14:08.801394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.770 [2024-12-05 12:14:08.801426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.770 qpair failed and we were unable to recover it. 00:30:34.770 [2024-12-05 12:14:08.801603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.770 [2024-12-05 12:14:08.801634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.770 qpair failed and we were unable to recover it. 00:30:34.770 [2024-12-05 12:14:08.801771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.770 [2024-12-05 12:14:08.801803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.770 qpair failed and we were unable to recover it. 00:30:34.770 [2024-12-05 12:14:08.801980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.770 [2024-12-05 12:14:08.802011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.770 qpair failed and we were unable to recover it. 00:30:34.770 [2024-12-05 12:14:08.802214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.770 [2024-12-05 12:14:08.802245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.770 qpair failed and we were unable to recover it. 
00:30:34.770 [2024-12-05 12:14:08.802433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.770 [2024-12-05 12:14:08.802467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.770 qpair failed and we were unable to recover it. 00:30:34.770 [2024-12-05 12:14:08.802641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.770 [2024-12-05 12:14:08.802672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.770 qpair failed and we were unable to recover it. 00:30:34.770 [2024-12-05 12:14:08.802784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.770 [2024-12-05 12:14:08.802817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.770 qpair failed and we were unable to recover it. 00:30:34.770 [2024-12-05 12:14:08.802938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.770 [2024-12-05 12:14:08.802969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.770 qpair failed and we were unable to recover it. 00:30:34.770 [2024-12-05 12:14:08.803142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.770 [2024-12-05 12:14:08.803173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.770 qpair failed and we were unable to recover it. 
00:30:34.770 [2024-12-05 12:14:08.803360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.770 [2024-12-05 12:14:08.803404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.770 qpair failed and we were unable to recover it. 00:30:34.770 [2024-12-05 12:14:08.803584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.770 [2024-12-05 12:14:08.803615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.770 qpair failed and we were unable to recover it. 00:30:34.770 [2024-12-05 12:14:08.803734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.770 [2024-12-05 12:14:08.803766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.770 qpair failed and we were unable to recover it. 00:30:34.770 [2024-12-05 12:14:08.803942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.770 [2024-12-05 12:14:08.803974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.770 qpair failed and we were unable to recover it. 00:30:34.770 [2024-12-05 12:14:08.804081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.770 [2024-12-05 12:14:08.804110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.770 qpair failed and we were unable to recover it. 
00:30:34.770 [2024-12-05 12:14:08.804213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.770 [2024-12-05 12:14:08.804253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.770 qpair failed and we were unable to recover it. 00:30:34.770 [2024-12-05 12:14:08.804427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.770 [2024-12-05 12:14:08.804459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.770 qpair failed and we were unable to recover it. 00:30:34.770 [2024-12-05 12:14:08.804572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.770 [2024-12-05 12:14:08.804603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.770 qpair failed and we were unable to recover it. 00:30:34.770 [2024-12-05 12:14:08.804718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.770 [2024-12-05 12:14:08.804747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.770 qpair failed and we were unable to recover it. 00:30:34.770 [2024-12-05 12:14:08.804923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.770 [2024-12-05 12:14:08.804955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.770 qpair failed and we were unable to recover it. 
00:30:34.770 [2024-12-05 12:14:08.805063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.770 [2024-12-05 12:14:08.805095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.770 qpair failed and we were unable to recover it. 00:30:34.770 [2024-12-05 12:14:08.805278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.770 [2024-12-05 12:14:08.805308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.770 qpair failed and we were unable to recover it. 00:30:34.770 [2024-12-05 12:14:08.805556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.770 [2024-12-05 12:14:08.805591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.770 qpair failed and we were unable to recover it. 00:30:34.770 [2024-12-05 12:14:08.805833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.770 [2024-12-05 12:14:08.805866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.770 qpair failed and we were unable to recover it. 00:30:34.770 [2024-12-05 12:14:08.805981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.771 [2024-12-05 12:14:08.806013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.771 qpair failed and we were unable to recover it. 
00:30:34.771 [2024-12-05 12:14:08.806116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.771 [2024-12-05 12:14:08.806148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.771 qpair failed and we were unable to recover it. 00:30:34.771 [2024-12-05 12:14:08.806283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.771 [2024-12-05 12:14:08.806315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.771 qpair failed and we were unable to recover it. 00:30:34.771 [2024-12-05 12:14:08.806466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.771 [2024-12-05 12:14:08.806500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.771 qpair failed and we were unable to recover it. 00:30:34.771 [2024-12-05 12:14:08.806621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.771 [2024-12-05 12:14:08.806653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.771 qpair failed and we were unable to recover it. 00:30:34.771 [2024-12-05 12:14:08.806772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.771 [2024-12-05 12:14:08.806817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.771 qpair failed and we were unable to recover it. 
00:30:34.771 [2024-12-05 12:14:08.807018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.771 [2024-12-05 12:14:08.807051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.771 qpair failed and we were unable to recover it. 00:30:34.771 [2024-12-05 12:14:08.807242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.771 [2024-12-05 12:14:08.807274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.771 qpair failed and we were unable to recover it. 00:30:34.771 [2024-12-05 12:14:08.807477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.771 [2024-12-05 12:14:08.807513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.771 qpair failed and we were unable to recover it. 00:30:34.771 [2024-12-05 12:14:08.807691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.771 [2024-12-05 12:14:08.807724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.771 qpair failed and we were unable to recover it. 00:30:34.771 [2024-12-05 12:14:08.807839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.771 [2024-12-05 12:14:08.807870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.771 qpair failed and we were unable to recover it. 
00:30:34.771 [2024-12-05 12:14:08.808048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.771 [2024-12-05 12:14:08.808080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.771 qpair failed and we were unable to recover it. 00:30:34.771 [2024-12-05 12:14:08.808202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.771 [2024-12-05 12:14:08.808235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.771 qpair failed and we were unable to recover it. 00:30:34.771 [2024-12-05 12:14:08.808363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.771 [2024-12-05 12:14:08.808407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.771 qpair failed and we were unable to recover it. 00:30:34.771 [2024-12-05 12:14:08.808515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.771 [2024-12-05 12:14:08.808548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.771 qpair failed and we were unable to recover it. 00:30:34.771 [2024-12-05 12:14:08.808750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.771 [2024-12-05 12:14:08.808785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.771 qpair failed and we were unable to recover it. 
00:30:34.771 [2024-12-05 12:14:08.808907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.771 [2024-12-05 12:14:08.808939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.771 qpair failed and we were unable to recover it. 00:30:34.771 [2024-12-05 12:14:08.809056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.771 [2024-12-05 12:14:08.809087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.771 qpair failed and we were unable to recover it. 00:30:34.771 [2024-12-05 12:14:08.809197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.771 [2024-12-05 12:14:08.809237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.771 qpair failed and we were unable to recover it. 00:30:34.771 [2024-12-05 12:14:08.809531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.771 [2024-12-05 12:14:08.809564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.771 qpair failed and we were unable to recover it. 00:30:34.771 [2024-12-05 12:14:08.809751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.771 [2024-12-05 12:14:08.809783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.771 qpair failed and we were unable to recover it. 
00:30:34.771 [2024-12-05 12:14:08.809896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.771 [2024-12-05 12:14:08.809927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.771 qpair failed and we were unable to recover it. 00:30:34.771 [2024-12-05 12:14:08.810189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.771 [2024-12-05 12:14:08.810221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.771 qpair failed and we were unable to recover it. 00:30:34.771 [2024-12-05 12:14:08.810348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.771 [2024-12-05 12:14:08.810394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.771 qpair failed and we were unable to recover it. 00:30:34.771 [2024-12-05 12:14:08.810524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.771 [2024-12-05 12:14:08.810555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.771 qpair failed and we were unable to recover it. 00:30:34.771 [2024-12-05 12:14:08.810768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.771 [2024-12-05 12:14:08.810802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.771 qpair failed and we were unable to recover it. 
00:30:34.772 [2024-12-05 12:14:08.813812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.772 [2024-12-05 12:14:08.813842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.772 qpair failed and we were unable to recover it.
00:30:34.772 [2024-12-05 12:14:08.813949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.772 [2024-12-05 12:14:08.813978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.772 qpair failed and we were unable to recover it.
00:30:34.772 [2024-12-05 12:14:08.814248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.772 [2024-12-05 12:14:08.814279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.772 qpair failed and we were unable to recover it.
00:30:34.772 [2024-12-05 12:14:08.814453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.772 [2024-12-05 12:14:08.814484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.772 qpair failed and we were unable to recover it.
00:30:34.772 [2024-12-05 12:14:08.814615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.772 [2024-12-05 12:14:08.814649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.772 qpair failed and we were unable to recover it.
00:30:34.774 [2024-12-05 12:14:08.827528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.774 [2024-12-05 12:14:08.827562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.774 qpair failed and we were unable to recover it.
00:30:34.774 [2024-12-05 12:14:08.827736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.774 [2024-12-05 12:14:08.827769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.774 qpair failed and we were unable to recover it.
00:30:34.774 [2024-12-05 12:14:08.827960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.774 [2024-12-05 12:14:08.828001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.774 qpair failed and we were unable to recover it.
00:30:34.774 [2024-12-05 12:14:08.828132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.774 [2024-12-05 12:14:08.828166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.774 qpair failed and we were unable to recover it.
00:30:34.774 [2024-12-05 12:14:08.828352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.774 [2024-12-05 12:14:08.828400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.774 qpair failed and we were unable to recover it.
00:30:34.775 [2024-12-05 12:14:08.832539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.775 [2024-12-05 12:14:08.832573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.775 qpair failed and we were unable to recover it. 00:30:34.775 [2024-12-05 12:14:08.832711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.775 [2024-12-05 12:14:08.832745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.775 qpair failed and we were unable to recover it. 00:30:34.775 [2024-12-05 12:14:08.832927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.775 [2024-12-05 12:14:08.832960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.775 qpair failed and we were unable to recover it. 00:30:34.775 [2024-12-05 12:14:08.833071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.775 [2024-12-05 12:14:08.833105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.775 qpair failed and we were unable to recover it. 00:30:34.775 [2024-12-05 12:14:08.833238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.775 [2024-12-05 12:14:08.833270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.775 qpair failed and we were unable to recover it. 
00:30:34.775 [2024-12-05 12:14:08.833397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.775 [2024-12-05 12:14:08.833428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.775 qpair failed and we were unable to recover it. 00:30:34.775 [2024-12-05 12:14:08.833565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.775 [2024-12-05 12:14:08.833601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.775 qpair failed and we were unable to recover it. 00:30:34.775 [2024-12-05 12:14:08.833819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.775 [2024-12-05 12:14:08.833855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.775 qpair failed and we were unable to recover it. 00:30:34.775 [2024-12-05 12:14:08.833975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.775 [2024-12-05 12:14:08.834005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.775 qpair failed and we were unable to recover it. 00:30:34.775 [2024-12-05 12:14:08.834115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.775 [2024-12-05 12:14:08.834148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.775 qpair failed and we were unable to recover it. 
00:30:34.775 [2024-12-05 12:14:08.834274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.775 [2024-12-05 12:14:08.834305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.775 qpair failed and we were unable to recover it. 00:30:34.775 [2024-12-05 12:14:08.834476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.775 [2024-12-05 12:14:08.834507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.775 qpair failed and we were unable to recover it. 00:30:34.775 [2024-12-05 12:14:08.834637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.775 [2024-12-05 12:14:08.834668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.775 qpair failed and we were unable to recover it. 00:30:34.775 [2024-12-05 12:14:08.834792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.775 [2024-12-05 12:14:08.834823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.775 qpair failed and we were unable to recover it. 00:30:34.775 [2024-12-05 12:14:08.834937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.775 [2024-12-05 12:14:08.834967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.775 qpair failed and we were unable to recover it. 
00:30:34.775 [2024-12-05 12:14:08.835165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.775 [2024-12-05 12:14:08.835195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.775 qpair failed and we were unable to recover it. 00:30:34.775 [2024-12-05 12:14:08.835307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.775 [2024-12-05 12:14:08.835338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.775 qpair failed and we were unable to recover it. 00:30:34.776 [2024-12-05 12:14:08.835477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.776 [2024-12-05 12:14:08.835516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.776 qpair failed and we were unable to recover it. 00:30:34.776 [2024-12-05 12:14:08.835639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.776 [2024-12-05 12:14:08.835668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.776 qpair failed and we were unable to recover it. 00:30:34.776 [2024-12-05 12:14:08.835869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.776 [2024-12-05 12:14:08.835900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.776 qpair failed and we were unable to recover it. 
00:30:34.776 [2024-12-05 12:14:08.836010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.776 [2024-12-05 12:14:08.836040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.776 qpair failed and we were unable to recover it. 00:30:34.776 [2024-12-05 12:14:08.836221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.776 [2024-12-05 12:14:08.836251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.776 qpair failed and we were unable to recover it. 00:30:34.776 [2024-12-05 12:14:08.836420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.776 [2024-12-05 12:14:08.836454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.776 qpair failed and we were unable to recover it. 00:30:34.776 [2024-12-05 12:14:08.836627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.776 [2024-12-05 12:14:08.836661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.776 qpair failed and we were unable to recover it. 00:30:34.776 [2024-12-05 12:14:08.838130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.776 [2024-12-05 12:14:08.838185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.776 qpair failed and we were unable to recover it. 
00:30:34.776 [2024-12-05 12:14:08.838395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.776 [2024-12-05 12:14:08.838431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.776 qpair failed and we were unable to recover it. 00:30:34.776 [2024-12-05 12:14:08.838688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.776 [2024-12-05 12:14:08.838722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.776 qpair failed and we were unable to recover it. 00:30:34.776 [2024-12-05 12:14:08.838856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.776 [2024-12-05 12:14:08.838888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.776 qpair failed and we were unable to recover it. 00:30:34.776 [2024-12-05 12:14:08.839018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.776 [2024-12-05 12:14:08.839050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.776 qpair failed and we were unable to recover it. 00:30:34.776 [2024-12-05 12:14:08.839222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.776 [2024-12-05 12:14:08.839256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.776 qpair failed and we were unable to recover it. 
00:30:34.776 [2024-12-05 12:14:08.839380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.776 [2024-12-05 12:14:08.839414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.776 qpair failed and we were unable to recover it. 00:30:34.776 [2024-12-05 12:14:08.839594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.776 [2024-12-05 12:14:08.839625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.776 qpair failed and we were unable to recover it. 00:30:34.776 [2024-12-05 12:14:08.839737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.776 [2024-12-05 12:14:08.839771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.776 qpair failed and we were unable to recover it. 00:30:34.776 [2024-12-05 12:14:08.839950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.776 [2024-12-05 12:14:08.839982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.776 qpair failed and we were unable to recover it. 00:30:34.776 [2024-12-05 12:14:08.840087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.776 [2024-12-05 12:14:08.840117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.776 qpair failed and we were unable to recover it. 
00:30:34.776 [2024-12-05 12:14:08.840352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.776 [2024-12-05 12:14:08.840394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.776 qpair failed and we were unable to recover it. 00:30:34.776 [2024-12-05 12:14:08.840515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.776 [2024-12-05 12:14:08.840546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.776 qpair failed and we were unable to recover it. 00:30:34.776 [2024-12-05 12:14:08.840733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.776 [2024-12-05 12:14:08.840763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.776 qpair failed and we were unable to recover it. 00:30:34.776 [2024-12-05 12:14:08.840967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.776 [2024-12-05 12:14:08.841001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.776 qpair failed and we were unable to recover it. 00:30:34.776 [2024-12-05 12:14:08.841107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.776 [2024-12-05 12:14:08.841137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.776 qpair failed and we were unable to recover it. 
00:30:34.776 [2024-12-05 12:14:08.841254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.776 [2024-12-05 12:14:08.841285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.776 qpair failed and we were unable to recover it. 00:30:34.776 [2024-12-05 12:14:08.841421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.776 [2024-12-05 12:14:08.841452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.776 qpair failed and we were unable to recover it. 00:30:34.776 [2024-12-05 12:14:08.841643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.776 [2024-12-05 12:14:08.841674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.776 qpair failed and we were unable to recover it. 00:30:34.776 [2024-12-05 12:14:08.841810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.776 [2024-12-05 12:14:08.841840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.776 qpair failed and we were unable to recover it. 00:30:34.776 [2024-12-05 12:14:08.842018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.776 [2024-12-05 12:14:08.842054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.776 qpair failed and we were unable to recover it. 
00:30:34.776 [2024-12-05 12:14:08.842224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.776 [2024-12-05 12:14:08.842254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.776 qpair failed and we were unable to recover it. 00:30:34.776 [2024-12-05 12:14:08.842475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.776 [2024-12-05 12:14:08.842508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.776 qpair failed and we were unable to recover it. 00:30:34.776 [2024-12-05 12:14:08.842694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.776 [2024-12-05 12:14:08.842726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.776 qpair failed and we were unable to recover it. 00:30:34.776 [2024-12-05 12:14:08.842854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.776 [2024-12-05 12:14:08.842886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.777 qpair failed and we were unable to recover it. 00:30:34.777 [2024-12-05 12:14:08.843010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.777 [2024-12-05 12:14:08.843043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.777 qpair failed and we were unable to recover it. 
00:30:34.777 [2024-12-05 12:14:08.843220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.777 [2024-12-05 12:14:08.843251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.777 qpair failed and we were unable to recover it. 00:30:34.777 [2024-12-05 12:14:08.843449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.777 [2024-12-05 12:14:08.843481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.777 qpair failed and we were unable to recover it. 00:30:34.777 [2024-12-05 12:14:08.843603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.777 [2024-12-05 12:14:08.843634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.777 qpair failed and we were unable to recover it. 00:30:34.777 [2024-12-05 12:14:08.843816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.777 [2024-12-05 12:14:08.843845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.777 qpair failed and we were unable to recover it. 00:30:34.777 [2024-12-05 12:14:08.844085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.777 [2024-12-05 12:14:08.844117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.777 qpair failed and we were unable to recover it. 
00:30:34.777 [2024-12-05 12:14:08.844304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.777 [2024-12-05 12:14:08.844336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.777 qpair failed and we were unable to recover it. 00:30:34.777 [2024-12-05 12:14:08.844509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5ab20 is same with the state(6) to be set 00:30:34.777 [2024-12-05 12:14:08.844743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.777 [2024-12-05 12:14:08.844780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.777 qpair failed and we were unable to recover it. 00:30:34.777 [2024-12-05 12:14:08.844898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.777 [2024-12-05 12:14:08.844936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.777 qpair failed and we were unable to recover it. 00:30:34.777 [2024-12-05 12:14:08.845060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.777 [2024-12-05 12:14:08.845091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.777 qpair failed and we were unable to recover it. 00:30:34.777 [2024-12-05 12:14:08.845204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.777 [2024-12-05 12:14:08.845235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.777 qpair failed and we were unable to recover it. 
00:30:34.777 [2024-12-05 12:14:08.845523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.777 [2024-12-05 12:14:08.845562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.777 qpair failed and we were unable to recover it. 00:30:34.777 [2024-12-05 12:14:08.845676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.777 [2024-12-05 12:14:08.845712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.777 qpair failed and we were unable to recover it. 00:30:34.777 [2024-12-05 12:14:08.845842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.777 [2024-12-05 12:14:08.845876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.777 qpair failed and we were unable to recover it. 00:30:34.777 [2024-12-05 12:14:08.846030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.777 [2024-12-05 12:14:08.846065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.777 qpair failed and we were unable to recover it. 00:30:34.777 [2024-12-05 12:14:08.846249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.777 [2024-12-05 12:14:08.846283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.777 qpair failed and we were unable to recover it. 
00:30:34.777 [2024-12-05 12:14:08.846468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.777 [2024-12-05 12:14:08.846501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.777 qpair failed and we were unable to recover it. 00:30:34.777 [2024-12-05 12:14:08.846715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.777 [2024-12-05 12:14:08.846746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.777 qpair failed and we were unable to recover it. 00:30:34.777 [2024-12-05 12:14:08.846946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.777 [2024-12-05 12:14:08.846980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.777 qpair failed and we were unable to recover it. 00:30:34.777 [2024-12-05 12:14:08.847234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.777 [2024-12-05 12:14:08.847269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.777 qpair failed and we were unable to recover it. 00:30:34.777 [2024-12-05 12:14:08.847393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.777 [2024-12-05 12:14:08.847433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.777 qpair failed and we were unable to recover it. 
00:30:34.777 [2024-12-05 12:14:08.847544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.777 [2024-12-05 12:14:08.847575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.777 qpair failed and we were unable to recover it. 00:30:34.777 [2024-12-05 12:14:08.847711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.777 [2024-12-05 12:14:08.847741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.777 qpair failed and we were unable to recover it. 00:30:34.777 [2024-12-05 12:14:08.847861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.777 [2024-12-05 12:14:08.847891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.777 qpair failed and we were unable to recover it. 00:30:34.777 [2024-12-05 12:14:08.848075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.777 [2024-12-05 12:14:08.848105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.777 qpair failed and we were unable to recover it. 00:30:34.777 [2024-12-05 12:14:08.848233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.777 [2024-12-05 12:14:08.848263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.777 qpair failed and we were unable to recover it. 
00:30:34.777 [2024-12-05 12:14:08.848364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.777 [2024-12-05 12:14:08.848403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.777 qpair failed and we were unable to recover it. 00:30:34.777 [2024-12-05 12:14:08.848509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.777 [2024-12-05 12:14:08.848540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.777 qpair failed and we were unable to recover it. 00:30:34.777 [2024-12-05 12:14:08.848721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.777 [2024-12-05 12:14:08.848751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.777 qpair failed and we were unable to recover it. 00:30:34.777 [2024-12-05 12:14:08.848950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.777 [2024-12-05 12:14:08.848981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.777 qpair failed and we were unable to recover it. 00:30:34.777 [2024-12-05 12:14:08.849086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.777 [2024-12-05 12:14:08.849117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:34.777 qpair failed and we were unable to recover it. 
00:30:34.779 [2024-12-05 12:14:08.856678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.779 [2024-12-05 12:14:08.856709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.779 qpair failed and we were unable to recover it.
00:30:34.779 [2024-12-05 12:14:08.856905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.779 [2024-12-05 12:14:08.856937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.779 qpair failed and we were unable to recover it.
00:30:34.779 [2024-12-05 12:14:08.857045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.779 [2024-12-05 12:14:08.857075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.779 qpair failed and we were unable to recover it.
00:30:34.779 [2024-12-05 12:14:08.857179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.779 [2024-12-05 12:14:08.857209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:34.779 qpair failed and we were unable to recover it.
00:30:34.779 [2024-12-05 12:14:08.857385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.779 [2024-12-05 12:14:08.857457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:34.779 qpair failed and we were unable to recover it.
00:30:34.781 [2024-12-05 12:14:08.869548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.781 [2024-12-05 12:14:08.869579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.781 qpair failed and we were unable to recover it. 00:30:34.781 [2024-12-05 12:14:08.869711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.781 [2024-12-05 12:14:08.869741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.781 qpair failed and we were unable to recover it. 00:30:34.781 [2024-12-05 12:14:08.869980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.781 [2024-12-05 12:14:08.870012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.781 qpair failed and we were unable to recover it. 00:30:34.781 [2024-12-05 12:14:08.870122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.781 [2024-12-05 12:14:08.870152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.781 qpair failed and we were unable to recover it. 00:30:34.781 [2024-12-05 12:14:08.870285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.781 [2024-12-05 12:14:08.870314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.781 qpair failed and we were unable to recover it. 
00:30:34.781 [2024-12-05 12:14:08.870423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.781 [2024-12-05 12:14:08.870454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.781 qpair failed and we were unable to recover it. 00:30:34.781 [2024-12-05 12:14:08.870692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.781 [2024-12-05 12:14:08.870724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.781 qpair failed and we were unable to recover it. 00:30:34.781 [2024-12-05 12:14:08.870866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.781 [2024-12-05 12:14:08.870897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.781 qpair failed and we were unable to recover it. 00:30:34.781 [2024-12-05 12:14:08.870998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.781 [2024-12-05 12:14:08.871029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.781 qpair failed and we were unable to recover it. 00:30:34.781 [2024-12-05 12:14:08.871153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.781 [2024-12-05 12:14:08.871184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.781 qpair failed and we were unable to recover it. 
00:30:34.781 [2024-12-05 12:14:08.871425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.781 [2024-12-05 12:14:08.871457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.781 qpair failed and we were unable to recover it. 00:30:34.781 [2024-12-05 12:14:08.871567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.781 [2024-12-05 12:14:08.871602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.781 qpair failed and we were unable to recover it. 00:30:34.781 [2024-12-05 12:14:08.871728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.781 [2024-12-05 12:14:08.871759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.781 qpair failed and we were unable to recover it. 00:30:34.781 [2024-12-05 12:14:08.871951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.781 [2024-12-05 12:14:08.871983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.781 qpair failed and we were unable to recover it. 00:30:34.781 [2024-12-05 12:14:08.872094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.781 [2024-12-05 12:14:08.872130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.781 qpair failed and we were unable to recover it. 
00:30:34.781 [2024-12-05 12:14:08.872252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.781 [2024-12-05 12:14:08.872284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.781 qpair failed and we were unable to recover it. 00:30:34.781 [2024-12-05 12:14:08.872457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.781 [2024-12-05 12:14:08.872490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.781 qpair failed and we were unable to recover it. 00:30:34.781 [2024-12-05 12:14:08.872683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.781 [2024-12-05 12:14:08.872714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.781 qpair failed and we were unable to recover it. 00:30:34.781 [2024-12-05 12:14:08.872819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.781 [2024-12-05 12:14:08.872851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.781 qpair failed and we were unable to recover it. 00:30:34.781 [2024-12-05 12:14:08.873034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.781 [2024-12-05 12:14:08.873066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.781 qpair failed and we were unable to recover it. 
00:30:34.781 [2024-12-05 12:14:08.873184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.781 [2024-12-05 12:14:08.873216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.781 qpair failed and we were unable to recover it. 00:30:34.781 [2024-12-05 12:14:08.873319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.781 [2024-12-05 12:14:08.873351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.781 qpair failed and we were unable to recover it. 00:30:34.781 [2024-12-05 12:14:08.873470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.781 [2024-12-05 12:14:08.873501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.781 qpair failed and we were unable to recover it. 00:30:34.781 [2024-12-05 12:14:08.873606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.781 [2024-12-05 12:14:08.873637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.781 qpair failed and we were unable to recover it. 00:30:34.781 [2024-12-05 12:14:08.873755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.781 [2024-12-05 12:14:08.873786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.781 qpair failed and we were unable to recover it. 
00:30:34.781 [2024-12-05 12:14:08.874023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.781 [2024-12-05 12:14:08.874053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.781 qpair failed and we were unable to recover it. 00:30:34.781 [2024-12-05 12:14:08.874166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.781 [2024-12-05 12:14:08.874196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.782 qpair failed and we were unable to recover it. 00:30:34.782 [2024-12-05 12:14:08.874314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.782 [2024-12-05 12:14:08.874343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.782 qpair failed and we were unable to recover it. 00:30:34.782 [2024-12-05 12:14:08.874536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.782 [2024-12-05 12:14:08.874568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.782 qpair failed and we were unable to recover it. 00:30:34.782 [2024-12-05 12:14:08.874757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.782 [2024-12-05 12:14:08.874786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.782 qpair failed and we were unable to recover it. 
00:30:34.782 [2024-12-05 12:14:08.874974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.782 [2024-12-05 12:14:08.875004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.782 qpair failed and we were unable to recover it. 00:30:34.782 [2024-12-05 12:14:08.875187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.782 [2024-12-05 12:14:08.875219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.782 qpair failed and we were unable to recover it. 00:30:34.782 [2024-12-05 12:14:08.875339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.782 [2024-12-05 12:14:08.875392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.782 qpair failed and we were unable to recover it. 00:30:34.782 [2024-12-05 12:14:08.875581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.782 [2024-12-05 12:14:08.875614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.782 qpair failed and we were unable to recover it. 00:30:34.782 [2024-12-05 12:14:08.875788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.782 [2024-12-05 12:14:08.875819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.782 qpair failed and we were unable to recover it. 
00:30:34.782 [2024-12-05 12:14:08.876004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.782 [2024-12-05 12:14:08.876035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.782 qpair failed and we were unable to recover it. 00:30:34.782 [2024-12-05 12:14:08.876237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.782 [2024-12-05 12:14:08.876269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.782 qpair failed and we were unable to recover it. 00:30:34.782 [2024-12-05 12:14:08.876388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.782 [2024-12-05 12:14:08.876421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.782 qpair failed and we were unable to recover it. 00:30:34.782 [2024-12-05 12:14:08.876541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.782 [2024-12-05 12:14:08.876573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.782 qpair failed and we were unable to recover it. 00:30:34.782 [2024-12-05 12:14:08.876716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.782 [2024-12-05 12:14:08.876748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.782 qpair failed and we were unable to recover it. 
00:30:34.782 [2024-12-05 12:14:08.876923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.782 [2024-12-05 12:14:08.876953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.782 qpair failed and we were unable to recover it. 00:30:34.782 [2024-12-05 12:14:08.877075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.782 [2024-12-05 12:14:08.877107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.782 qpair failed and we were unable to recover it. 00:30:34.782 [2024-12-05 12:14:08.877279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.782 [2024-12-05 12:14:08.877310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.782 qpair failed and we were unable to recover it. 00:30:34.782 [2024-12-05 12:14:08.877564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.782 [2024-12-05 12:14:08.877596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.782 qpair failed and we were unable to recover it. 00:30:34.782 [2024-12-05 12:14:08.877777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.782 [2024-12-05 12:14:08.877808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.782 qpair failed and we were unable to recover it. 
00:30:34.782 [2024-12-05 12:14:08.877924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.782 [2024-12-05 12:14:08.877955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.782 qpair failed and we were unable to recover it. 00:30:34.782 [2024-12-05 12:14:08.878083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.782 [2024-12-05 12:14:08.878115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.782 qpair failed and we were unable to recover it. 00:30:34.782 [2024-12-05 12:14:08.878221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.782 [2024-12-05 12:14:08.878253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.782 qpair failed and we were unable to recover it. 00:30:34.782 [2024-12-05 12:14:08.878359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.782 [2024-12-05 12:14:08.878407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.782 qpair failed and we were unable to recover it. 00:30:34.782 [2024-12-05 12:14:08.878587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.782 [2024-12-05 12:14:08.878618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.782 qpair failed and we were unable to recover it. 
00:30:34.782 [2024-12-05 12:14:08.878749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.782 [2024-12-05 12:14:08.878780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.782 qpair failed and we were unable to recover it. 00:30:34.782 [2024-12-05 12:14:08.878969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.782 [2024-12-05 12:14:08.879000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.782 qpair failed and we were unable to recover it. 00:30:34.782 [2024-12-05 12:14:08.879108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.782 [2024-12-05 12:14:08.879139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.782 qpair failed and we were unable to recover it. 00:30:34.782 [2024-12-05 12:14:08.879249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.782 [2024-12-05 12:14:08.879281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.782 qpair failed and we were unable to recover it. 00:30:34.782 [2024-12-05 12:14:08.879458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.782 [2024-12-05 12:14:08.879497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.782 qpair failed and we were unable to recover it. 
00:30:34.782 [2024-12-05 12:14:08.879701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.782 [2024-12-05 12:14:08.879732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.782 qpair failed and we were unable to recover it. 00:30:34.782 [2024-12-05 12:14:08.879917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.782 [2024-12-05 12:14:08.879949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.782 qpair failed and we were unable to recover it. 00:30:34.783 [2024-12-05 12:14:08.880054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.783 [2024-12-05 12:14:08.880085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.783 qpair failed and we were unable to recover it. 00:30:34.783 [2024-12-05 12:14:08.880228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.783 [2024-12-05 12:14:08.880260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.783 qpair failed and we were unable to recover it. 00:30:34.783 [2024-12-05 12:14:08.880438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.783 [2024-12-05 12:14:08.880474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.783 qpair failed and we were unable to recover it. 
00:30:34.783 [2024-12-05 12:14:08.880662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.783 [2024-12-05 12:14:08.880694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.783 qpair failed and we were unable to recover it. 00:30:34.783 [2024-12-05 12:14:08.880808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.783 [2024-12-05 12:14:08.880838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.783 qpair failed and we were unable to recover it. 00:30:34.783 [2024-12-05 12:14:08.880964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.783 [2024-12-05 12:14:08.880995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.783 qpair failed and we were unable to recover it. 00:30:34.783 [2024-12-05 12:14:08.881119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.783 [2024-12-05 12:14:08.881149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.783 qpair failed and we were unable to recover it. 00:30:34.783 [2024-12-05 12:14:08.881265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.783 [2024-12-05 12:14:08.881295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.783 qpair failed and we were unable to recover it. 
00:30:34.783 [2024-12-05 12:14:08.881414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.783 [2024-12-05 12:14:08.881446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.783 qpair failed and we were unable to recover it. 00:30:34.783 [2024-12-05 12:14:08.881619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.783 [2024-12-05 12:14:08.881650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.783 qpair failed and we were unable to recover it. 00:30:34.783 [2024-12-05 12:14:08.881829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.783 [2024-12-05 12:14:08.881860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.783 qpair failed and we were unable to recover it. 00:30:34.783 [2024-12-05 12:14:08.882103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.783 [2024-12-05 12:14:08.882135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.783 qpair failed and we were unable to recover it. 00:30:34.783 [2024-12-05 12:14:08.882319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.783 [2024-12-05 12:14:08.882350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.783 qpair failed and we were unable to recover it. 
00:30:34.783 [2024-12-05 12:14:08.882533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.783 [2024-12-05 12:14:08.882569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.783 qpair failed and we were unable to recover it. 00:30:34.783 [2024-12-05 12:14:08.882701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.783 [2024-12-05 12:14:08.882732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.783 qpair failed and we were unable to recover it. 00:30:34.783 [2024-12-05 12:14:08.882846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.783 [2024-12-05 12:14:08.882875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.783 qpair failed and we were unable to recover it. 00:30:34.783 [2024-12-05 12:14:08.882982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.783 [2024-12-05 12:14:08.883013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.783 qpair failed and we were unable to recover it. 00:30:34.783 [2024-12-05 12:14:08.883136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.783 [2024-12-05 12:14:08.883166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.783 qpair failed and we were unable to recover it. 
00:30:34.783 [2024-12-05 12:14:08.883380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.783 [2024-12-05 12:14:08.883412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.783 qpair failed and we were unable to recover it. 00:30:34.783 [2024-12-05 12:14:08.883586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.783 [2024-12-05 12:14:08.883618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.783 qpair failed and we were unable to recover it. 00:30:34.783 [2024-12-05 12:14:08.883722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.783 [2024-12-05 12:14:08.883753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.783 qpair failed and we were unable to recover it. 00:30:34.783 [2024-12-05 12:14:08.883923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.783 [2024-12-05 12:14:08.883955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.783 qpair failed and we were unable to recover it. 00:30:34.783 [2024-12-05 12:14:08.884146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.783 [2024-12-05 12:14:08.884178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.783 qpair failed and we were unable to recover it. 
00:30:34.783 [2024-12-05 12:14:08.884388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.783 [2024-12-05 12:14:08.884421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.783 qpair failed and we were unable to recover it. 00:30:34.783 [2024-12-05 12:14:08.884562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.783 [2024-12-05 12:14:08.884594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.783 qpair failed and we were unable to recover it. 00:30:34.783 [2024-12-05 12:14:08.884707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.783 [2024-12-05 12:14:08.884738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.783 qpair failed and we were unable to recover it. 00:30:34.783 [2024-12-05 12:14:08.884976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.783 [2024-12-05 12:14:08.885007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.783 qpair failed and we were unable to recover it. 00:30:34.783 [2024-12-05 12:14:08.885251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.783 [2024-12-05 12:14:08.885283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.783 qpair failed and we were unable to recover it. 
00:30:34.783 [2024-12-05 12:14:08.885457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.783 [2024-12-05 12:14:08.885490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.783 qpair failed and we were unable to recover it. 00:30:34.783 [2024-12-05 12:14:08.885662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.783 [2024-12-05 12:14:08.885694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.783 qpair failed and we were unable to recover it. 00:30:34.783 [2024-12-05 12:14:08.885801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.783 [2024-12-05 12:14:08.885832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.783 qpair failed and we were unable to recover it. 00:30:34.783 [2024-12-05 12:14:08.885956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.783 [2024-12-05 12:14:08.885988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.783 qpair failed and we were unable to recover it. 00:30:34.783 [2024-12-05 12:14:08.886104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.783 [2024-12-05 12:14:08.886135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.783 qpair failed and we were unable to recover it. 
00:30:34.783 [2024-12-05 12:14:08.886243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.783 [2024-12-05 12:14:08.886273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.783 qpair failed and we were unable to recover it. 00:30:34.783 [2024-12-05 12:14:08.886381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.783 [2024-12-05 12:14:08.886412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.783 qpair failed and we were unable to recover it. 00:30:34.783 [2024-12-05 12:14:08.886529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.784 [2024-12-05 12:14:08.886561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.784 qpair failed and we were unable to recover it. 00:30:34.784 [2024-12-05 12:14:08.886799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.784 [2024-12-05 12:14:08.886830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.784 qpair failed and we were unable to recover it. 00:30:34.784 [2024-12-05 12:14:08.887075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.784 [2024-12-05 12:14:08.887112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.784 qpair failed and we were unable to recover it. 
00:30:34.784 [2024-12-05 12:14:08.887290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.784 [2024-12-05 12:14:08.887324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.784 qpair failed and we were unable to recover it. 00:30:34.784 [2024-12-05 12:14:08.887515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.784 [2024-12-05 12:14:08.887548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.784 qpair failed and we were unable to recover it. 00:30:34.784 [2024-12-05 12:14:08.887724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.784 [2024-12-05 12:14:08.887754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.784 qpair failed and we were unable to recover it. 00:30:34.784 [2024-12-05 12:14:08.887867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.784 [2024-12-05 12:14:08.887899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.784 qpair failed and we were unable to recover it. 00:30:34.784 [2024-12-05 12:14:08.888002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.784 [2024-12-05 12:14:08.888033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.784 qpair failed and we were unable to recover it. 
00:30:34.784 [2024-12-05 12:14:08.888267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.784 [2024-12-05 12:14:08.888299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.784 qpair failed and we were unable to recover it. 00:30:34.784 [2024-12-05 12:14:08.888484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.784 [2024-12-05 12:14:08.888517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.784 qpair failed and we were unable to recover it. 00:30:34.784 [2024-12-05 12:14:08.888756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.784 [2024-12-05 12:14:08.888787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.784 qpair failed and we were unable to recover it. 00:30:34.784 [2024-12-05 12:14:08.888962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.784 [2024-12-05 12:14:08.888994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.784 qpair failed and we were unable to recover it. 00:30:34.784 [2024-12-05 12:14:08.889265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.784 [2024-12-05 12:14:08.889297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.784 qpair failed and we were unable to recover it. 
00:30:34.784 [2024-12-05 12:14:08.889537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.784 [2024-12-05 12:14:08.889570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.784 qpair failed and we were unable to recover it. 00:30:34.784 [2024-12-05 12:14:08.889760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.784 [2024-12-05 12:14:08.889792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.784 qpair failed and we were unable to recover it. 00:30:34.784 [2024-12-05 12:14:08.889900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.784 [2024-12-05 12:14:08.889932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.784 qpair failed and we were unable to recover it. 00:30:34.784 [2024-12-05 12:14:08.890044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.784 [2024-12-05 12:14:08.890075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.784 qpair failed and we were unable to recover it. 00:30:34.784 [2024-12-05 12:14:08.890272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.784 [2024-12-05 12:14:08.890305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.784 qpair failed and we were unable to recover it. 
00:30:34.784 [2024-12-05 12:14:08.890421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.784 [2024-12-05 12:14:08.890454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.784 qpair failed and we were unable to recover it. 00:30:34.784 [2024-12-05 12:14:08.890635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.784 [2024-12-05 12:14:08.890667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.784 qpair failed and we were unable to recover it. 00:30:34.784 [2024-12-05 12:14:08.890863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.784 [2024-12-05 12:14:08.890894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.784 qpair failed and we were unable to recover it. 00:30:34.784 [2024-12-05 12:14:08.891001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.784 [2024-12-05 12:14:08.891033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.784 qpair failed and we were unable to recover it. 00:30:34.784 [2024-12-05 12:14:08.891208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.784 [2024-12-05 12:14:08.891239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.784 qpair failed and we were unable to recover it. 
00:30:34.784 [2024-12-05 12:14:08.891416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.784 [2024-12-05 12:14:08.891448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.784 qpair failed and we were unable to recover it. 00:30:34.784 [2024-12-05 12:14:08.891554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.784 [2024-12-05 12:14:08.891585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.784 qpair failed and we were unable to recover it. 00:30:34.784 [2024-12-05 12:14:08.891774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.784 [2024-12-05 12:14:08.891806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.784 qpair failed and we were unable to recover it. 00:30:34.784 [2024-12-05 12:14:08.891919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.784 [2024-12-05 12:14:08.891950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.784 qpair failed and we were unable to recover it. 00:30:34.784 [2024-12-05 12:14:08.892147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.784 [2024-12-05 12:14:08.892179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.784 qpair failed and we were unable to recover it. 
00:30:34.784 [2024-12-05 12:14:08.892304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.784 [2024-12-05 12:14:08.892335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.784 qpair failed and we were unable to recover it. 00:30:34.784 [2024-12-05 12:14:08.892528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.784 [2024-12-05 12:14:08.892560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.784 qpair failed and we were unable to recover it. 00:30:34.784 [2024-12-05 12:14:08.892802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.784 [2024-12-05 12:14:08.892833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.784 qpair failed and we were unable to recover it. 00:30:34.784 [2024-12-05 12:14:08.892941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.784 [2024-12-05 12:14:08.892971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.784 qpair failed and we were unable to recover it. 00:30:34.784 [2024-12-05 12:14:08.893088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.784 [2024-12-05 12:14:08.893121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.784 qpair failed and we were unable to recover it. 
00:30:34.784 [2024-12-05 12:14:08.893238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.784 [2024-12-05 12:14:08.893269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.784 qpair failed and we were unable to recover it. 00:30:34.784 [2024-12-05 12:14:08.893443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.784 [2024-12-05 12:14:08.893476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.784 qpair failed and we were unable to recover it. 00:30:34.784 [2024-12-05 12:14:08.893653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.784 [2024-12-05 12:14:08.893685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.784 qpair failed and we were unable to recover it. 00:30:34.784 [2024-12-05 12:14:08.893787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.784 [2024-12-05 12:14:08.893818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.784 qpair failed and we were unable to recover it. 00:30:34.785 [2024-12-05 12:14:08.893996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.785 [2024-12-05 12:14:08.894027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.785 qpair failed and we were unable to recover it. 
00:30:34.785 [2024-12-05 12:14:08.894199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.785 [2024-12-05 12:14:08.894230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.785 qpair failed and we were unable to recover it. 00:30:34.785 [2024-12-05 12:14:08.894339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.785 [2024-12-05 12:14:08.894378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.785 qpair failed and we were unable to recover it. 00:30:34.785 [2024-12-05 12:14:08.894588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.785 [2024-12-05 12:14:08.894620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.785 qpair failed and we were unable to recover it. 00:30:34.785 [2024-12-05 12:14:08.894756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.785 [2024-12-05 12:14:08.894789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.785 qpair failed and we were unable to recover it. 00:30:34.785 [2024-12-05 12:14:08.894897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.785 [2024-12-05 12:14:08.894939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.785 qpair failed and we were unable to recover it. 
00:30:34.785 [2024-12-05 12:14:08.895118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.785 [2024-12-05 12:14:08.895149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.785 qpair failed and we were unable to recover it. 00:30:34.785 [2024-12-05 12:14:08.895268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.785 [2024-12-05 12:14:08.895300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.785 qpair failed and we were unable to recover it. 00:30:34.785 [2024-12-05 12:14:08.895536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.785 [2024-12-05 12:14:08.895568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.785 qpair failed and we were unable to recover it. 00:30:34.785 [2024-12-05 12:14:08.895671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.785 [2024-12-05 12:14:08.895703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.785 qpair failed and we were unable to recover it. 00:30:34.785 [2024-12-05 12:14:08.895958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.785 [2024-12-05 12:14:08.895989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.785 qpair failed and we were unable to recover it. 
00:30:34.785 [2024-12-05 12:14:08.896231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.785 [2024-12-05 12:14:08.896261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.785 qpair failed and we were unable to recover it. 00:30:34.785 [2024-12-05 12:14:08.896392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.785 [2024-12-05 12:14:08.896424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.785 qpair failed and we were unable to recover it. 00:30:34.785 [2024-12-05 12:14:08.896608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.785 [2024-12-05 12:14:08.896640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.785 qpair failed and we were unable to recover it. 00:30:34.785 [2024-12-05 12:14:08.896828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.785 [2024-12-05 12:14:08.896860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.785 qpair failed and we were unable to recover it. 00:30:34.785 [2024-12-05 12:14:08.897098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.785 [2024-12-05 12:14:08.897130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.785 qpair failed and we were unable to recover it. 
00:30:34.785 [2024-12-05 12:14:08.897238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.785 [2024-12-05 12:14:08.897269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.785 qpair failed and we were unable to recover it. 00:30:34.785 [2024-12-05 12:14:08.897459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.785 [2024-12-05 12:14:08.897492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.785 qpair failed and we were unable to recover it. 00:30:34.785 [2024-12-05 12:14:08.897612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.785 [2024-12-05 12:14:08.897643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.785 qpair failed and we were unable to recover it. 00:30:34.785 [2024-12-05 12:14:08.897822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.785 [2024-12-05 12:14:08.897854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.785 qpair failed and we were unable to recover it. 00:30:34.785 [2024-12-05 12:14:08.897976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.785 [2024-12-05 12:14:08.898008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.785 qpair failed and we were unable to recover it. 
00:30:34.785 [2024-12-05 12:14:08.898183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.785 [2024-12-05 12:14:08.898214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.785 qpair failed and we were unable to recover it. 00:30:34.785 [2024-12-05 12:14:08.898458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.785 [2024-12-05 12:14:08.898491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.785 qpair failed and we were unable to recover it. 00:30:34.785 [2024-12-05 12:14:08.898600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.785 [2024-12-05 12:14:08.898632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.785 qpair failed and we were unable to recover it. 00:30:34.785 [2024-12-05 12:14:08.898881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.785 [2024-12-05 12:14:08.898912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.785 qpair failed and we were unable to recover it. 00:30:34.785 [2024-12-05 12:14:08.899096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.785 [2024-12-05 12:14:08.899128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.785 qpair failed and we were unable to recover it. 
00:30:34.785 [2024-12-05 12:14:08.899417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.785 [2024-12-05 12:14:08.899450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.785 qpair failed and we were unable to recover it. 00:30:34.785 [2024-12-05 12:14:08.899595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.785 [2024-12-05 12:14:08.899628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.785 qpair failed and we were unable to recover it. 00:30:34.785 [2024-12-05 12:14:08.899743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.785 [2024-12-05 12:14:08.899774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.785 qpair failed and we were unable to recover it. 00:30:34.785 [2024-12-05 12:14:08.899949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.785 [2024-12-05 12:14:08.899982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.785 qpair failed and we were unable to recover it. 00:30:34.785 [2024-12-05 12:14:08.900178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.785 [2024-12-05 12:14:08.900210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.785 qpair failed and we were unable to recover it. 
00:30:34.785 [2024-12-05 12:14:08.900403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.785 [2024-12-05 12:14:08.900435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.785 qpair failed and we were unable to recover it. 00:30:34.785 [2024-12-05 12:14:08.900624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.785 [2024-12-05 12:14:08.900655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.785 qpair failed and we were unable to recover it. 00:30:34.785 [2024-12-05 12:14:08.900834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.785 [2024-12-05 12:14:08.900865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.785 qpair failed and we were unable to recover it. 00:30:34.785 [2024-12-05 12:14:08.901048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.785 [2024-12-05 12:14:08.901080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.785 qpair failed and we were unable to recover it. 00:30:34.785 [2024-12-05 12:14:08.901182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.785 [2024-12-05 12:14:08.901212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.785 qpair failed and we were unable to recover it. 
00:30:34.785 [2024-12-05 12:14:08.901390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.785 [2024-12-05 12:14:08.901425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.786 qpair failed and we were unable to recover it. 00:30:34.786 [2024-12-05 12:14:08.901547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.786 [2024-12-05 12:14:08.901578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.786 qpair failed and we were unable to recover it. 00:30:34.786 [2024-12-05 12:14:08.901751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.786 [2024-12-05 12:14:08.901783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.786 qpair failed and we were unable to recover it. 00:30:34.786 [2024-12-05 12:14:08.901921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.786 [2024-12-05 12:14:08.901953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.786 qpair failed and we were unable to recover it. 00:30:34.786 [2024-12-05 12:14:08.902057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.786 [2024-12-05 12:14:08.902089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:34.786 qpair failed and we were unable to recover it. 
00:30:34.786 [2024-12-05 12:14:08.902344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.786 [2024-12-05 12:14:08.902430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.786 qpair failed and we were unable to recover it. 00:30:34.786 [2024-12-05 12:14:08.902636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.786 [2024-12-05 12:14:08.902674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.786 qpair failed and we were unable to recover it. 00:30:34.786 [2024-12-05 12:14:08.902920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.786 [2024-12-05 12:14:08.902952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.786 qpair failed and we were unable to recover it. 00:30:34.786 [2024-12-05 12:14:08.903133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.786 [2024-12-05 12:14:08.903165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.786 qpair failed and we were unable to recover it. 00:30:34.786 [2024-12-05 12:14:08.903356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.786 [2024-12-05 12:14:08.903410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.786 qpair failed and we were unable to recover it. 
00:30:34.786 [2024-12-05 12:14:08.903592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.786 [2024-12-05 12:14:08.903624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.786 qpair failed and we were unable to recover it. 00:30:34.786 [2024-12-05 12:14:08.903809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.786 [2024-12-05 12:14:08.903842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.786 qpair failed and we were unable to recover it. 00:30:34.786 [2024-12-05 12:14:08.904032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.786 [2024-12-05 12:14:08.904064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.786 qpair failed and we were unable to recover it. 00:30:34.786 [2024-12-05 12:14:08.904250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.786 [2024-12-05 12:14:08.904281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.786 qpair failed and we were unable to recover it. 00:30:34.786 [2024-12-05 12:14:08.904393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.786 [2024-12-05 12:14:08.904440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:34.786 qpair failed and we were unable to recover it. 
00:30:34.786 [2024-12-05 12:14:08.904557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.786 [2024-12-05 12:14:08.904588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.786 qpair failed and we were unable to recover it.
00:30:34.786 [2024-12-05 12:14:08.904782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.786 [2024-12-05 12:14:08.904814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.786 qpair failed and we were unable to recover it.
00:30:34.786 [2024-12-05 12:14:08.904919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.786 [2024-12-05 12:14:08.904950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.786 qpair failed and we were unable to recover it.
00:30:34.786 [2024-12-05 12:14:08.905134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.786 [2024-12-05 12:14:08.905166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.786 qpair failed and we were unable to recover it.
00:30:34.786 [2024-12-05 12:14:08.905353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.786 [2024-12-05 12:14:08.905394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.786 qpair failed and we were unable to recover it.
00:30:34.786 [2024-12-05 12:14:08.905509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.786 [2024-12-05 12:14:08.905541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.786 qpair failed and we were unable to recover it.
00:30:34.786 [2024-12-05 12:14:08.905654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.786 [2024-12-05 12:14:08.905686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.786 qpair failed and we were unable to recover it.
00:30:34.786 [2024-12-05 12:14:08.905859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.786 [2024-12-05 12:14:08.905890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.786 qpair failed and we were unable to recover it.
00:30:34.786 [2024-12-05 12:14:08.906027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.786 [2024-12-05 12:14:08.906059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.786 qpair failed and we were unable to recover it.
00:30:34.786 [2024-12-05 12:14:08.906242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.786 [2024-12-05 12:14:08.906273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.786 qpair failed and we were unable to recover it.
00:30:34.786 [2024-12-05 12:14:08.906448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.786 [2024-12-05 12:14:08.906480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.786 qpair failed and we were unable to recover it.
00:30:34.786 [2024-12-05 12:14:08.906661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.786 [2024-12-05 12:14:08.906692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.786 qpair failed and we were unable to recover it.
00:30:34.786 [2024-12-05 12:14:08.906860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.786 [2024-12-05 12:14:08.906890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.786 qpair failed and we were unable to recover it.
00:30:34.786 [2024-12-05 12:14:08.907011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.786 [2024-12-05 12:14:08.907043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.786 qpair failed and we were unable to recover it.
00:30:34.786 [2024-12-05 12:14:08.907278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.786 [2024-12-05 12:14:08.907310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.786 qpair failed and we were unable to recover it.
00:30:34.786 [2024-12-05 12:14:08.907586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.786 [2024-12-05 12:14:08.907621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.786 qpair failed and we were unable to recover it.
00:30:34.786 [2024-12-05 12:14:08.907739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.786 [2024-12-05 12:14:08.907770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.786 qpair failed and we were unable to recover it.
00:30:34.786 [2024-12-05 12:14:08.907885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.787 [2024-12-05 12:14:08.907916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.787 qpair failed and we were unable to recover it.
00:30:34.787 [2024-12-05 12:14:08.908159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.787 [2024-12-05 12:14:08.908191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.787 qpair failed and we were unable to recover it.
00:30:34.787 [2024-12-05 12:14:08.908305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.787 [2024-12-05 12:14:08.908336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.787 qpair failed and we were unable to recover it.
00:30:34.787 [2024-12-05 12:14:08.908473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.787 [2024-12-05 12:14:08.908507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.787 qpair failed and we were unable to recover it.
00:30:34.787 [2024-12-05 12:14:08.908615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.787 [2024-12-05 12:14:08.908646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.787 qpair failed and we were unable to recover it.
00:30:34.787 [2024-12-05 12:14:08.908768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.787 [2024-12-05 12:14:08.908801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.787 qpair failed and we were unable to recover it.
00:30:34.787 [2024-12-05 12:14:08.908993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.787 [2024-12-05 12:14:08.909025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.787 qpair failed and we were unable to recover it.
00:30:34.787 [2024-12-05 12:14:08.909140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.787 [2024-12-05 12:14:08.909171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.787 qpair failed and we were unable to recover it.
00:30:34.787 [2024-12-05 12:14:08.909275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.787 [2024-12-05 12:14:08.909308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.787 qpair failed and we were unable to recover it.
00:30:34.787 [2024-12-05 12:14:08.909499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.787 [2024-12-05 12:14:08.909532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.787 qpair failed and we were unable to recover it.
00:30:34.787 [2024-12-05 12:14:08.909645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.787 [2024-12-05 12:14:08.909678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.787 qpair failed and we were unable to recover it.
00:30:34.787 [2024-12-05 12:14:08.909801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.787 [2024-12-05 12:14:08.909833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.787 qpair failed and we were unable to recover it.
00:30:34.787 [2024-12-05 12:14:08.909954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.787 [2024-12-05 12:14:08.909986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.787 qpair failed and we were unable to recover it.
00:30:34.787 [2024-12-05 12:14:08.910120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.787 [2024-12-05 12:14:08.910152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.787 qpair failed and we were unable to recover it.
00:30:34.787 [2024-12-05 12:14:08.910271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.787 [2024-12-05 12:14:08.910304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.787 qpair failed and we were unable to recover it.
00:30:34.787 [2024-12-05 12:14:08.910582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.787 [2024-12-05 12:14:08.910615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.787 qpair failed and we were unable to recover it.
00:30:34.787 [2024-12-05 12:14:08.910806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.787 [2024-12-05 12:14:08.910838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.787 qpair failed and we were unable to recover it.
00:30:34.787 [2024-12-05 12:14:08.910960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.787 [2024-12-05 12:14:08.910993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.787 qpair failed and we were unable to recover it.
00:30:34.787 [2024-12-05 12:14:08.911182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.787 [2024-12-05 12:14:08.911218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:34.787 qpair failed and we were unable to recover it.
00:30:35.071 [2024-12-05 12:14:08.911412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.071 [2024-12-05 12:14:08.911446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.071 qpair failed and we were unable to recover it.
00:30:35.071 [2024-12-05 12:14:08.911581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.071 [2024-12-05 12:14:08.911614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.071 qpair failed and we were unable to recover it.
00:30:35.071 [2024-12-05 12:14:08.911794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.071 [2024-12-05 12:14:08.911826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.071 qpair failed and we were unable to recover it.
00:30:35.071 [2024-12-05 12:14:08.911998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.071 [2024-12-05 12:14:08.912032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.071 qpair failed and we were unable to recover it.
00:30:35.071 [2024-12-05 12:14:08.912263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.071 [2024-12-05 12:14:08.912294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.071 qpair failed and we were unable to recover it.
00:30:35.071 [2024-12-05 12:14:08.912411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.071 [2024-12-05 12:14:08.912443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.071 qpair failed and we were unable to recover it.
00:30:35.071 [2024-12-05 12:14:08.912631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.071 [2024-12-05 12:14:08.912663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.071 qpair failed and we were unable to recover it.
00:30:35.071 [2024-12-05 12:14:08.912786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.071 [2024-12-05 12:14:08.912817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.071 qpair failed and we were unable to recover it.
00:30:35.071 [2024-12-05 12:14:08.912939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.071 [2024-12-05 12:14:08.912970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.071 qpair failed and we were unable to recover it.
00:30:35.071 [2024-12-05 12:14:08.913160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.071 [2024-12-05 12:14:08.913192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.071 qpair failed and we were unable to recover it.
00:30:35.071 [2024-12-05 12:14:08.913457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.071 [2024-12-05 12:14:08.913490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.071 qpair failed and we were unable to recover it.
00:30:35.071 [2024-12-05 12:14:08.913689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.071 [2024-12-05 12:14:08.913720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.071 qpair failed and we were unable to recover it.
00:30:35.072 [2024-12-05 12:14:08.913834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.072 [2024-12-05 12:14:08.913864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.072 qpair failed and we were unable to recover it.
00:30:35.072 [2024-12-05 12:14:08.913990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.072 [2024-12-05 12:14:08.914023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.072 qpair failed and we were unable to recover it.
00:30:35.072 [2024-12-05 12:14:08.914151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.072 [2024-12-05 12:14:08.914183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.072 qpair failed and we were unable to recover it.
00:30:35.072 [2024-12-05 12:14:08.914397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.072 [2024-12-05 12:14:08.914430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.072 qpair failed and we were unable to recover it.
00:30:35.072 [2024-12-05 12:14:08.914551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.072 [2024-12-05 12:14:08.914582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.072 qpair failed and we were unable to recover it.
00:30:35.072 [2024-12-05 12:14:08.914780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.072 [2024-12-05 12:14:08.914813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.072 qpair failed and we were unable to recover it.
00:30:35.072 [2024-12-05 12:14:08.914943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.072 [2024-12-05 12:14:08.914976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.072 qpair failed and we were unable to recover it.
00:30:35.072 [2024-12-05 12:14:08.915175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.072 [2024-12-05 12:14:08.915208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.072 qpair failed and we were unable to recover it.
00:30:35.072 [2024-12-05 12:14:08.915344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.072 [2024-12-05 12:14:08.915387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.072 qpair failed and we were unable to recover it.
00:30:35.072 [2024-12-05 12:14:08.915505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.072 [2024-12-05 12:14:08.915538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.072 qpair failed and we were unable to recover it.
00:30:35.072 [2024-12-05 12:14:08.915647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.072 [2024-12-05 12:14:08.915679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.072 qpair failed and we were unable to recover it.
00:30:35.072 [2024-12-05 12:14:08.915807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.072 [2024-12-05 12:14:08.915838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.072 qpair failed and we were unable to recover it.
00:30:35.072 [2024-12-05 12:14:08.916149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.072 [2024-12-05 12:14:08.916183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.072 qpair failed and we were unable to recover it.
00:30:35.072 [2024-12-05 12:14:08.916366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.072 [2024-12-05 12:14:08.916405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.072 qpair failed and we were unable to recover it.
00:30:35.072 [2024-12-05 12:14:08.916530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.072 [2024-12-05 12:14:08.916567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.072 qpair failed and we were unable to recover it.
00:30:35.072 [2024-12-05 12:14:08.916809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.072 [2024-12-05 12:14:08.916840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.072 qpair failed and we were unable to recover it.
00:30:35.072 [2024-12-05 12:14:08.916964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.072 [2024-12-05 12:14:08.916998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.072 qpair failed and we were unable to recover it.
00:30:35.072 [2024-12-05 12:14:08.917228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.072 [2024-12-05 12:14:08.917258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.072 qpair failed and we were unable to recover it.
00:30:35.072 [2024-12-05 12:14:08.917384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.072 [2024-12-05 12:14:08.917417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.072 qpair failed and we were unable to recover it.
00:30:35.072 [2024-12-05 12:14:08.917539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.072 [2024-12-05 12:14:08.917571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.072 qpair failed and we were unable to recover it.
00:30:35.072 [2024-12-05 12:14:08.917782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.072 [2024-12-05 12:14:08.917814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.072 qpair failed and we were unable to recover it.
00:30:35.072 [2024-12-05 12:14:08.918009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.072 [2024-12-05 12:14:08.918041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.072 qpair failed and we were unable to recover it.
00:30:35.072 [2024-12-05 12:14:08.918174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.072 [2024-12-05 12:14:08.918206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.072 qpair failed and we were unable to recover it.
00:30:35.072 [2024-12-05 12:14:08.918326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.072 [2024-12-05 12:14:08.918357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.072 qpair failed and we were unable to recover it.
00:30:35.072 [2024-12-05 12:14:08.918479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.072 [2024-12-05 12:14:08.918513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.072 qpair failed and we were unable to recover it.
00:30:35.072 [2024-12-05 12:14:08.918618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.072 [2024-12-05 12:14:08.918649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.072 qpair failed and we were unable to recover it.
00:30:35.072 [2024-12-05 12:14:08.918868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.072 [2024-12-05 12:14:08.918901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.072 qpair failed and we were unable to recover it.
00:30:35.072 [2024-12-05 12:14:08.919081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.072 [2024-12-05 12:14:08.919115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.072 qpair failed and we were unable to recover it.
00:30:35.072 [2024-12-05 12:14:08.919266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.072 [2024-12-05 12:14:08.919337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.073 qpair failed and we were unable to recover it.
00:30:35.073 [2024-12-05 12:14:08.919582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.073 [2024-12-05 12:14:08.919619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.073 qpair failed and we were unable to recover it.
00:30:35.073 [2024-12-05 12:14:08.919799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.073 [2024-12-05 12:14:08.919831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.073 qpair failed and we were unable to recover it.
00:30:35.073 [2024-12-05 12:14:08.920072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.073 [2024-12-05 12:14:08.920106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.073 qpair failed and we were unable to recover it.
00:30:35.073 [2024-12-05 12:14:08.920233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.073 [2024-12-05 12:14:08.920263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.073 qpair failed and we were unable to recover it.
00:30:35.073 [2024-12-05 12:14:08.920385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.073 [2024-12-05 12:14:08.920417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.073 qpair failed and we were unable to recover it.
00:30:35.073 [2024-12-05 12:14:08.920530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.073 [2024-12-05 12:14:08.920561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.073 qpair failed and we were unable to recover it.
00:30:35.073 [2024-12-05 12:14:08.920735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.073 [2024-12-05 12:14:08.920767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.073 qpair failed and we were unable to recover it.
00:30:35.073 [2024-12-05 12:14:08.921009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.073 [2024-12-05 12:14:08.921041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.073 qpair failed and we were unable to recover it.
00:30:35.073 [2024-12-05 12:14:08.921165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.073 [2024-12-05 12:14:08.921197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.073 qpair failed and we were unable to recover it.
00:30:35.073 [2024-12-05 12:14:08.921332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.073 [2024-12-05 12:14:08.921364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.073 qpair failed and we were unable to recover it.
00:30:35.073 [2024-12-05 12:14:08.921506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.073 [2024-12-05 12:14:08.921538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.073 qpair failed and we were unable to recover it.
00:30:35.073 [2024-12-05 12:14:08.921664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.073 [2024-12-05 12:14:08.921696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.073 qpair failed and we were unable to recover it.
00:30:35.073 [2024-12-05 12:14:08.921804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.073 [2024-12-05 12:14:08.921844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.073 qpair failed and we were unable to recover it.
00:30:35.073 [2024-12-05 12:14:08.922036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.073 [2024-12-05 12:14:08.922068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.073 qpair failed and we were unable to recover it.
00:30:35.073 [2024-12-05 12:14:08.922187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.073 [2024-12-05 12:14:08.922219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.073 qpair failed and we were unable to recover it.
00:30:35.073 [2024-12-05 12:14:08.922326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.073 [2024-12-05 12:14:08.922356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.073 qpair failed and we were unable to recover it.
00:30:35.073 [2024-12-05 12:14:08.922552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.073 [2024-12-05 12:14:08.922583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.073 qpair failed and we were unable to recover it.
00:30:35.073 [2024-12-05 12:14:08.922697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.073 [2024-12-05 12:14:08.922728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.073 qpair failed and we were unable to recover it.
00:30:35.073 [2024-12-05 12:14:08.922932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.073 [2024-12-05 12:14:08.922963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.073 qpair failed and we were unable to recover it. 00:30:35.073 [2024-12-05 12:14:08.923083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.073 [2024-12-05 12:14:08.923114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.073 qpair failed and we were unable to recover it. 00:30:35.073 [2024-12-05 12:14:08.923330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.073 [2024-12-05 12:14:08.923363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.073 qpair failed and we were unable to recover it. 00:30:35.073 [2024-12-05 12:14:08.923501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.073 [2024-12-05 12:14:08.923533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.073 qpair failed and we were unable to recover it. 00:30:35.073 [2024-12-05 12:14:08.923718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.073 [2024-12-05 12:14:08.923750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.073 qpair failed and we were unable to recover it. 
00:30:35.073 [2024-12-05 12:14:08.923866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.073 [2024-12-05 12:14:08.923898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.073 qpair failed and we were unable to recover it. 00:30:35.073 [2024-12-05 12:14:08.924009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.073 [2024-12-05 12:14:08.924040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.073 qpair failed and we were unable to recover it. 00:30:35.073 [2024-12-05 12:14:08.924164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.073 [2024-12-05 12:14:08.924196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.073 qpair failed and we were unable to recover it. 00:30:35.073 [2024-12-05 12:14:08.924396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.073 [2024-12-05 12:14:08.924430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.073 qpair failed and we were unable to recover it. 00:30:35.073 [2024-12-05 12:14:08.924551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.073 [2024-12-05 12:14:08.924583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.073 qpair failed and we were unable to recover it. 
00:30:35.073 [2024-12-05 12:14:08.924698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.074 [2024-12-05 12:14:08.924730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.074 qpair failed and we were unable to recover it. 00:30:35.074 [2024-12-05 12:14:08.924850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.074 [2024-12-05 12:14:08.924881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.074 qpair failed and we were unable to recover it. 00:30:35.074 [2024-12-05 12:14:08.924987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.074 [2024-12-05 12:14:08.925020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.074 qpair failed and we were unable to recover it. 00:30:35.074 [2024-12-05 12:14:08.925130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.074 [2024-12-05 12:14:08.925161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.074 qpair failed and we were unable to recover it. 00:30:35.074 [2024-12-05 12:14:08.925283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.074 [2024-12-05 12:14:08.925313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.074 qpair failed and we were unable to recover it. 
00:30:35.074 [2024-12-05 12:14:08.925428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.074 [2024-12-05 12:14:08.925460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.074 qpair failed and we were unable to recover it. 00:30:35.074 [2024-12-05 12:14:08.925634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.074 [2024-12-05 12:14:08.925666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.074 qpair failed and we were unable to recover it. 00:30:35.074 [2024-12-05 12:14:08.925790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.074 [2024-12-05 12:14:08.925821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.074 qpair failed and we were unable to recover it. 00:30:35.074 [2024-12-05 12:14:08.926107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.074 [2024-12-05 12:14:08.926139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.074 qpair failed and we were unable to recover it. 00:30:35.074 [2024-12-05 12:14:08.926253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.074 [2024-12-05 12:14:08.926285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.074 qpair failed and we were unable to recover it. 
00:30:35.074 [2024-12-05 12:14:08.926391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.074 [2024-12-05 12:14:08.926423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.074 qpair failed and we were unable to recover it. 00:30:35.074 [2024-12-05 12:14:08.926624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.074 [2024-12-05 12:14:08.926656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.074 qpair failed and we were unable to recover it. 00:30:35.074 [2024-12-05 12:14:08.926885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.074 [2024-12-05 12:14:08.926916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.074 qpair failed and we were unable to recover it. 00:30:35.074 [2024-12-05 12:14:08.927031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.074 [2024-12-05 12:14:08.927063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.074 qpair failed and we were unable to recover it. 00:30:35.074 [2024-12-05 12:14:08.927186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.074 [2024-12-05 12:14:08.927217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.074 qpair failed and we were unable to recover it. 
00:30:35.074 [2024-12-05 12:14:08.927417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.074 [2024-12-05 12:14:08.927449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.074 qpair failed and we were unable to recover it. 00:30:35.074 [2024-12-05 12:14:08.927566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.074 [2024-12-05 12:14:08.927597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.074 qpair failed and we were unable to recover it. 00:30:35.074 [2024-12-05 12:14:08.927718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.074 [2024-12-05 12:14:08.927748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.074 qpair failed and we were unable to recover it. 00:30:35.074 [2024-12-05 12:14:08.927991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.074 [2024-12-05 12:14:08.928024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.074 qpair failed and we were unable to recover it. 00:30:35.074 [2024-12-05 12:14:08.928142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.074 [2024-12-05 12:14:08.928176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.074 qpair failed and we were unable to recover it. 
00:30:35.074 [2024-12-05 12:14:08.928361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.074 [2024-12-05 12:14:08.928421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.074 qpair failed and we were unable to recover it. 00:30:35.074 [2024-12-05 12:14:08.928532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.074 [2024-12-05 12:14:08.928565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.074 qpair failed and we were unable to recover it. 00:30:35.074 [2024-12-05 12:14:08.928670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.074 [2024-12-05 12:14:08.928701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.074 qpair failed and we were unable to recover it. 00:30:35.074 [2024-12-05 12:14:08.928876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.074 [2024-12-05 12:14:08.928909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.074 qpair failed and we were unable to recover it. 00:30:35.074 [2024-12-05 12:14:08.929090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.074 [2024-12-05 12:14:08.929127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.074 qpair failed and we were unable to recover it. 
00:30:35.074 [2024-12-05 12:14:08.929310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.074 [2024-12-05 12:14:08.929343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.074 qpair failed and we were unable to recover it. 00:30:35.074 [2024-12-05 12:14:08.929469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.074 [2024-12-05 12:14:08.929502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.074 qpair failed and we were unable to recover it. 00:30:35.074 [2024-12-05 12:14:08.929612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.074 [2024-12-05 12:14:08.929645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.074 qpair failed and we were unable to recover it. 00:30:35.074 [2024-12-05 12:14:08.929758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.075 [2024-12-05 12:14:08.929789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.075 qpair failed and we were unable to recover it. 00:30:35.075 [2024-12-05 12:14:08.929897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.075 [2024-12-05 12:14:08.929929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.075 qpair failed and we were unable to recover it. 
00:30:35.075 [2024-12-05 12:14:08.930040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.075 [2024-12-05 12:14:08.930072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.075 qpair failed and we were unable to recover it. 00:30:35.075 [2024-12-05 12:14:08.930175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.075 [2024-12-05 12:14:08.930206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.075 qpair failed and we were unable to recover it. 00:30:35.075 [2024-12-05 12:14:08.930315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.075 [2024-12-05 12:14:08.930347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.075 qpair failed and we were unable to recover it. 00:30:35.075 [2024-12-05 12:14:08.930499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.075 [2024-12-05 12:14:08.930531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.075 qpair failed and we were unable to recover it. 00:30:35.075 [2024-12-05 12:14:08.930732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.075 [2024-12-05 12:14:08.930763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.075 qpair failed and we were unable to recover it. 
00:30:35.075 [2024-12-05 12:14:08.930868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.075 [2024-12-05 12:14:08.930902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.075 qpair failed and we were unable to recover it. 00:30:35.075 [2024-12-05 12:14:08.931003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.075 [2024-12-05 12:14:08.931033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.075 qpair failed and we were unable to recover it. 00:30:35.075 [2024-12-05 12:14:08.931219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.075 [2024-12-05 12:14:08.931249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.075 qpair failed and we were unable to recover it. 00:30:35.075 [2024-12-05 12:14:08.931387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.075 [2024-12-05 12:14:08.931421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.075 qpair failed and we were unable to recover it. 00:30:35.075 [2024-12-05 12:14:08.931608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.075 [2024-12-05 12:14:08.931640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.075 qpair failed and we were unable to recover it. 
00:30:35.075 [2024-12-05 12:14:08.931771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.075 [2024-12-05 12:14:08.931804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.075 qpair failed and we were unable to recover it. 00:30:35.075 [2024-12-05 12:14:08.931917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.075 [2024-12-05 12:14:08.931950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.075 qpair failed and we were unable to recover it. 00:30:35.075 [2024-12-05 12:14:08.932090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.075 [2024-12-05 12:14:08.932122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.075 qpair failed and we were unable to recover it. 00:30:35.075 [2024-12-05 12:14:08.932243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.075 [2024-12-05 12:14:08.932275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.075 qpair failed and we were unable to recover it. 00:30:35.075 [2024-12-05 12:14:08.932388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.075 [2024-12-05 12:14:08.932421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.075 qpair failed and we were unable to recover it. 
00:30:35.075 [2024-12-05 12:14:08.932636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.075 [2024-12-05 12:14:08.932668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.075 qpair failed and we were unable to recover it. 00:30:35.075 [2024-12-05 12:14:08.932772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.075 [2024-12-05 12:14:08.932804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.075 qpair failed and we were unable to recover it. 00:30:35.075 [2024-12-05 12:14:08.932979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.075 [2024-12-05 12:14:08.933011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.075 qpair failed and we were unable to recover it. 00:30:35.075 [2024-12-05 12:14:08.933140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.075 [2024-12-05 12:14:08.933172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.075 qpair failed and we were unable to recover it. 00:30:35.075 [2024-12-05 12:14:08.933294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.075 [2024-12-05 12:14:08.933326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.075 qpair failed and we were unable to recover it. 
00:30:35.075 [2024-12-05 12:14:08.933446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.075 [2024-12-05 12:14:08.933478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.075 qpair failed and we were unable to recover it. 00:30:35.075 [2024-12-05 12:14:08.933612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.075 [2024-12-05 12:14:08.933644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.075 qpair failed and we were unable to recover it. 00:30:35.075 [2024-12-05 12:14:08.933755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.075 [2024-12-05 12:14:08.933787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.075 qpair failed and we were unable to recover it. 00:30:35.075 [2024-12-05 12:14:08.933892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.075 [2024-12-05 12:14:08.933923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.075 qpair failed and we were unable to recover it. 00:30:35.075 [2024-12-05 12:14:08.934100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.075 [2024-12-05 12:14:08.934132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.075 qpair failed and we were unable to recover it. 
00:30:35.075 [2024-12-05 12:14:08.934302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.075 [2024-12-05 12:14:08.934335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.076 qpair failed and we were unable to recover it. 00:30:35.076 [2024-12-05 12:14:08.934463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.076 [2024-12-05 12:14:08.934495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.076 qpair failed and we were unable to recover it. 00:30:35.076 [2024-12-05 12:14:08.934666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.076 [2024-12-05 12:14:08.934697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.076 qpair failed and we were unable to recover it. 00:30:35.076 [2024-12-05 12:14:08.934805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.076 [2024-12-05 12:14:08.934837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.076 qpair failed and we were unable to recover it. 00:30:35.076 [2024-12-05 12:14:08.935005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.076 [2024-12-05 12:14:08.935040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.076 qpair failed and we were unable to recover it. 
00:30:35.076 [2024-12-05 12:14:08.935167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.076 [2024-12-05 12:14:08.935198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.076 qpair failed and we were unable to recover it. 00:30:35.076 [2024-12-05 12:14:08.935322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.076 [2024-12-05 12:14:08.935353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.076 qpair failed and we were unable to recover it. 00:30:35.076 [2024-12-05 12:14:08.935490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.076 [2024-12-05 12:14:08.935522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.076 qpair failed and we were unable to recover it. 00:30:35.076 [2024-12-05 12:14:08.935638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.076 [2024-12-05 12:14:08.935667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.076 qpair failed and we were unable to recover it. 00:30:35.076 [2024-12-05 12:14:08.937068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.076 [2024-12-05 12:14:08.937128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.076 qpair failed and we were unable to recover it. 
00:30:35.076 [2024-12-05 12:14:08.937336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.076 [2024-12-05 12:14:08.937383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.076 qpair failed and we were unable to recover it. 00:30:35.076 [2024-12-05 12:14:08.937630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.076 [2024-12-05 12:14:08.937666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.076 qpair failed and we were unable to recover it. 00:30:35.076 [2024-12-05 12:14:08.937852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.076 [2024-12-05 12:14:08.937883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.076 qpair failed and we were unable to recover it. 00:30:35.076 [2024-12-05 12:14:08.938125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.076 [2024-12-05 12:14:08.938158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.076 qpair failed and we were unable to recover it. 00:30:35.076 [2024-12-05 12:14:08.938356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.076 [2024-12-05 12:14:08.938402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.076 qpair failed and we were unable to recover it. 
00:30:35.076 [2024-12-05 12:14:08.938653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.076 [2024-12-05 12:14:08.938685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.076 qpair failed and we were unable to recover it. 00:30:35.076 [2024-12-05 12:14:08.938933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.076 [2024-12-05 12:14:08.938966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.076 qpair failed and we were unable to recover it. 00:30:35.076 [2024-12-05 12:14:08.939073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.076 [2024-12-05 12:14:08.939102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.076 qpair failed and we were unable to recover it. 00:30:35.076 [2024-12-05 12:14:08.939209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.076 [2024-12-05 12:14:08.939239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.076 qpair failed and we were unable to recover it. 00:30:35.076 [2024-12-05 12:14:08.939363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.076 [2024-12-05 12:14:08.939404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.076 qpair failed and we were unable to recover it. 
00:30:35.076 [2024-12-05 12:14:08.939588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.076 [2024-12-05 12:14:08.939621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.076 qpair failed and we were unable to recover it.
00:30:35.076 [2024-12-05 12:14:08.941344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.077 [2024-12-05 12:14:08.941425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.077 qpair failed and we were unable to recover it.
00:30:35.077 [2024-12-05 12:14:08.942832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.077 [2024-12-05 12:14:08.942903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.077 qpair failed and we were unable to recover it.
[... the same "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it" sequence repeats roughly 110 more times between 12:14:08.939 and 12:14:08.962, cycling through tqpair values 0x7fc420000b90, 0x7fc424000b90, and 0xc4cbe0, all against addr=10.0.0.2, port=4420 ...]
00:30:35.080 [2024-12-05 12:14:08.962645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.080 [2024-12-05 12:14:08.962677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.080 qpair failed and we were unable to recover it. 00:30:35.080 [2024-12-05 12:14:08.962849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.080 [2024-12-05 12:14:08.962880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.080 qpair failed and we were unable to recover it. 00:30:35.080 [2024-12-05 12:14:08.963052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.080 [2024-12-05 12:14:08.963083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.080 qpair failed and we were unable to recover it. 00:30:35.080 [2024-12-05 12:14:08.963263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.080 [2024-12-05 12:14:08.963294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.080 qpair failed and we were unable to recover it. 00:30:35.080 [2024-12-05 12:14:08.963535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.080 [2024-12-05 12:14:08.963568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.080 qpair failed and we were unable to recover it. 
00:30:35.080 [2024-12-05 12:14:08.963706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.080 [2024-12-05 12:14:08.963738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.080 qpair failed and we were unable to recover it. 00:30:35.081 [2024-12-05 12:14:08.963893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.081 [2024-12-05 12:14:08.963924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.081 qpair failed and we were unable to recover it. 00:30:35.081 [2024-12-05 12:14:08.964043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.081 [2024-12-05 12:14:08.964074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.081 qpair failed and we were unable to recover it. 00:30:35.081 [2024-12-05 12:14:08.964272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.081 [2024-12-05 12:14:08.964305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.081 qpair failed and we were unable to recover it. 00:30:35.081 [2024-12-05 12:14:08.964430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.081 [2024-12-05 12:14:08.964462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.081 qpair failed and we were unable to recover it. 
00:30:35.081 [2024-12-05 12:14:08.964641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.081 [2024-12-05 12:14:08.964674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.081 qpair failed and we were unable to recover it. 00:30:35.081 [2024-12-05 12:14:08.964913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.081 [2024-12-05 12:14:08.964944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.081 qpair failed and we were unable to recover it. 00:30:35.081 [2024-12-05 12:14:08.965156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.081 [2024-12-05 12:14:08.965189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.081 qpair failed and we were unable to recover it. 00:30:35.081 [2024-12-05 12:14:08.965429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.081 [2024-12-05 12:14:08.965463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.081 qpair failed and we were unable to recover it. 00:30:35.081 [2024-12-05 12:14:08.965576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.081 [2024-12-05 12:14:08.965607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.081 qpair failed and we were unable to recover it. 
00:30:35.081 [2024-12-05 12:14:08.965743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.081 [2024-12-05 12:14:08.965775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.081 qpair failed and we were unable to recover it. 00:30:35.081 [2024-12-05 12:14:08.966015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.081 [2024-12-05 12:14:08.966047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.081 qpair failed and we were unable to recover it. 00:30:35.081 [2024-12-05 12:14:08.966227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.081 [2024-12-05 12:14:08.966258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.081 qpair failed and we were unable to recover it. 00:30:35.081 [2024-12-05 12:14:08.966385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.081 [2024-12-05 12:14:08.966418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.081 qpair failed and we were unable to recover it. 00:30:35.081 [2024-12-05 12:14:08.966543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.081 [2024-12-05 12:14:08.966575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.081 qpair failed and we were unable to recover it. 
00:30:35.081 [2024-12-05 12:14:08.966754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.081 [2024-12-05 12:14:08.966785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.081 qpair failed and we were unable to recover it. 00:30:35.081 [2024-12-05 12:14:08.966981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.081 [2024-12-05 12:14:08.967019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.081 qpair failed and we were unable to recover it. 00:30:35.081 [2024-12-05 12:14:08.967212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.081 [2024-12-05 12:14:08.967244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.081 qpair failed and we were unable to recover it. 00:30:35.081 [2024-12-05 12:14:08.967472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.081 [2024-12-05 12:14:08.967504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.081 qpair failed and we were unable to recover it. 00:30:35.081 [2024-12-05 12:14:08.967711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.081 [2024-12-05 12:14:08.967743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.081 qpair failed and we were unable to recover it. 
00:30:35.081 [2024-12-05 12:14:08.967992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.081 [2024-12-05 12:14:08.968024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.081 qpair failed and we were unable to recover it. 00:30:35.081 [2024-12-05 12:14:08.968214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.081 [2024-12-05 12:14:08.968246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.081 qpair failed and we were unable to recover it. 00:30:35.081 [2024-12-05 12:14:08.968386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.081 [2024-12-05 12:14:08.968419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.081 qpair failed and we were unable to recover it. 00:30:35.081 [2024-12-05 12:14:08.968594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.081 [2024-12-05 12:14:08.968624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.081 qpair failed and we were unable to recover it. 00:30:35.081 [2024-12-05 12:14:08.968726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.081 [2024-12-05 12:14:08.968763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.081 qpair failed and we were unable to recover it. 
00:30:35.081 [2024-12-05 12:14:08.968996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.081 [2024-12-05 12:14:08.969027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.081 qpair failed and we were unable to recover it. 00:30:35.081 [2024-12-05 12:14:08.969236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.081 [2024-12-05 12:14:08.969267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.081 qpair failed and we were unable to recover it. 00:30:35.081 [2024-12-05 12:14:08.969449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.081 [2024-12-05 12:14:08.969482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.081 qpair failed and we were unable to recover it. 00:30:35.081 [2024-12-05 12:14:08.969599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.081 [2024-12-05 12:14:08.969630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.081 qpair failed and we were unable to recover it. 00:30:35.081 [2024-12-05 12:14:08.969808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.081 [2024-12-05 12:14:08.969839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.081 qpair failed and we were unable to recover it. 
00:30:35.081 [2024-12-05 12:14:08.970037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.081 [2024-12-05 12:14:08.970069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.081 qpair failed and we were unable to recover it. 00:30:35.082 [2024-12-05 12:14:08.970254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.082 [2024-12-05 12:14:08.970285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.082 qpair failed and we were unable to recover it. 00:30:35.082 [2024-12-05 12:14:08.970400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.082 [2024-12-05 12:14:08.970432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.082 qpair failed and we were unable to recover it. 00:30:35.082 [2024-12-05 12:14:08.970622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.082 [2024-12-05 12:14:08.970653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.082 qpair failed and we were unable to recover it. 00:30:35.082 [2024-12-05 12:14:08.970790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.082 [2024-12-05 12:14:08.970822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.082 qpair failed and we were unable to recover it. 
00:30:35.082 [2024-12-05 12:14:08.971003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.082 [2024-12-05 12:14:08.971034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.082 qpair failed and we were unable to recover it. 00:30:35.082 [2024-12-05 12:14:08.971157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.082 [2024-12-05 12:14:08.971188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.082 qpair failed and we were unable to recover it. 00:30:35.082 [2024-12-05 12:14:08.971310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.082 [2024-12-05 12:14:08.971342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.082 qpair failed and we were unable to recover it. 00:30:35.082 [2024-12-05 12:14:08.971562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.082 [2024-12-05 12:14:08.971595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.082 qpair failed and we were unable to recover it. 00:30:35.082 [2024-12-05 12:14:08.971721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.082 [2024-12-05 12:14:08.971753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.082 qpair failed and we were unable to recover it. 
00:30:35.082 [2024-12-05 12:14:08.971936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.082 [2024-12-05 12:14:08.971967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.082 qpair failed and we were unable to recover it. 00:30:35.082 [2024-12-05 12:14:08.972072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.082 [2024-12-05 12:14:08.972103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.082 qpair failed and we were unable to recover it. 00:30:35.082 [2024-12-05 12:14:08.972241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.082 [2024-12-05 12:14:08.972272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.082 qpair failed and we were unable to recover it. 00:30:35.082 [2024-12-05 12:14:08.972451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.082 [2024-12-05 12:14:08.972493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.082 qpair failed and we were unable to recover it. 00:30:35.082 [2024-12-05 12:14:08.972646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.082 [2024-12-05 12:14:08.972678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.082 qpair failed and we were unable to recover it. 
00:30:35.082 [2024-12-05 12:14:08.972787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.082 [2024-12-05 12:14:08.972819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.082 qpair failed and we were unable to recover it. 00:30:35.082 [2024-12-05 12:14:08.972942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.082 [2024-12-05 12:14:08.972973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.082 qpair failed and we were unable to recover it. 00:30:35.082 [2024-12-05 12:14:08.973176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.082 [2024-12-05 12:14:08.973207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.082 qpair failed and we were unable to recover it. 00:30:35.082 [2024-12-05 12:14:08.973472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.082 [2024-12-05 12:14:08.973504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.082 qpair failed and we were unable to recover it. 00:30:35.082 [2024-12-05 12:14:08.973631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.082 [2024-12-05 12:14:08.973662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.082 qpair failed and we were unable to recover it. 
00:30:35.082 [2024-12-05 12:14:08.973778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.082 [2024-12-05 12:14:08.973810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.082 qpair failed and we were unable to recover it. 00:30:35.082 [2024-12-05 12:14:08.973915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.082 [2024-12-05 12:14:08.973945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.082 qpair failed and we were unable to recover it. 00:30:35.082 [2024-12-05 12:14:08.974115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.082 [2024-12-05 12:14:08.974146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.082 qpair failed and we were unable to recover it. 00:30:35.082 [2024-12-05 12:14:08.974402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.082 [2024-12-05 12:14:08.974435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.082 qpair failed and we were unable to recover it. 00:30:35.082 [2024-12-05 12:14:08.974566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.082 [2024-12-05 12:14:08.974597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.082 qpair failed and we were unable to recover it. 
00:30:35.082 [2024-12-05 12:14:08.974768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.082 [2024-12-05 12:14:08.974800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.082 qpair failed and we were unable to recover it. 00:30:35.082 [2024-12-05 12:14:08.974985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.082 [2024-12-05 12:14:08.975023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.082 qpair failed and we were unable to recover it. 00:30:35.082 [2024-12-05 12:14:08.975189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.082 [2024-12-05 12:14:08.975220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.082 qpair failed and we were unable to recover it. 00:30:35.082 [2024-12-05 12:14:08.975323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.083 [2024-12-05 12:14:08.975354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.083 qpair failed and we were unable to recover it. 00:30:35.083 [2024-12-05 12:14:08.975490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.083 [2024-12-05 12:14:08.975523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.083 qpair failed and we were unable to recover it. 
00:30:35.083 [2024-12-05 12:14:08.975710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.083 [2024-12-05 12:14:08.975742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.083 qpair failed and we were unable to recover it. 00:30:35.083 [2024-12-05 12:14:08.975853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.083 [2024-12-05 12:14:08.975884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.083 qpair failed and we were unable to recover it. 00:30:35.083 [2024-12-05 12:14:08.975991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.083 [2024-12-05 12:14:08.976023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.083 qpair failed and we were unable to recover it. 00:30:35.083 [2024-12-05 12:14:08.976135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.083 [2024-12-05 12:14:08.976167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.083 qpair failed and we were unable to recover it. 00:30:35.083 [2024-12-05 12:14:08.976356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.083 [2024-12-05 12:14:08.976401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.083 qpair failed and we were unable to recover it. 
00:30:35.083 [2024-12-05 12:14:08.976539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.083 [2024-12-05 12:14:08.976570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.083 qpair failed and we were unable to recover it. 00:30:35.083 [2024-12-05 12:14:08.976693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.083 [2024-12-05 12:14:08.976726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.083 qpair failed and we were unable to recover it. 00:30:35.083 [2024-12-05 12:14:08.976893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.083 [2024-12-05 12:14:08.976923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.083 qpair failed and we were unable to recover it. 00:30:35.083 [2024-12-05 12:14:08.977218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.083 [2024-12-05 12:14:08.977249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.083 qpair failed and we were unable to recover it. 00:30:35.083 [2024-12-05 12:14:08.977388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.083 [2024-12-05 12:14:08.977422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.083 qpair failed and we were unable to recover it. 
00:30:35.083 [2024-12-05 12:14:08.977538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.083 [2024-12-05 12:14:08.977570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.083 qpair failed and we were unable to recover it. 00:30:35.083 [2024-12-05 12:14:08.977674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.083 [2024-12-05 12:14:08.977706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.083 qpair failed and we were unable to recover it. 00:30:35.083 [2024-12-05 12:14:08.977890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.083 [2024-12-05 12:14:08.977921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.083 qpair failed and we were unable to recover it. 00:30:35.083 [2024-12-05 12:14:08.978026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.083 [2024-12-05 12:14:08.978057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.083 qpair failed and we were unable to recover it. 00:30:35.083 [2024-12-05 12:14:08.978230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.083 [2024-12-05 12:14:08.978262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.083 qpair failed and we were unable to recover it. 
00:30:35.087 [2024-12-05 12:14:09.000611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.087 [2024-12-05 12:14:09.000644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.087 qpair failed and we were unable to recover it. 00:30:35.087 [2024-12-05 12:14:09.000785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.087 [2024-12-05 12:14:09.000817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.087 qpair failed and we were unable to recover it. 00:30:35.087 [2024-12-05 12:14:09.000994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.087 [2024-12-05 12:14:09.001025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.087 qpair failed and we were unable to recover it. 00:30:35.087 [2024-12-05 12:14:09.001222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.087 [2024-12-05 12:14:09.001253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.087 qpair failed and we were unable to recover it. 00:30:35.087 [2024-12-05 12:14:09.001378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.087 [2024-12-05 12:14:09.001412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.087 qpair failed and we were unable to recover it. 
00:30:35.087 [2024-12-05 12:14:09.001590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.087 [2024-12-05 12:14:09.001622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.087 qpair failed and we were unable to recover it. 00:30:35.087 [2024-12-05 12:14:09.001862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.087 [2024-12-05 12:14:09.001894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.087 qpair failed and we were unable to recover it. 00:30:35.087 [2024-12-05 12:14:09.002161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.087 [2024-12-05 12:14:09.002192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.087 qpair failed and we were unable to recover it. 00:30:35.087 [2024-12-05 12:14:09.002382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.087 [2024-12-05 12:14:09.002414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.087 qpair failed and we were unable to recover it. 00:30:35.087 [2024-12-05 12:14:09.002673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.087 [2024-12-05 12:14:09.002705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.087 qpair failed and we were unable to recover it. 
00:30:35.087 [2024-12-05 12:14:09.002928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.087 [2024-12-05 12:14:09.002960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.087 qpair failed and we were unable to recover it. 00:30:35.087 [2024-12-05 12:14:09.003084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.087 [2024-12-05 12:14:09.003115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.087 qpair failed and we were unable to recover it. 00:30:35.087 [2024-12-05 12:14:09.003398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.087 [2024-12-05 12:14:09.003430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.087 qpair failed and we were unable to recover it. 00:30:35.087 [2024-12-05 12:14:09.003619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.087 [2024-12-05 12:14:09.003651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.087 qpair failed and we were unable to recover it. 00:30:35.087 [2024-12-05 12:14:09.003857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.087 [2024-12-05 12:14:09.003888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.087 qpair failed and we were unable to recover it. 
00:30:35.087 [2024-12-05 12:14:09.004087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.087 [2024-12-05 12:14:09.004118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.087 qpair failed and we were unable to recover it. 00:30:35.087 [2024-12-05 12:14:09.004322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.088 [2024-12-05 12:14:09.004353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.088 qpair failed and we were unable to recover it. 00:30:35.088 [2024-12-05 12:14:09.004551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.088 [2024-12-05 12:14:09.004583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.088 qpair failed and we were unable to recover it. 00:30:35.088 [2024-12-05 12:14:09.004755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.088 [2024-12-05 12:14:09.004788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.088 qpair failed and we were unable to recover it. 00:30:35.088 [2024-12-05 12:14:09.005024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.088 [2024-12-05 12:14:09.005055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.088 qpair failed and we were unable to recover it. 
00:30:35.088 [2024-12-05 12:14:09.005301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.088 [2024-12-05 12:14:09.005333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.088 qpair failed and we were unable to recover it. 00:30:35.088 [2024-12-05 12:14:09.005584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.088 [2024-12-05 12:14:09.005616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.088 qpair failed and we were unable to recover it. 00:30:35.088 [2024-12-05 12:14:09.005758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.088 [2024-12-05 12:14:09.005789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.088 qpair failed and we were unable to recover it. 00:30:35.088 [2024-12-05 12:14:09.005930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.088 [2024-12-05 12:14:09.005962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.088 qpair failed and we were unable to recover it. 00:30:35.088 [2024-12-05 12:14:09.006137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.088 [2024-12-05 12:14:09.006169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.088 qpair failed and we were unable to recover it. 
00:30:35.088 [2024-12-05 12:14:09.006345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.088 [2024-12-05 12:14:09.006385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.088 qpair failed and we were unable to recover it. 00:30:35.088 [2024-12-05 12:14:09.006521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.088 [2024-12-05 12:14:09.006553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.088 qpair failed and we were unable to recover it. 00:30:35.088 [2024-12-05 12:14:09.006795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.088 [2024-12-05 12:14:09.006826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.088 qpair failed and we were unable to recover it. 00:30:35.088 [2024-12-05 12:14:09.006943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.088 [2024-12-05 12:14:09.006975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.088 qpair failed and we were unable to recover it. 00:30:35.088 [2024-12-05 12:14:09.007095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.088 [2024-12-05 12:14:09.007126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.088 qpair failed and we were unable to recover it. 
00:30:35.088 [2024-12-05 12:14:09.007310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.088 [2024-12-05 12:14:09.007341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.088 qpair failed and we were unable to recover it. 00:30:35.088 [2024-12-05 12:14:09.007519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.088 [2024-12-05 12:14:09.007552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.088 qpair failed and we were unable to recover it. 00:30:35.088 [2024-12-05 12:14:09.007788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.088 [2024-12-05 12:14:09.007819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.088 qpair failed and we were unable to recover it. 00:30:35.088 [2024-12-05 12:14:09.007919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.088 [2024-12-05 12:14:09.007950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.088 qpair failed and we were unable to recover it. 00:30:35.088 [2024-12-05 12:14:09.008059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.088 [2024-12-05 12:14:09.008090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.088 qpair failed and we were unable to recover it. 
00:30:35.088 [2024-12-05 12:14:09.008350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.088 [2024-12-05 12:14:09.008391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.088 qpair failed and we were unable to recover it. 00:30:35.088 [2024-12-05 12:14:09.008576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.088 [2024-12-05 12:14:09.008617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.088 qpair failed and we were unable to recover it. 00:30:35.088 [2024-12-05 12:14:09.008883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.088 [2024-12-05 12:14:09.008915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.088 qpair failed and we were unable to recover it. 00:30:35.088 [2024-12-05 12:14:09.009050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.088 [2024-12-05 12:14:09.009081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.088 qpair failed and we were unable to recover it. 00:30:35.088 [2024-12-05 12:14:09.009210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.088 [2024-12-05 12:14:09.009241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.088 qpair failed and we were unable to recover it. 
00:30:35.088 [2024-12-05 12:14:09.009470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.088 [2024-12-05 12:14:09.009504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.088 qpair failed and we were unable to recover it. 00:30:35.088 [2024-12-05 12:14:09.009683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.088 [2024-12-05 12:14:09.009715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.088 qpair failed and we were unable to recover it. 00:30:35.088 [2024-12-05 12:14:09.009965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.088 [2024-12-05 12:14:09.009997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.088 qpair failed and we were unable to recover it. 00:30:35.088 [2024-12-05 12:14:09.010275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.088 [2024-12-05 12:14:09.010306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.088 qpair failed and we were unable to recover it. 00:30:35.088 [2024-12-05 12:14:09.010436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.088 [2024-12-05 12:14:09.010468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.088 qpair failed and we were unable to recover it. 
00:30:35.088 [2024-12-05 12:14:09.010661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.088 [2024-12-05 12:14:09.010692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.088 qpair failed and we were unable to recover it. 00:30:35.088 [2024-12-05 12:14:09.010957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.088 [2024-12-05 12:14:09.010989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.088 qpair failed and we were unable to recover it. 00:30:35.089 [2024-12-05 12:14:09.011094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.089 [2024-12-05 12:14:09.011126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.089 qpair failed and we were unable to recover it. 00:30:35.089 [2024-12-05 12:14:09.011405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.089 [2024-12-05 12:14:09.011439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.089 qpair failed and we were unable to recover it. 00:30:35.089 [2024-12-05 12:14:09.011679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.089 [2024-12-05 12:14:09.011712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.089 qpair failed and we were unable to recover it. 
00:30:35.089 [2024-12-05 12:14:09.011923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.089 [2024-12-05 12:14:09.011954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.089 qpair failed and we were unable to recover it. 00:30:35.089 [2024-12-05 12:14:09.012199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.089 [2024-12-05 12:14:09.012230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.089 qpair failed and we were unable to recover it. 00:30:35.089 [2024-12-05 12:14:09.012416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.089 [2024-12-05 12:14:09.012447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.089 qpair failed and we were unable to recover it. 00:30:35.089 [2024-12-05 12:14:09.012563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.089 [2024-12-05 12:14:09.012595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.089 qpair failed and we were unable to recover it. 00:30:35.089 [2024-12-05 12:14:09.012840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.089 [2024-12-05 12:14:09.012872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.089 qpair failed and we were unable to recover it. 
00:30:35.089 [2024-12-05 12:14:09.013135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.089 [2024-12-05 12:14:09.013166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.089 qpair failed and we were unable to recover it. 00:30:35.089 [2024-12-05 12:14:09.013352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.089 [2024-12-05 12:14:09.013393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.089 qpair failed and we were unable to recover it. 00:30:35.089 [2024-12-05 12:14:09.013498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.089 [2024-12-05 12:14:09.013530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.089 qpair failed and we were unable to recover it. 00:30:35.089 [2024-12-05 12:14:09.013651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.089 [2024-12-05 12:14:09.013681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.089 qpair failed and we were unable to recover it. 00:30:35.089 [2024-12-05 12:14:09.013856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.089 [2024-12-05 12:14:09.013888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.089 qpair failed and we were unable to recover it. 
00:30:35.089 [2024-12-05 12:14:09.014091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.089 [2024-12-05 12:14:09.014122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.089 qpair failed and we were unable to recover it. 00:30:35.089 [2024-12-05 12:14:09.014376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.089 [2024-12-05 12:14:09.014409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.089 qpair failed and we were unable to recover it. 00:30:35.089 [2024-12-05 12:14:09.014524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.089 [2024-12-05 12:14:09.014555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.089 qpair failed and we were unable to recover it. 00:30:35.089 [2024-12-05 12:14:09.014741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.089 [2024-12-05 12:14:09.014773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.089 qpair failed and we were unable to recover it. 00:30:35.089 [2024-12-05 12:14:09.014904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.089 [2024-12-05 12:14:09.014935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.089 qpair failed and we were unable to recover it. 
00:30:35.089 [2024-12-05 12:14:09.015139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.089 [2024-12-05 12:14:09.015170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.089 qpair failed and we were unable to recover it. 00:30:35.089 [2024-12-05 12:14:09.015404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.089 [2024-12-05 12:14:09.015438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.089 qpair failed and we were unable to recover it. 00:30:35.089 [2024-12-05 12:14:09.015618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.089 [2024-12-05 12:14:09.015648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.089 qpair failed and we were unable to recover it. 00:30:35.089 [2024-12-05 12:14:09.015912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.089 [2024-12-05 12:14:09.015944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.089 qpair failed and we were unable to recover it. 00:30:35.089 [2024-12-05 12:14:09.016152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.089 [2024-12-05 12:14:09.016183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.089 qpair failed and we were unable to recover it. 
00:30:35.089 [2024-12-05 12:14:09.016295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.089 [2024-12-05 12:14:09.016326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.089 qpair failed and we were unable to recover it. 00:30:35.089 [2024-12-05 12:14:09.016455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.089 [2024-12-05 12:14:09.016496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.089 qpair failed and we were unable to recover it. 00:30:35.089 [2024-12-05 12:14:09.016677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.090 [2024-12-05 12:14:09.016708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.090 qpair failed and we were unable to recover it. 00:30:35.090 [2024-12-05 12:14:09.016963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.090 [2024-12-05 12:14:09.016994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.090 qpair failed and we were unable to recover it. 00:30:35.090 [2024-12-05 12:14:09.017180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.090 [2024-12-05 12:14:09.017212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.090 qpair failed and we were unable to recover it. 
00:30:35.090 [2024-12-05 12:14:09.017427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.090 [2024-12-05 12:14:09.017459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.090 qpair failed and we were unable to recover it. 00:30:35.090 [2024-12-05 12:14:09.017691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.090 [2024-12-05 12:14:09.017728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.090 qpair failed and we were unable to recover it. 00:30:35.090 [2024-12-05 12:14:09.017860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.090 [2024-12-05 12:14:09.017891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.090 qpair failed and we were unable to recover it. 00:30:35.090 [2024-12-05 12:14:09.018082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.090 [2024-12-05 12:14:09.018113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.090 qpair failed and we were unable to recover it. 00:30:35.090 [2024-12-05 12:14:09.018290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.090 [2024-12-05 12:14:09.018322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.090 qpair failed and we were unable to recover it. 
00:30:35.090 [2024-12-05 12:14:09.018504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.090 [2024-12-05 12:14:09.018536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.090 qpair failed and we were unable to recover it.
00:30:35.090 [2024-12-05 12:14:09.023549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.090 [2024-12-05 12:14:09.023619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.090 qpair failed and we were unable to recover it.
00:30:35.093 [2024-12-05 12:14:09.044281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.093 [2024-12-05 12:14:09.044312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.093 qpair failed and we were unable to recover it. 00:30:35.093 [2024-12-05 12:14:09.044433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.093 [2024-12-05 12:14:09.044471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.093 qpair failed and we were unable to recover it. 00:30:35.093 [2024-12-05 12:14:09.044736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.093 [2024-12-05 12:14:09.044768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.093 qpair failed and we were unable to recover it. 00:30:35.093 [2024-12-05 12:14:09.044946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.093 [2024-12-05 12:14:09.044977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.093 qpair failed and we were unable to recover it. 00:30:35.093 [2024-12-05 12:14:09.045192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.093 [2024-12-05 12:14:09.045224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.093 qpair failed and we were unable to recover it. 
00:30:35.093 [2024-12-05 12:14:09.045410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.093 [2024-12-05 12:14:09.045444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.093 qpair failed and we were unable to recover it. 00:30:35.093 [2024-12-05 12:14:09.045655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.093 [2024-12-05 12:14:09.045687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.093 qpair failed and we were unable to recover it. 00:30:35.093 [2024-12-05 12:14:09.045801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.093 [2024-12-05 12:14:09.045832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.093 qpair failed and we were unable to recover it. 00:30:35.093 [2024-12-05 12:14:09.045943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.093 [2024-12-05 12:14:09.045974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.093 qpair failed and we were unable to recover it. 00:30:35.093 [2024-12-05 12:14:09.046218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.093 [2024-12-05 12:14:09.046250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.093 qpair failed and we were unable to recover it. 
00:30:35.093 [2024-12-05 12:14:09.046419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.093 [2024-12-05 12:14:09.046450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.093 qpair failed and we were unable to recover it. 00:30:35.093 [2024-12-05 12:14:09.046590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.093 [2024-12-05 12:14:09.046622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.093 qpair failed and we were unable to recover it. 00:30:35.093 [2024-12-05 12:14:09.046798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.093 [2024-12-05 12:14:09.046831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.093 qpair failed and we were unable to recover it. 00:30:35.093 [2024-12-05 12:14:09.047037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.093 [2024-12-05 12:14:09.047068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.093 qpair failed and we were unable to recover it. 00:30:35.093 [2024-12-05 12:14:09.047308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.093 [2024-12-05 12:14:09.047339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.093 qpair failed and we were unable to recover it. 
00:30:35.093 [2024-12-05 12:14:09.047465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.093 [2024-12-05 12:14:09.047498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.093 qpair failed and we were unable to recover it. 00:30:35.093 [2024-12-05 12:14:09.047694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.093 [2024-12-05 12:14:09.047724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.093 qpair failed and we were unable to recover it. 00:30:35.093 [2024-12-05 12:14:09.047915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.093 [2024-12-05 12:14:09.047946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.093 qpair failed and we were unable to recover it. 00:30:35.093 [2024-12-05 12:14:09.048130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.093 [2024-12-05 12:14:09.048162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.093 qpair failed and we were unable to recover it. 00:30:35.093 [2024-12-05 12:14:09.048379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.093 [2024-12-05 12:14:09.048412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.093 qpair failed and we were unable to recover it. 
00:30:35.093 [2024-12-05 12:14:09.048599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.093 [2024-12-05 12:14:09.048631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.093 qpair failed and we were unable to recover it. 00:30:35.093 [2024-12-05 12:14:09.048820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.093 [2024-12-05 12:14:09.048851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.093 qpair failed and we were unable to recover it. 00:30:35.093 [2024-12-05 12:14:09.049047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.093 [2024-12-05 12:14:09.049078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.093 qpair failed and we were unable to recover it. 00:30:35.093 [2024-12-05 12:14:09.049254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.093 [2024-12-05 12:14:09.049286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.093 qpair failed and we were unable to recover it. 00:30:35.093 [2024-12-05 12:14:09.049525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.093 [2024-12-05 12:14:09.049557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.093 qpair failed and we were unable to recover it. 
00:30:35.093 [2024-12-05 12:14:09.049835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.093 [2024-12-05 12:14:09.049867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.093 qpair failed and we were unable to recover it. 00:30:35.093 [2024-12-05 12:14:09.050078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.093 [2024-12-05 12:14:09.050109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.093 qpair failed and we were unable to recover it. 00:30:35.093 [2024-12-05 12:14:09.050326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.093 [2024-12-05 12:14:09.050357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.093 qpair failed and we were unable to recover it. 00:30:35.093 [2024-12-05 12:14:09.050576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.094 [2024-12-05 12:14:09.050609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.094 qpair failed and we were unable to recover it. 00:30:35.094 [2024-12-05 12:14:09.050799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.094 [2024-12-05 12:14:09.050830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.094 qpair failed and we were unable to recover it. 
00:30:35.094 [2024-12-05 12:14:09.051076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.094 [2024-12-05 12:14:09.051108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.094 qpair failed and we were unable to recover it. 00:30:35.094 [2024-12-05 12:14:09.051354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.094 [2024-12-05 12:14:09.051412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.094 qpair failed and we were unable to recover it. 00:30:35.094 [2024-12-05 12:14:09.051599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.094 [2024-12-05 12:14:09.051630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.094 qpair failed and we were unable to recover it. 00:30:35.094 [2024-12-05 12:14:09.051812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.094 [2024-12-05 12:14:09.051844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.094 qpair failed and we were unable to recover it. 00:30:35.094 [2024-12-05 12:14:09.051949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.094 [2024-12-05 12:14:09.051980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.094 qpair failed and we were unable to recover it. 
00:30:35.094 [2024-12-05 12:14:09.052223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.094 [2024-12-05 12:14:09.052254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.094 qpair failed and we were unable to recover it. 00:30:35.094 [2024-12-05 12:14:09.052428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.094 [2024-12-05 12:14:09.052461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.094 qpair failed and we were unable to recover it. 00:30:35.094 [2024-12-05 12:14:09.052722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.094 [2024-12-05 12:14:09.052754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.094 qpair failed and we were unable to recover it. 00:30:35.094 [2024-12-05 12:14:09.052888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.094 [2024-12-05 12:14:09.052919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.094 qpair failed and we were unable to recover it. 00:30:35.094 [2024-12-05 12:14:09.053105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.094 [2024-12-05 12:14:09.053137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.094 qpair failed and we were unable to recover it. 
00:30:35.094 [2024-12-05 12:14:09.053317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.094 [2024-12-05 12:14:09.053348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.094 qpair failed and we were unable to recover it. 00:30:35.094 [2024-12-05 12:14:09.053642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.094 [2024-12-05 12:14:09.053681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.094 qpair failed and we were unable to recover it. 00:30:35.094 [2024-12-05 12:14:09.053793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.094 [2024-12-05 12:14:09.053825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.094 qpair failed and we were unable to recover it. 00:30:35.094 [2024-12-05 12:14:09.054112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.094 [2024-12-05 12:14:09.054143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.094 qpair failed and we were unable to recover it. 00:30:35.094 [2024-12-05 12:14:09.054429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.094 [2024-12-05 12:14:09.054461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.094 qpair failed and we were unable to recover it. 
00:30:35.094 [2024-12-05 12:14:09.054582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.094 [2024-12-05 12:14:09.054613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.094 qpair failed and we were unable to recover it. 00:30:35.094 [2024-12-05 12:14:09.054741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.094 [2024-12-05 12:14:09.054773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.094 qpair failed and we were unable to recover it. 00:30:35.094 [2024-12-05 12:14:09.054960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.094 [2024-12-05 12:14:09.054992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.094 qpair failed and we were unable to recover it. 00:30:35.094 [2024-12-05 12:14:09.055171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.094 [2024-12-05 12:14:09.055202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.094 qpair failed and we were unable to recover it. 00:30:35.094 [2024-12-05 12:14:09.055404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.094 [2024-12-05 12:14:09.055437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.094 qpair failed and we were unable to recover it. 
00:30:35.094 [2024-12-05 12:14:09.055608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.094 [2024-12-05 12:14:09.055640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.094 qpair failed and we were unable to recover it. 00:30:35.094 [2024-12-05 12:14:09.055784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.094 [2024-12-05 12:14:09.055816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.094 qpair failed and we were unable to recover it. 00:30:35.094 [2024-12-05 12:14:09.056013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.094 [2024-12-05 12:14:09.056044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.094 qpair failed and we were unable to recover it. 00:30:35.094 [2024-12-05 12:14:09.056304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.094 [2024-12-05 12:14:09.056335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.094 qpair failed and we were unable to recover it. 00:30:35.094 [2024-12-05 12:14:09.056515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.094 [2024-12-05 12:14:09.056548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.094 qpair failed and we were unable to recover it. 
00:30:35.094 [2024-12-05 12:14:09.056743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.094 [2024-12-05 12:14:09.056775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.094 qpair failed and we were unable to recover it. 00:30:35.094 [2024-12-05 12:14:09.056959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.094 [2024-12-05 12:14:09.056990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.094 qpair failed and we were unable to recover it. 00:30:35.094 [2024-12-05 12:14:09.057119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.094 [2024-12-05 12:14:09.057150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.094 qpair failed and we were unable to recover it. 00:30:35.094 [2024-12-05 12:14:09.057335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.094 [2024-12-05 12:14:09.057366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.094 qpair failed and we were unable to recover it. 00:30:35.094 [2024-12-05 12:14:09.057568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.094 [2024-12-05 12:14:09.057600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.094 qpair failed and we were unable to recover it. 
00:30:35.094 [2024-12-05 12:14:09.057802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.094 [2024-12-05 12:14:09.057834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.094 qpair failed and we were unable to recover it. 00:30:35.094 [2024-12-05 12:14:09.058024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.095 [2024-12-05 12:14:09.058056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.095 qpair failed and we were unable to recover it. 00:30:35.095 [2024-12-05 12:14:09.058307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.095 [2024-12-05 12:14:09.058338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.095 qpair failed and we were unable to recover it. 00:30:35.095 [2024-12-05 12:14:09.058463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.095 [2024-12-05 12:14:09.058496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.095 qpair failed and we were unable to recover it. 00:30:35.095 [2024-12-05 12:14:09.058601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.095 [2024-12-05 12:14:09.058633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.095 qpair failed and we were unable to recover it. 
00:30:35.095 [2024-12-05 12:14:09.058817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.095 [2024-12-05 12:14:09.058849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.095 qpair failed and we were unable to recover it. 00:30:35.095 [2024-12-05 12:14:09.059109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.095 [2024-12-05 12:14:09.059141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.095 qpair failed and we were unable to recover it. 00:30:35.095 [2024-12-05 12:14:09.059324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.095 [2024-12-05 12:14:09.059356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.095 qpair failed and we were unable to recover it. 00:30:35.095 [2024-12-05 12:14:09.059560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.095 [2024-12-05 12:14:09.059599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.095 qpair failed and we were unable to recover it. 00:30:35.095 [2024-12-05 12:14:09.059800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.095 [2024-12-05 12:14:09.059832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.095 qpair failed and we were unable to recover it. 
00:30:35.095 [2024-12-05 12:14:09.059937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.095 [2024-12-05 12:14:09.059969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.095 qpair failed and we were unable to recover it. 00:30:35.095 [2024-12-05 12:14:09.060149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.095 [2024-12-05 12:14:09.060180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.095 qpair failed and we were unable to recover it. 00:30:35.095 [2024-12-05 12:14:09.060350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.095 [2024-12-05 12:14:09.060394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.095 qpair failed and we were unable to recover it. 00:30:35.095 [2024-12-05 12:14:09.060535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.095 [2024-12-05 12:14:09.060566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.095 qpair failed and we were unable to recover it. 00:30:35.095 [2024-12-05 12:14:09.060700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.095 [2024-12-05 12:14:09.060732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.095 qpair failed and we were unable to recover it. 
00:30:35.095 [2024-12-05 12:14:09.060992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.095 [2024-12-05 12:14:09.061024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.095 qpair failed and we were unable to recover it. 00:30:35.095 [2024-12-05 12:14:09.061194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.095 [2024-12-05 12:14:09.061226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.095 qpair failed and we were unable to recover it. 00:30:35.095 [2024-12-05 12:14:09.061339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.095 [2024-12-05 12:14:09.061378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.095 qpair failed and we were unable to recover it. 00:30:35.095 [2024-12-05 12:14:09.061550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.095 [2024-12-05 12:14:09.061581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.095 qpair failed and we were unable to recover it. 00:30:35.095 [2024-12-05 12:14:09.061815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.095 [2024-12-05 12:14:09.061846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.095 qpair failed and we were unable to recover it. 
00:30:35.098 [... identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." messages repeated for tqpair=0x7fc424000b90 through 2024-12-05 12:14:09.086224 ...]
00:30:35.098 [2024-12-05 12:14:09.086415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.098 [2024-12-05 12:14:09.086447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.098 qpair failed and we were unable to recover it. 00:30:35.098 [2024-12-05 12:14:09.086587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.098 [2024-12-05 12:14:09.086619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.098 qpair failed and we were unable to recover it. 00:30:35.098 [2024-12-05 12:14:09.086725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.098 [2024-12-05 12:14:09.086756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.098 qpair failed and we were unable to recover it. 00:30:35.098 [2024-12-05 12:14:09.086931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.098 [2024-12-05 12:14:09.086964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.098 qpair failed and we were unable to recover it. 00:30:35.098 [2024-12-05 12:14:09.087174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.098 [2024-12-05 12:14:09.087206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.098 qpair failed and we were unable to recover it. 
00:30:35.098 [2024-12-05 12:14:09.087470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.098 [2024-12-05 12:14:09.087504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.098 qpair failed and we were unable to recover it. 00:30:35.098 [2024-12-05 12:14:09.087611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.098 [2024-12-05 12:14:09.087642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.098 qpair failed and we were unable to recover it. 00:30:35.098 [2024-12-05 12:14:09.087882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.098 [2024-12-05 12:14:09.087914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.098 qpair failed and we were unable to recover it. 00:30:35.098 [2024-12-05 12:14:09.088097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.098 [2024-12-05 12:14:09.088128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.098 qpair failed and we were unable to recover it. 00:30:35.098 [2024-12-05 12:14:09.088229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.098 [2024-12-05 12:14:09.088261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.098 qpair failed and we were unable to recover it. 
00:30:35.098 [2024-12-05 12:14:09.088522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.098 [2024-12-05 12:14:09.088554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.098 qpair failed and we were unable to recover it. 00:30:35.098 [2024-12-05 12:14:09.088686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.098 [2024-12-05 12:14:09.088717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.098 qpair failed and we were unable to recover it. 00:30:35.098 [2024-12-05 12:14:09.088861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.098 [2024-12-05 12:14:09.088893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.098 qpair failed and we were unable to recover it. 00:30:35.098 [2024-12-05 12:14:09.089020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.098 [2024-12-05 12:14:09.089052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.098 qpair failed and we were unable to recover it. 00:30:35.098 [2024-12-05 12:14:09.089227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.098 [2024-12-05 12:14:09.089258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.098 qpair failed and we were unable to recover it. 
00:30:35.098 [2024-12-05 12:14:09.089448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.098 [2024-12-05 12:14:09.089479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.098 qpair failed and we were unable to recover it. 00:30:35.098 [2024-12-05 12:14:09.089689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.098 [2024-12-05 12:14:09.089720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.098 qpair failed and we were unable to recover it. 00:30:35.098 [2024-12-05 12:14:09.089895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.098 [2024-12-05 12:14:09.089927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.098 qpair failed and we were unable to recover it. 00:30:35.098 [2024-12-05 12:14:09.090202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.098 [2024-12-05 12:14:09.090232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.099 qpair failed and we were unable to recover it. 00:30:35.099 [2024-12-05 12:14:09.090353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.099 [2024-12-05 12:14:09.090393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.099 qpair failed and we were unable to recover it. 
00:30:35.099 [2024-12-05 12:14:09.090572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.099 [2024-12-05 12:14:09.090604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.099 qpair failed and we were unable to recover it. 00:30:35.099 [2024-12-05 12:14:09.090789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.099 [2024-12-05 12:14:09.090821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.099 qpair failed and we were unable to recover it. 00:30:35.099 [2024-12-05 12:14:09.090992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.099 [2024-12-05 12:14:09.091024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.099 qpair failed and we were unable to recover it. 00:30:35.099 [2024-12-05 12:14:09.091283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.099 [2024-12-05 12:14:09.091315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.099 qpair failed and we were unable to recover it. 00:30:35.099 [2024-12-05 12:14:09.091597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.099 [2024-12-05 12:14:09.091629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.099 qpair failed and we were unable to recover it. 
00:30:35.099 [2024-12-05 12:14:09.091820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.099 [2024-12-05 12:14:09.091852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.099 qpair failed and we were unable to recover it. 00:30:35.099 [2024-12-05 12:14:09.092034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.099 [2024-12-05 12:14:09.092065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.099 qpair failed and we were unable to recover it. 00:30:35.099 [2024-12-05 12:14:09.092174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.099 [2024-12-05 12:14:09.092205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.099 qpair failed and we were unable to recover it. 00:30:35.099 [2024-12-05 12:14:09.092384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.099 [2024-12-05 12:14:09.092417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.099 qpair failed and we were unable to recover it. 00:30:35.099 [2024-12-05 12:14:09.092682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.099 [2024-12-05 12:14:09.092714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.099 qpair failed and we were unable to recover it. 
00:30:35.099 [2024-12-05 12:14:09.092886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.099 [2024-12-05 12:14:09.092917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.099 qpair failed and we were unable to recover it. 00:30:35.099 [2024-12-05 12:14:09.093107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.099 [2024-12-05 12:14:09.093138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.099 qpair failed and we were unable to recover it. 00:30:35.099 [2024-12-05 12:14:09.093332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.099 [2024-12-05 12:14:09.093364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.099 qpair failed and we were unable to recover it. 00:30:35.099 [2024-12-05 12:14:09.093545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.099 [2024-12-05 12:14:09.093577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.099 qpair failed and we were unable to recover it. 00:30:35.099 [2024-12-05 12:14:09.093865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.099 [2024-12-05 12:14:09.093897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.099 qpair failed and we were unable to recover it. 
00:30:35.099 [2024-12-05 12:14:09.094158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.099 [2024-12-05 12:14:09.094189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.099 qpair failed and we were unable to recover it. 00:30:35.099 [2024-12-05 12:14:09.094465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.099 [2024-12-05 12:14:09.094499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.099 qpair failed and we were unable to recover it. 00:30:35.099 [2024-12-05 12:14:09.094685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.099 [2024-12-05 12:14:09.094717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.099 qpair failed and we were unable to recover it. 00:30:35.099 [2024-12-05 12:14:09.094997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.099 [2024-12-05 12:14:09.095034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.099 qpair failed and we were unable to recover it. 00:30:35.099 [2024-12-05 12:14:09.095297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.099 [2024-12-05 12:14:09.095329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.099 qpair failed and we were unable to recover it. 
00:30:35.099 [2024-12-05 12:14:09.095542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.099 [2024-12-05 12:14:09.095575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.099 qpair failed and we were unable to recover it. 00:30:35.099 [2024-12-05 12:14:09.095834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.099 [2024-12-05 12:14:09.095866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.099 qpair failed and we were unable to recover it. 00:30:35.099 [2024-12-05 12:14:09.096054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.099 [2024-12-05 12:14:09.096085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.099 qpair failed and we were unable to recover it. 00:30:35.099 [2024-12-05 12:14:09.096353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.099 [2024-12-05 12:14:09.096392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.099 qpair failed and we were unable to recover it. 00:30:35.099 [2024-12-05 12:14:09.096630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.099 [2024-12-05 12:14:09.096662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.099 qpair failed and we were unable to recover it. 
00:30:35.099 [2024-12-05 12:14:09.096867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.099 [2024-12-05 12:14:09.096898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.099 qpair failed and we were unable to recover it. 00:30:35.099 [2024-12-05 12:14:09.097158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.099 [2024-12-05 12:14:09.097189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.099 qpair failed and we were unable to recover it. 00:30:35.099 [2024-12-05 12:14:09.097321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.099 [2024-12-05 12:14:09.097352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.099 qpair failed and we were unable to recover it. 00:30:35.099 [2024-12-05 12:14:09.097625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.099 [2024-12-05 12:14:09.097657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.099 qpair failed and we were unable to recover it. 00:30:35.099 [2024-12-05 12:14:09.097843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.099 [2024-12-05 12:14:09.097875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.099 qpair failed and we were unable to recover it. 
00:30:35.099 [2024-12-05 12:14:09.098070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.099 [2024-12-05 12:14:09.098101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.099 qpair failed and we were unable to recover it. 00:30:35.099 [2024-12-05 12:14:09.098270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.099 [2024-12-05 12:14:09.098300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.099 qpair failed and we were unable to recover it. 00:30:35.099 [2024-12-05 12:14:09.098569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.099 [2024-12-05 12:14:09.098601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.099 qpair failed and we were unable to recover it. 00:30:35.099 [2024-12-05 12:14:09.098863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.099 [2024-12-05 12:14:09.098895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.099 qpair failed and we were unable to recover it. 00:30:35.099 [2024-12-05 12:14:09.099085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.099 [2024-12-05 12:14:09.099117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.099 qpair failed and we were unable to recover it. 
00:30:35.099 [2024-12-05 12:14:09.099266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.099 [2024-12-05 12:14:09.099297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.099 qpair failed and we were unable to recover it. 00:30:35.099 [2024-12-05 12:14:09.099477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.100 [2024-12-05 12:14:09.099510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.100 qpair failed and we were unable to recover it. 00:30:35.100 [2024-12-05 12:14:09.099633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.100 [2024-12-05 12:14:09.099664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.100 qpair failed and we were unable to recover it. 00:30:35.100 [2024-12-05 12:14:09.099765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.100 [2024-12-05 12:14:09.099797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.100 qpair failed and we were unable to recover it. 00:30:35.100 [2024-12-05 12:14:09.100035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.100 [2024-12-05 12:14:09.100067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.100 qpair failed and we were unable to recover it. 
00:30:35.100 [2024-12-05 12:14:09.100276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.100 [2024-12-05 12:14:09.100307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.100 qpair failed and we were unable to recover it. 00:30:35.100 [2024-12-05 12:14:09.100571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.100 [2024-12-05 12:14:09.100604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.100 qpair failed and we were unable to recover it. 00:30:35.100 [2024-12-05 12:14:09.100778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.100 [2024-12-05 12:14:09.100809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.100 qpair failed and we were unable to recover it. 00:30:35.100 [2024-12-05 12:14:09.101010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.100 [2024-12-05 12:14:09.101041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.100 qpair failed and we were unable to recover it. 00:30:35.100 [2024-12-05 12:14:09.101236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.100 [2024-12-05 12:14:09.101268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.100 qpair failed and we were unable to recover it. 
00:30:35.100 [2024-12-05 12:14:09.101521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.100 [2024-12-05 12:14:09.101555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.100 qpair failed and we were unable to recover it. 00:30:35.100 [2024-12-05 12:14:09.101813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.100 [2024-12-05 12:14:09.101845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.100 qpair failed and we were unable to recover it. 00:30:35.100 [2024-12-05 12:14:09.101971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.100 [2024-12-05 12:14:09.102002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.100 qpair failed and we were unable to recover it. 00:30:35.100 [2024-12-05 12:14:09.102173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.100 [2024-12-05 12:14:09.102206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.100 qpair failed and we were unable to recover it. 00:30:35.100 [2024-12-05 12:14:09.102392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.100 [2024-12-05 12:14:09.102424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.100 qpair failed and we were unable to recover it. 
00:30:35.100 [2024-12-05 12:14:09.102605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.100 [2024-12-05 12:14:09.102636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.100 qpair failed and we were unable to recover it. 00:30:35.100 [2024-12-05 12:14:09.102871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.100 [2024-12-05 12:14:09.102902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.100 qpair failed and we were unable to recover it. 00:30:35.100 [2024-12-05 12:14:09.103033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.100 [2024-12-05 12:14:09.103063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.100 qpair failed and we were unable to recover it. 00:30:35.100 [2024-12-05 12:14:09.103315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.100 [2024-12-05 12:14:09.103347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.100 qpair failed and we were unable to recover it. 00:30:35.100 [2024-12-05 12:14:09.103475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.100 [2024-12-05 12:14:09.103507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.100 qpair failed and we were unable to recover it. 
00:30:35.100 [2024-12-05 12:14:09.103630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.100 [2024-12-05 12:14:09.103661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.100 qpair failed and we were unable to recover it. 00:30:35.100 [2024-12-05 12:14:09.103838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.100 [2024-12-05 12:14:09.103869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.100 qpair failed and we were unable to recover it. 00:30:35.100 [2024-12-05 12:14:09.104051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.100 [2024-12-05 12:14:09.104082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.100 qpair failed and we were unable to recover it. 00:30:35.100 [2024-12-05 12:14:09.104349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.100 [2024-12-05 12:14:09.104397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.100 qpair failed and we were unable to recover it. 00:30:35.100 [2024-12-05 12:14:09.104667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.100 [2024-12-05 12:14:09.104699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.100 qpair failed and we were unable to recover it. 
00:30:35.103 [2024-12-05 12:14:09.129082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.103 [2024-12-05 12:14:09.129114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.103 qpair failed and we were unable to recover it. 00:30:35.103 [2024-12-05 12:14:09.129302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.103 [2024-12-05 12:14:09.129335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.103 qpair failed and we were unable to recover it. 00:30:35.103 [2024-12-05 12:14:09.129540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.103 [2024-12-05 12:14:09.129574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.103 qpair failed and we were unable to recover it. 00:30:35.103 [2024-12-05 12:14:09.129763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.103 [2024-12-05 12:14:09.129797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.103 qpair failed and we were unable to recover it. 00:30:35.103 [2024-12-05 12:14:09.129975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.103 [2024-12-05 12:14:09.130006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.103 qpair failed and we were unable to recover it. 
00:30:35.103 [2024-12-05 12:14:09.130120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.103 [2024-12-05 12:14:09.130151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.103 qpair failed and we were unable to recover it. 00:30:35.103 [2024-12-05 12:14:09.130300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.103 [2024-12-05 12:14:09.130332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.103 qpair failed and we were unable to recover it. 00:30:35.103 [2024-12-05 12:14:09.130532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.103 [2024-12-05 12:14:09.130565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.103 qpair failed and we were unable to recover it. 00:30:35.103 [2024-12-05 12:14:09.130823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.103 [2024-12-05 12:14:09.130861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.103 qpair failed and we were unable to recover it. 00:30:35.103 [2024-12-05 12:14:09.131070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.103 [2024-12-05 12:14:09.131102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.103 qpair failed and we were unable to recover it. 
00:30:35.103 [2024-12-05 12:14:09.131280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.103 [2024-12-05 12:14:09.131312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.103 qpair failed and we were unable to recover it. 00:30:35.103 [2024-12-05 12:14:09.131563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.103 [2024-12-05 12:14:09.131596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.103 qpair failed and we were unable to recover it. 00:30:35.103 [2024-12-05 12:14:09.131798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.103 [2024-12-05 12:14:09.131830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.103 qpair failed and we were unable to recover it. 00:30:35.103 [2024-12-05 12:14:09.132017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.103 [2024-12-05 12:14:09.132048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.103 qpair failed and we were unable to recover it. 00:30:35.103 [2024-12-05 12:14:09.132233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.104 [2024-12-05 12:14:09.132265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.104 qpair failed and we were unable to recover it. 
00:30:35.104 [2024-12-05 12:14:09.132449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.104 [2024-12-05 12:14:09.132482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.104 qpair failed and we were unable to recover it. 00:30:35.104 [2024-12-05 12:14:09.132735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.104 [2024-12-05 12:14:09.132767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.104 qpair failed and we were unable to recover it. 00:30:35.104 [2024-12-05 12:14:09.132980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.104 [2024-12-05 12:14:09.133012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.104 qpair failed and we were unable to recover it. 00:30:35.104 [2024-12-05 12:14:09.133250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.104 [2024-12-05 12:14:09.133281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.104 qpair failed and we were unable to recover it. 00:30:35.104 [2024-12-05 12:14:09.133473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.104 [2024-12-05 12:14:09.133507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.104 qpair failed and we were unable to recover it. 
00:30:35.104 [2024-12-05 12:14:09.133624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.104 [2024-12-05 12:14:09.133656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.104 qpair failed and we were unable to recover it. 00:30:35.104 [2024-12-05 12:14:09.133850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.104 [2024-12-05 12:14:09.133883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.104 qpair failed and we were unable to recover it. 00:30:35.104 [2024-12-05 12:14:09.134071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.104 [2024-12-05 12:14:09.134103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.104 qpair failed and we were unable to recover it. 00:30:35.104 [2024-12-05 12:14:09.134293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.104 [2024-12-05 12:14:09.134326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.104 qpair failed and we were unable to recover it. 00:30:35.104 [2024-12-05 12:14:09.134573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.104 [2024-12-05 12:14:09.134606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.104 qpair failed and we were unable to recover it. 
00:30:35.104 [2024-12-05 12:14:09.134794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.104 [2024-12-05 12:14:09.134826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.104 qpair failed and we were unable to recover it. 00:30:35.104 [2024-12-05 12:14:09.135003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.104 [2024-12-05 12:14:09.135034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.104 qpair failed and we were unable to recover it. 00:30:35.104 [2024-12-05 12:14:09.135209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.104 [2024-12-05 12:14:09.135241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.104 qpair failed and we were unable to recover it. 00:30:35.104 [2024-12-05 12:14:09.135357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.104 [2024-12-05 12:14:09.135398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.104 qpair failed and we were unable to recover it. 00:30:35.104 [2024-12-05 12:14:09.135603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.104 [2024-12-05 12:14:09.135636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.104 qpair failed and we were unable to recover it. 
00:30:35.104 [2024-12-05 12:14:09.135844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.104 [2024-12-05 12:14:09.135875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.104 qpair failed and we were unable to recover it. 00:30:35.104 [2024-12-05 12:14:09.136060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.104 [2024-12-05 12:14:09.136091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.104 qpair failed and we were unable to recover it. 00:30:35.104 [2024-12-05 12:14:09.136265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.104 [2024-12-05 12:14:09.136298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.104 qpair failed and we were unable to recover it. 00:30:35.104 [2024-12-05 12:14:09.136482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.104 [2024-12-05 12:14:09.136515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.104 qpair failed and we were unable to recover it. 00:30:35.104 [2024-12-05 12:14:09.136689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.104 [2024-12-05 12:14:09.136721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.104 qpair failed and we were unable to recover it. 
00:30:35.104 [2024-12-05 12:14:09.136907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.104 [2024-12-05 12:14:09.136939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.104 qpair failed and we were unable to recover it. 00:30:35.104 [2024-12-05 12:14:09.137064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.104 [2024-12-05 12:14:09.137094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.104 qpair failed and we were unable to recover it. 00:30:35.104 [2024-12-05 12:14:09.137297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.104 [2024-12-05 12:14:09.137330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.104 qpair failed and we were unable to recover it. 00:30:35.104 [2024-12-05 12:14:09.137612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.104 [2024-12-05 12:14:09.137645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.104 qpair failed and we were unable to recover it. 00:30:35.104 [2024-12-05 12:14:09.137780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.104 [2024-12-05 12:14:09.137812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.104 qpair failed and we were unable to recover it. 
00:30:35.104 [2024-12-05 12:14:09.138061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.104 [2024-12-05 12:14:09.138093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.104 qpair failed and we were unable to recover it. 00:30:35.104 [2024-12-05 12:14:09.138277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.104 [2024-12-05 12:14:09.138309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.104 qpair failed and we were unable to recover it. 00:30:35.104 [2024-12-05 12:14:09.138451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.104 [2024-12-05 12:14:09.138483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.104 qpair failed and we were unable to recover it. 00:30:35.104 [2024-12-05 12:14:09.138599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.104 [2024-12-05 12:14:09.138631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.104 qpair failed and we were unable to recover it. 00:30:35.104 [2024-12-05 12:14:09.138880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.104 [2024-12-05 12:14:09.138912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.104 qpair failed and we were unable to recover it. 
00:30:35.104 [2024-12-05 12:14:09.139017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.104 [2024-12-05 12:14:09.139049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.104 qpair failed and we were unable to recover it. 00:30:35.104 [2024-12-05 12:14:09.139228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.104 [2024-12-05 12:14:09.139259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.104 qpair failed and we were unable to recover it. 00:30:35.104 [2024-12-05 12:14:09.139361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.104 [2024-12-05 12:14:09.139401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.104 qpair failed and we were unable to recover it. 00:30:35.104 [2024-12-05 12:14:09.139668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.104 [2024-12-05 12:14:09.139706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.104 qpair failed and we were unable to recover it. 00:30:35.104 [2024-12-05 12:14:09.139943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.104 [2024-12-05 12:14:09.139974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.104 qpair failed and we were unable to recover it. 
00:30:35.104 [2024-12-05 12:14:09.140155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.104 [2024-12-05 12:14:09.140187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.104 qpair failed and we were unable to recover it. 00:30:35.104 [2024-12-05 12:14:09.140445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.104 [2024-12-05 12:14:09.140478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.104 qpair failed and we were unable to recover it. 00:30:35.104 [2024-12-05 12:14:09.140715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.105 [2024-12-05 12:14:09.140747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.105 qpair failed and we were unable to recover it. 00:30:35.105 [2024-12-05 12:14:09.140938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.105 [2024-12-05 12:14:09.140970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.105 qpair failed and we were unable to recover it. 00:30:35.105 [2024-12-05 12:14:09.141098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.105 [2024-12-05 12:14:09.141130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.105 qpair failed and we were unable to recover it. 
00:30:35.105 [2024-12-05 12:14:09.141314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.105 [2024-12-05 12:14:09.141346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.105 qpair failed and we were unable to recover it. 00:30:35.105 [2024-12-05 12:14:09.141614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.105 [2024-12-05 12:14:09.141647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.105 qpair failed and we were unable to recover it. 00:30:35.105 [2024-12-05 12:14:09.141754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.105 [2024-12-05 12:14:09.141784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.105 qpair failed and we were unable to recover it. 00:30:35.105 [2024-12-05 12:14:09.141961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.105 [2024-12-05 12:14:09.141992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.105 qpair failed and we were unable to recover it. 00:30:35.105 [2024-12-05 12:14:09.142194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.105 [2024-12-05 12:14:09.142226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.105 qpair failed and we were unable to recover it. 
00:30:35.105 [2024-12-05 12:14:09.142441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.105 [2024-12-05 12:14:09.142473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.105 qpair failed and we were unable to recover it. 00:30:35.105 [2024-12-05 12:14:09.142660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.105 [2024-12-05 12:14:09.142692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.105 qpair failed and we were unable to recover it. 00:30:35.105 [2024-12-05 12:14:09.142827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.105 [2024-12-05 12:14:09.142859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.105 qpair failed and we were unable to recover it. 00:30:35.105 [2024-12-05 12:14:09.143046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.105 [2024-12-05 12:14:09.143076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.105 qpair failed and we were unable to recover it. 00:30:35.105 [2024-12-05 12:14:09.143263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.105 [2024-12-05 12:14:09.143296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.105 qpair failed and we were unable to recover it. 
00:30:35.105 [2024-12-05 12:14:09.143537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.105 [2024-12-05 12:14:09.143570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.105 qpair failed and we were unable to recover it. 00:30:35.105 [2024-12-05 12:14:09.143747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.105 [2024-12-05 12:14:09.143778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.105 qpair failed and we were unable to recover it. 00:30:35.105 [2024-12-05 12:14:09.143916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.105 [2024-12-05 12:14:09.143948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.105 qpair failed and we were unable to recover it. 00:30:35.105 [2024-12-05 12:14:09.144127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.105 [2024-12-05 12:14:09.144158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.105 qpair failed and we were unable to recover it. 00:30:35.105 [2024-12-05 12:14:09.144377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.105 [2024-12-05 12:14:09.144410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.105 qpair failed and we were unable to recover it. 
00:30:35.105 [2024-12-05 12:14:09.144589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.105 [2024-12-05 12:14:09.144621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.105 qpair failed and we were unable to recover it. 00:30:35.105 [2024-12-05 12:14:09.144793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.105 [2024-12-05 12:14:09.144825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.105 qpair failed and we were unable to recover it. 00:30:35.105 [2024-12-05 12:14:09.145017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.105 [2024-12-05 12:14:09.145050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.105 qpair failed and we were unable to recover it. 00:30:35.105 [2024-12-05 12:14:09.145229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.105 [2024-12-05 12:14:09.145261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.105 qpair failed and we were unable to recover it. 00:30:35.105 [2024-12-05 12:14:09.145442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.105 [2024-12-05 12:14:09.145475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.105 qpair failed and we were unable to recover it. 
00:30:35.105 [2024-12-05 12:14:09.145746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.105 [2024-12-05 12:14:09.145778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.105 qpair failed and we were unable to recover it. 00:30:35.105 [2024-12-05 12:14:09.145992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.105 [2024-12-05 12:14:09.146023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.105 qpair failed and we were unable to recover it. 00:30:35.105 [2024-12-05 12:14:09.146253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.105 [2024-12-05 12:14:09.146284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.105 qpair failed and we were unable to recover it. 00:30:35.105 [2024-12-05 12:14:09.146400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.105 [2024-12-05 12:14:09.146434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.105 qpair failed and we were unable to recover it. 00:30:35.105 [2024-12-05 12:14:09.146612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.105 [2024-12-05 12:14:09.146643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.105 qpair failed and we were unable to recover it. 
00:30:35.105 [2024-12-05 12:14:09.146883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.105 [2024-12-05 12:14:09.146914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.105 qpair failed and we were unable to recover it. 00:30:35.105 [2024-12-05 12:14:09.147093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.105 [2024-12-05 12:14:09.147125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.105 qpair failed and we were unable to recover it. 00:30:35.105 [2024-12-05 12:14:09.147247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.105 [2024-12-05 12:14:09.147278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.105 qpair failed and we were unable to recover it. 00:30:35.105 [2024-12-05 12:14:09.147459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.105 [2024-12-05 12:14:09.147491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.105 qpair failed and we were unable to recover it. 00:30:35.105 [2024-12-05 12:14:09.147663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.105 [2024-12-05 12:14:09.147694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.105 qpair failed and we were unable to recover it. 
00:30:35.105 [2024-12-05 12:14:09.147866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.105 [2024-12-05 12:14:09.147898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.105 qpair failed and we were unable to recover it. 00:30:35.105 [2024-12-05 12:14:09.148069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.105 [2024-12-05 12:14:09.148101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.105 qpair failed and we were unable to recover it. 00:30:35.105 [2024-12-05 12:14:09.148284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.105 [2024-12-05 12:14:09.148315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.105 qpair failed and we were unable to recover it. 00:30:35.105 [2024-12-05 12:14:09.148494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.105 [2024-12-05 12:14:09.148532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.105 qpair failed and we were unable to recover it. 00:30:35.105 [2024-12-05 12:14:09.148717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.105 [2024-12-05 12:14:09.148749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.105 qpair failed and we were unable to recover it. 
00:30:35.105 [2024-12-05 12:14:09.149025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.106 [2024-12-05 12:14:09.149057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.106 qpair failed and we were unable to recover it. 00:30:35.106 [2024-12-05 12:14:09.149166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.106 [2024-12-05 12:14:09.149198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.106 qpair failed and we were unable to recover it. 00:30:35.106 [2024-12-05 12:14:09.149388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.106 [2024-12-05 12:14:09.149422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.106 qpair failed and we were unable to recover it. 00:30:35.106 [2024-12-05 12:14:09.149617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.106 [2024-12-05 12:14:09.149650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.106 qpair failed and we were unable to recover it. 00:30:35.106 [2024-12-05 12:14:09.149782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.106 [2024-12-05 12:14:09.149814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.106 qpair failed and we were unable to recover it. 
00:30:35.106 [2024-12-05 12:14:09.149990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.106 [2024-12-05 12:14:09.150022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.106 qpair failed and we were unable to recover it. 00:30:35.106 [2024-12-05 12:14:09.150210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.106 [2024-12-05 12:14:09.150242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.106 qpair failed and we were unable to recover it. 00:30:35.106 [2024-12-05 12:14:09.150349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.106 [2024-12-05 12:14:09.150389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.106 qpair failed and we were unable to recover it. 00:30:35.106 [2024-12-05 12:14:09.150606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.106 [2024-12-05 12:14:09.150638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.106 qpair failed and we were unable to recover it. 00:30:35.106 [2024-12-05 12:14:09.150844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.106 [2024-12-05 12:14:09.150876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.106 qpair failed and we were unable to recover it. 
00:30:35.106 [2024-12-05 12:14:09.151054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.106 [2024-12-05 12:14:09.151086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.106 qpair failed and we were unable to recover it. 00:30:35.106 [2024-12-05 12:14:09.151281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.106 [2024-12-05 12:14:09.151313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.106 qpair failed and we were unable to recover it. 00:30:35.106 [2024-12-05 12:14:09.151449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.106 [2024-12-05 12:14:09.151482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.106 qpair failed and we were unable to recover it. 00:30:35.106 [2024-12-05 12:14:09.151656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.106 [2024-12-05 12:14:09.151688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.106 qpair failed and we were unable to recover it. 00:30:35.106 [2024-12-05 12:14:09.151929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.106 [2024-12-05 12:14:09.151961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.106 qpair failed and we were unable to recover it. 
00:30:35.106 [2024-12-05 12:14:09.152069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.106 [2024-12-05 12:14:09.152100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.106 qpair failed and we were unable to recover it. 00:30:35.106 [2024-12-05 12:14:09.152211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.106 [2024-12-05 12:14:09.152243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.106 qpair failed and we were unable to recover it. 00:30:35.106 [2024-12-05 12:14:09.152454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.106 [2024-12-05 12:14:09.152487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.106 qpair failed and we were unable to recover it. 00:30:35.106 [2024-12-05 12:14:09.152698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.106 [2024-12-05 12:14:09.152728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.106 qpair failed and we were unable to recover it. 00:30:35.106 [2024-12-05 12:14:09.152851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.106 [2024-12-05 12:14:09.152883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.106 qpair failed and we were unable to recover it. 
00:30:35.106 [2024-12-05 12:14:09.153099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.106 [2024-12-05 12:14:09.153130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.106 qpair failed and we were unable to recover it. 00:30:35.106 [2024-12-05 12:14:09.153398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.106 [2024-12-05 12:14:09.153431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.106 qpair failed and we were unable to recover it. 00:30:35.106 [2024-12-05 12:14:09.153545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.106 [2024-12-05 12:14:09.153576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.106 qpair failed and we were unable to recover it. 00:30:35.106 [2024-12-05 12:14:09.153820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.106 [2024-12-05 12:14:09.153852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.106 qpair failed and we were unable to recover it. 00:30:35.106 [2024-12-05 12:14:09.153962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.106 [2024-12-05 12:14:09.153994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.106 qpair failed and we were unable to recover it. 
00:30:35.106 [2024-12-05 12:14:09.154235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.106 [2024-12-05 12:14:09.154308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.106 qpair failed and we were unable to recover it. 00:30:35.106 [2024-12-05 12:14:09.154642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.106 [2024-12-05 12:14:09.154680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.106 qpair failed and we were unable to recover it. 00:30:35.106 [2024-12-05 12:14:09.154789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.106 [2024-12-05 12:14:09.154821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.106 qpair failed and we were unable to recover it. 00:30:35.106 [2024-12-05 12:14:09.155037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.106 [2024-12-05 12:14:09.155070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.106 qpair failed and we were unable to recover it. 00:30:35.106 [2024-12-05 12:14:09.155189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.106 [2024-12-05 12:14:09.155222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.106 qpair failed and we were unable to recover it. 
00:30:35.106 [2024-12-05 12:14:09.155342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.106 [2024-12-05 12:14:09.155383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.106 qpair failed and we were unable to recover it. 00:30:35.106 [2024-12-05 12:14:09.155559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.106 [2024-12-05 12:14:09.155592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.106 qpair failed and we were unable to recover it. 00:30:35.106 [2024-12-05 12:14:09.155715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.106 [2024-12-05 12:14:09.155746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.106 qpair failed and we were unable to recover it. 00:30:35.106 [2024-12-05 12:14:09.155954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.106 [2024-12-05 12:14:09.155984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.106 qpair failed and we were unable to recover it. 00:30:35.106 [2024-12-05 12:14:09.156245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.106 [2024-12-05 12:14:09.156277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.106 qpair failed and we were unable to recover it. 
00:30:35.106 [2024-12-05 12:14:09.156517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.106 [2024-12-05 12:14:09.156549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.106 qpair failed and we were unable to recover it. 00:30:35.106 [2024-12-05 12:14:09.156661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.106 [2024-12-05 12:14:09.156693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.106 qpair failed and we were unable to recover it. 00:30:35.106 [2024-12-05 12:14:09.156936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.106 [2024-12-05 12:14:09.156967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.106 qpair failed and we were unable to recover it. 00:30:35.106 [2024-12-05 12:14:09.157183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.107 [2024-12-05 12:14:09.157225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.107 qpair failed and we were unable to recover it. 00:30:35.107 [2024-12-05 12:14:09.157353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.107 [2024-12-05 12:14:09.157397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.107 qpair failed and we were unable to recover it. 
00:30:35.107 [2024-12-05 12:14:09.157667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.107 [2024-12-05 12:14:09.157698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.107 qpair failed and we were unable to recover it. 00:30:35.107 [2024-12-05 12:14:09.157922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.107 [2024-12-05 12:14:09.157954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.107 qpair failed and we were unable to recover it. 00:30:35.107 [2024-12-05 12:14:09.158082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.107 [2024-12-05 12:14:09.158121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.107 qpair failed and we were unable to recover it. 00:30:35.107 [2024-12-05 12:14:09.158296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.107 [2024-12-05 12:14:09.158327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.107 qpair failed and we were unable to recover it. 00:30:35.107 [2024-12-05 12:14:09.158578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.107 [2024-12-05 12:14:09.158611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.107 qpair failed and we were unable to recover it. 
00:30:35.107 [2024-12-05 12:14:09.158793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.107 [2024-12-05 12:14:09.158825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.107 qpair failed and we were unable to recover it. 00:30:35.107 [2024-12-05 12:14:09.159033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.107 [2024-12-05 12:14:09.159064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.107 qpair failed and we were unable to recover it. 00:30:35.107 [2024-12-05 12:14:09.159234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.107 [2024-12-05 12:14:09.159265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.107 qpair failed and we were unable to recover it. 00:30:35.107 [2024-12-05 12:14:09.159406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.107 [2024-12-05 12:14:09.159438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.107 qpair failed and we were unable to recover it. 00:30:35.107 [2024-12-05 12:14:09.159614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.107 [2024-12-05 12:14:09.159653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.107 qpair failed and we were unable to recover it. 
00:30:35.107 [2024-12-05 12:14:09.159784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.107 [2024-12-05 12:14:09.159816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.107 qpair failed and we were unable to recover it. 00:30:35.107 [2024-12-05 12:14:09.159963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.107 [2024-12-05 12:14:09.159994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.107 qpair failed and we were unable to recover it. 00:30:35.107 [2024-12-05 12:14:09.160133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.107 [2024-12-05 12:14:09.160164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.107 qpair failed and we were unable to recover it. 00:30:35.107 [2024-12-05 12:14:09.160305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.107 [2024-12-05 12:14:09.160335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.107 qpair failed and we were unable to recover it. 00:30:35.107 [2024-12-05 12:14:09.160586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.107 [2024-12-05 12:14:09.160620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.107 qpair failed and we were unable to recover it. 
00:30:35.107 [2024-12-05 12:14:09.160731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.107 [2024-12-05 12:14:09.160762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.107 qpair failed and we were unable to recover it. 00:30:35.107 [2024-12-05 12:14:09.160934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.107 [2024-12-05 12:14:09.160964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.107 qpair failed and we were unable to recover it. 00:30:35.107 [2024-12-05 12:14:09.161066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.107 [2024-12-05 12:14:09.161096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.107 qpair failed and we were unable to recover it. 00:30:35.107 [2024-12-05 12:14:09.161338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.107 [2024-12-05 12:14:09.161378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.107 qpair failed and we were unable to recover it. 00:30:35.107 [2024-12-05 12:14:09.161577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.107 [2024-12-05 12:14:09.161608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.107 qpair failed and we were unable to recover it. 
00:30:35.107 [2024-12-05 12:14:09.161728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.107 [2024-12-05 12:14:09.161757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.107 qpair failed and we were unable to recover it. 00:30:35.107 [2024-12-05 12:14:09.161881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.107 [2024-12-05 12:14:09.161910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.107 qpair failed and we were unable to recover it. 00:30:35.107 [2024-12-05 12:14:09.162092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.107 [2024-12-05 12:14:09.162122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.107 qpair failed and we were unable to recover it. 00:30:35.107 [2024-12-05 12:14:09.162308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.107 [2024-12-05 12:14:09.162337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.107 qpair failed and we were unable to recover it. 00:30:35.107 [2024-12-05 12:14:09.162588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.107 [2024-12-05 12:14:09.162621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.107 qpair failed and we were unable to recover it. 
00:30:35.107 [2024-12-05 12:14:09.162837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.107 [2024-12-05 12:14:09.162869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.107 qpair failed and we were unable to recover it. 00:30:35.107 [2024-12-05 12:14:09.163056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.107 [2024-12-05 12:14:09.163088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.107 qpair failed and we were unable to recover it. 00:30:35.107 [2024-12-05 12:14:09.163238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.107 [2024-12-05 12:14:09.163268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.107 qpair failed and we were unable to recover it. 00:30:35.107 [2024-12-05 12:14:09.163451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.107 [2024-12-05 12:14:09.163483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.107 qpair failed and we were unable to recover it. 00:30:35.107 [2024-12-05 12:14:09.163661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.107 [2024-12-05 12:14:09.163693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.107 qpair failed and we were unable to recover it. 
00:30:35.107 [2024-12-05 12:14:09.163877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.107 [2024-12-05 12:14:09.163908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.107 qpair failed and we were unable to recover it. 00:30:35.107 [2024-12-05 12:14:09.164085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.107 [2024-12-05 12:14:09.164118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.107 qpair failed and we were unable to recover it. 00:30:35.107 [2024-12-05 12:14:09.164398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.107 [2024-12-05 12:14:09.164431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.108 qpair failed and we were unable to recover it. 00:30:35.108 [2024-12-05 12:14:09.164533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.108 [2024-12-05 12:14:09.164563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.108 qpair failed and we were unable to recover it. 00:30:35.108 [2024-12-05 12:14:09.164731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.108 [2024-12-05 12:14:09.164762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.108 qpair failed and we were unable to recover it. 
00:30:35.108 [2024-12-05 12:14:09.164865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.108 [2024-12-05 12:14:09.164895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.108 qpair failed and we were unable to recover it. 00:30:35.108 [2024-12-05 12:14:09.165005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.108 [2024-12-05 12:14:09.165035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.108 qpair failed and we were unable to recover it. 00:30:35.108 [2024-12-05 12:14:09.165211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.108 [2024-12-05 12:14:09.165241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.108 qpair failed and we were unable to recover it. 00:30:35.108 [2024-12-05 12:14:09.165426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.108 [2024-12-05 12:14:09.165472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.108 qpair failed and we were unable to recover it. 00:30:35.108 [2024-12-05 12:14:09.165665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.108 [2024-12-05 12:14:09.165695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.108 qpair failed and we were unable to recover it. 
00:30:35.108 [2024-12-05 12:14:09.165818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.108 [2024-12-05 12:14:09.165848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:35.108 qpair failed and we were unable to recover it.
00:30:35.108 [... same three-line error sequence repeated for every subsequent reconnect attempt from 12:14:09.166021 through 12:14:09.190295: connect() to 10.0.0.2 port 4420 fails with errno = 111 (connection refused), and the qpair for tqpair=0x7fc42c000b90 cannot be recovered ...]
00:30:35.111 [2024-12-05 12:14:09.190489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.111 [2024-12-05 12:14:09.190520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.111 qpair failed and we were unable to recover it. 00:30:35.111 [2024-12-05 12:14:09.190640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.111 [2024-12-05 12:14:09.190673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.111 qpair failed and we were unable to recover it. 00:30:35.111 [2024-12-05 12:14:09.190815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.111 [2024-12-05 12:14:09.190846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.111 qpair failed and we were unable to recover it. 00:30:35.111 [2024-12-05 12:14:09.190960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.111 [2024-12-05 12:14:09.190992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.111 qpair failed and we were unable to recover it. 00:30:35.111 [2024-12-05 12:14:09.191101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.111 [2024-12-05 12:14:09.191133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.111 qpair failed and we were unable to recover it. 
00:30:35.111 [2024-12-05 12:14:09.191261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.111 [2024-12-05 12:14:09.191293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.111 qpair failed and we were unable to recover it. 00:30:35.111 [2024-12-05 12:14:09.191466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.111 [2024-12-05 12:14:09.191499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.111 qpair failed and we were unable to recover it. 00:30:35.111 [2024-12-05 12:14:09.191669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.111 [2024-12-05 12:14:09.191702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.111 qpair failed and we were unable to recover it. 00:30:35.111 [2024-12-05 12:14:09.191805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.111 [2024-12-05 12:14:09.191837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.111 qpair failed and we were unable to recover it. 00:30:35.111 [2024-12-05 12:14:09.192043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.111 [2024-12-05 12:14:09.192079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.111 qpair failed and we were unable to recover it. 
00:30:35.111 [2024-12-05 12:14:09.192281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.111 [2024-12-05 12:14:09.192317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.111 qpair failed and we were unable to recover it. 00:30:35.111 [2024-12-05 12:14:09.192447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.111 [2024-12-05 12:14:09.192479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.111 qpair failed and we were unable to recover it. 00:30:35.111 [2024-12-05 12:14:09.192660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.111 [2024-12-05 12:14:09.192691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.111 qpair failed and we were unable to recover it. 00:30:35.111 [2024-12-05 12:14:09.192931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.111 [2024-12-05 12:14:09.192961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.111 qpair failed and we were unable to recover it. 00:30:35.111 [2024-12-05 12:14:09.193078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.111 [2024-12-05 12:14:09.193108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.111 qpair failed and we were unable to recover it. 
00:30:35.111 [2024-12-05 12:14:09.193240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.111 [2024-12-05 12:14:09.193270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.111 qpair failed and we were unable to recover it. 00:30:35.111 [2024-12-05 12:14:09.193518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.111 [2024-12-05 12:14:09.193549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.111 qpair failed and we were unable to recover it. 00:30:35.111 [2024-12-05 12:14:09.193730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.111 [2024-12-05 12:14:09.193762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.111 qpair failed and we were unable to recover it. 00:30:35.111 [2024-12-05 12:14:09.193962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.111 [2024-12-05 12:14:09.193995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.111 qpair failed and we were unable to recover it. 00:30:35.111 [2024-12-05 12:14:09.194174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.111 [2024-12-05 12:14:09.194205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.111 qpair failed and we were unable to recover it. 
00:30:35.111 [2024-12-05 12:14:09.194310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.111 [2024-12-05 12:14:09.194342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.111 qpair failed and we were unable to recover it. 00:30:35.111 [2024-12-05 12:14:09.194537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.111 [2024-12-05 12:14:09.194569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.111 qpair failed and we were unable to recover it. 00:30:35.111 [2024-12-05 12:14:09.194683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.111 [2024-12-05 12:14:09.194716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.111 qpair failed and we were unable to recover it. 00:30:35.111 [2024-12-05 12:14:09.194906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.111 [2024-12-05 12:14:09.194938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.111 qpair failed and we were unable to recover it. 00:30:35.111 [2024-12-05 12:14:09.195067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.111 [2024-12-05 12:14:09.195100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.111 qpair failed and we were unable to recover it. 
00:30:35.111 [2024-12-05 12:14:09.195224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.111 [2024-12-05 12:14:09.195254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.111 qpair failed and we were unable to recover it. 00:30:35.111 [2024-12-05 12:14:09.195450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.111 [2024-12-05 12:14:09.195485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.111 qpair failed and we were unable to recover it. 00:30:35.111 [2024-12-05 12:14:09.195614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.111 [2024-12-05 12:14:09.195643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.111 qpair failed and we were unable to recover it. 00:30:35.111 [2024-12-05 12:14:09.195757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.111 [2024-12-05 12:14:09.195788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.111 qpair failed and we were unable to recover it. 00:30:35.111 [2024-12-05 12:14:09.195960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.112 [2024-12-05 12:14:09.195990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.112 qpair failed and we were unable to recover it. 
00:30:35.112 [2024-12-05 12:14:09.196178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.112 [2024-12-05 12:14:09.196214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.112 qpair failed and we were unable to recover it. 00:30:35.112 [2024-12-05 12:14:09.196455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.112 [2024-12-05 12:14:09.196488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.112 qpair failed and we were unable to recover it. 00:30:35.112 [2024-12-05 12:14:09.196612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.112 [2024-12-05 12:14:09.196641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.112 qpair failed and we were unable to recover it. 00:30:35.112 [2024-12-05 12:14:09.196842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.112 [2024-12-05 12:14:09.196873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.112 qpair failed and we were unable to recover it. 00:30:35.112 [2024-12-05 12:14:09.197004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.112 [2024-12-05 12:14:09.197036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.112 qpair failed and we were unable to recover it. 
00:30:35.112 [2024-12-05 12:14:09.197267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.112 [2024-12-05 12:14:09.197298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.112 qpair failed and we were unable to recover it. 00:30:35.112 [2024-12-05 12:14:09.197501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.112 [2024-12-05 12:14:09.197534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.112 qpair failed and we were unable to recover it. 00:30:35.112 [2024-12-05 12:14:09.197716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.112 [2024-12-05 12:14:09.197747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.112 qpair failed and we were unable to recover it. 00:30:35.112 [2024-12-05 12:14:09.197863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.112 [2024-12-05 12:14:09.197894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.112 qpair failed and we were unable to recover it. 00:30:35.112 [2024-12-05 12:14:09.198165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.112 [2024-12-05 12:14:09.198197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.112 qpair failed and we were unable to recover it. 
00:30:35.112 [2024-12-05 12:14:09.198329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.112 [2024-12-05 12:14:09.198360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.112 qpair failed and we were unable to recover it. 00:30:35.112 [2024-12-05 12:14:09.198614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.112 [2024-12-05 12:14:09.198647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.112 qpair failed and we were unable to recover it. 00:30:35.112 [2024-12-05 12:14:09.198755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.112 [2024-12-05 12:14:09.198797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.112 qpair failed and we were unable to recover it. 00:30:35.112 [2024-12-05 12:14:09.198934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.112 [2024-12-05 12:14:09.198968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.112 qpair failed and we were unable to recover it. 00:30:35.112 [2024-12-05 12:14:09.199213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.112 [2024-12-05 12:14:09.199246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.112 qpair failed and we were unable to recover it. 
00:30:35.112 [2024-12-05 12:14:09.199365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.112 [2024-12-05 12:14:09.199406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.112 qpair failed and we were unable to recover it. 00:30:35.112 [2024-12-05 12:14:09.199529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.112 [2024-12-05 12:14:09.199561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.112 qpair failed and we were unable to recover it. 00:30:35.112 [2024-12-05 12:14:09.199687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.112 [2024-12-05 12:14:09.199719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.112 qpair failed and we were unable to recover it. 00:30:35.112 [2024-12-05 12:14:09.199828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.112 [2024-12-05 12:14:09.199860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.112 qpair failed and we were unable to recover it. 00:30:35.112 [2024-12-05 12:14:09.199982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.112 [2024-12-05 12:14:09.200016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.112 qpair failed and we were unable to recover it. 
00:30:35.112 [2024-12-05 12:14:09.200149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.112 [2024-12-05 12:14:09.200179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.112 qpair failed and we were unable to recover it. 00:30:35.112 [2024-12-05 12:14:09.200291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.112 [2024-12-05 12:14:09.200321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.112 qpair failed and we were unable to recover it. 00:30:35.112 [2024-12-05 12:14:09.200571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.112 [2024-12-05 12:14:09.200604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.112 qpair failed and we were unable to recover it. 00:30:35.112 [2024-12-05 12:14:09.200827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.112 [2024-12-05 12:14:09.200859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.112 qpair failed and we were unable to recover it. 00:30:35.112 [2024-12-05 12:14:09.201047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.112 [2024-12-05 12:14:09.201079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.112 qpair failed and we were unable to recover it. 
00:30:35.112 [2024-12-05 12:14:09.201264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.112 [2024-12-05 12:14:09.201296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.112 qpair failed and we were unable to recover it. 00:30:35.112 [2024-12-05 12:14:09.201507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.112 [2024-12-05 12:14:09.201540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.112 qpair failed and we were unable to recover it. 00:30:35.112 [2024-12-05 12:14:09.201676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.112 [2024-12-05 12:14:09.201708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.112 qpair failed and we were unable to recover it. 00:30:35.112 [2024-12-05 12:14:09.201845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.112 [2024-12-05 12:14:09.201878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.112 qpair failed and we were unable to recover it. 00:30:35.112 [2024-12-05 12:14:09.202117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.112 [2024-12-05 12:14:09.202149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.112 qpair failed and we were unable to recover it. 
00:30:35.112 [2024-12-05 12:14:09.202265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.112 [2024-12-05 12:14:09.202297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.112 qpair failed and we were unable to recover it. 00:30:35.112 [2024-12-05 12:14:09.202522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.112 [2024-12-05 12:14:09.202556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.112 qpair failed and we were unable to recover it. 00:30:35.112 [2024-12-05 12:14:09.202687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.112 [2024-12-05 12:14:09.202719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.112 qpair failed and we were unable to recover it. 00:30:35.112 [2024-12-05 12:14:09.202831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.112 [2024-12-05 12:14:09.202862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.112 qpair failed and we were unable to recover it. 00:30:35.112 [2024-12-05 12:14:09.203045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.112 [2024-12-05 12:14:09.203076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.112 qpair failed and we were unable to recover it. 
00:30:35.113 [2024-12-05 12:14:09.203200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.113 [2024-12-05 12:14:09.203229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.113 qpair failed and we were unable to recover it. 00:30:35.113 [2024-12-05 12:14:09.203334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.113 [2024-12-05 12:14:09.203363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.113 qpair failed and we were unable to recover it. 00:30:35.113 [2024-12-05 12:14:09.203495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.113 [2024-12-05 12:14:09.203525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.113 qpair failed and we were unable to recover it. 00:30:35.113 [2024-12-05 12:14:09.203715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.113 [2024-12-05 12:14:09.203746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.113 qpair failed and we were unable to recover it. 00:30:35.113 [2024-12-05 12:14:09.204001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.113 [2024-12-05 12:14:09.204032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.113 qpair failed and we were unable to recover it. 
00:30:35.113 [2024-12-05 12:14:09.204142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.113 [2024-12-05 12:14:09.204177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.113 qpair failed and we were unable to recover it. 00:30:35.113 [2024-12-05 12:14:09.204469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.113 [2024-12-05 12:14:09.204503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.113 qpair failed and we were unable to recover it. 00:30:35.113 [2024-12-05 12:14:09.204630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.113 [2024-12-05 12:14:09.204661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.113 qpair failed and we were unable to recover it. 00:30:35.113 [2024-12-05 12:14:09.204767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.113 [2024-12-05 12:14:09.204799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.113 qpair failed and we were unable to recover it. 00:30:35.113 [2024-12-05 12:14:09.204918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.113 [2024-12-05 12:14:09.204951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.113 qpair failed and we were unable to recover it. 
00:30:35.113 [2024-12-05 12:14:09.205076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.113 [2024-12-05 12:14:09.205107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:35.113 qpair failed and we were unable to recover it.
[... the same posix_sock_create "connect() failed, errno = 111" (ECONNREFUSED) / nvme_tcp_qpair_connect_sock error pair for tqpair=0x7fc42c000b90 (addr=10.0.0.2, port=4420) repeats continuously from 12:14:09.205231 through 12:14:09.228251 (log timestamps 00:30:35.113-00:30:35.116); every reconnect attempt ends with "qpair failed and we were unable to recover it." ...]
00:30:35.116 [2024-12-05 12:14:09.228436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.116 [2024-12-05 12:14:09.228477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.116 qpair failed and we were unable to recover it. 00:30:35.116 [2024-12-05 12:14:09.228742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.116 [2024-12-05 12:14:09.228774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.116 qpair failed and we were unable to recover it. 00:30:35.116 [2024-12-05 12:14:09.228954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.116 [2024-12-05 12:14:09.228984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.116 qpair failed and we were unable to recover it. 00:30:35.116 [2024-12-05 12:14:09.229202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.116 [2024-12-05 12:14:09.229234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.116 qpair failed and we were unable to recover it. 00:30:35.116 [2024-12-05 12:14:09.229435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.116 [2024-12-05 12:14:09.229469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.116 qpair failed and we were unable to recover it. 
00:30:35.116 [2024-12-05 12:14:09.229665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.116 [2024-12-05 12:14:09.229697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.116 qpair failed and we were unable to recover it. 00:30:35.116 [2024-12-05 12:14:09.229981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.116 [2024-12-05 12:14:09.230012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.116 qpair failed and we were unable to recover it. 00:30:35.116 [2024-12-05 12:14:09.230140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.116 [2024-12-05 12:14:09.230172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.116 qpair failed and we were unable to recover it. 00:30:35.116 [2024-12-05 12:14:09.230355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.116 [2024-12-05 12:14:09.230418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.116 qpair failed and we were unable to recover it. 00:30:35.116 [2024-12-05 12:14:09.230612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.116 [2024-12-05 12:14:09.230644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.116 qpair failed and we were unable to recover it. 
00:30:35.116 [2024-12-05 12:14:09.230830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.116 [2024-12-05 12:14:09.230862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.116 qpair failed and we were unable to recover it. 00:30:35.116 [2024-12-05 12:14:09.231116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.116 [2024-12-05 12:14:09.231147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.116 qpair failed and we were unable to recover it. 00:30:35.116 [2024-12-05 12:14:09.231318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.116 [2024-12-05 12:14:09.231349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.116 qpair failed and we were unable to recover it. 00:30:35.116 [2024-12-05 12:14:09.231629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.116 [2024-12-05 12:14:09.231661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.116 qpair failed and we were unable to recover it. 00:30:35.116 [2024-12-05 12:14:09.231841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.116 [2024-12-05 12:14:09.231873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.116 qpair failed and we were unable to recover it. 
00:30:35.116 [2024-12-05 12:14:09.232072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.116 [2024-12-05 12:14:09.232102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.116 qpair failed and we were unable to recover it. 00:30:35.116 [2024-12-05 12:14:09.232302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.116 [2024-12-05 12:14:09.232334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.116 qpair failed and we were unable to recover it. 00:30:35.116 [2024-12-05 12:14:09.232518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.116 [2024-12-05 12:14:09.232551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.116 qpair failed and we were unable to recover it. 00:30:35.116 [2024-12-05 12:14:09.232732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.116 [2024-12-05 12:14:09.232763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.116 qpair failed and we were unable to recover it. 00:30:35.116 [2024-12-05 12:14:09.232950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.116 [2024-12-05 12:14:09.232981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.116 qpair failed and we were unable to recover it. 
00:30:35.116 [2024-12-05 12:14:09.233165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.116 [2024-12-05 12:14:09.233196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.116 qpair failed and we were unable to recover it. 00:30:35.116 [2024-12-05 12:14:09.233385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.116 [2024-12-05 12:14:09.233418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.116 qpair failed and we were unable to recover it. 00:30:35.116 [2024-12-05 12:14:09.233552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.116 [2024-12-05 12:14:09.233583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.116 qpair failed and we were unable to recover it. 00:30:35.116 [2024-12-05 12:14:09.233755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.116 [2024-12-05 12:14:09.233785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.116 qpair failed and we were unable to recover it. 00:30:35.116 [2024-12-05 12:14:09.234001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.116 [2024-12-05 12:14:09.234032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.116 qpair failed and we were unable to recover it. 
00:30:35.116 [2024-12-05 12:14:09.234232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.116 [2024-12-05 12:14:09.234264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.116 qpair failed and we were unable to recover it. 00:30:35.116 [2024-12-05 12:14:09.234459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.116 [2024-12-05 12:14:09.234493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.116 qpair failed and we were unable to recover it. 00:30:35.116 [2024-12-05 12:14:09.234620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.116 [2024-12-05 12:14:09.234653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.116 qpair failed and we were unable to recover it. 00:30:35.116 [2024-12-05 12:14:09.234767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.116 [2024-12-05 12:14:09.234798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.116 qpair failed and we were unable to recover it. 00:30:35.116 [2024-12-05 12:14:09.234977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.116 [2024-12-05 12:14:09.235009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.116 qpair failed and we were unable to recover it. 
00:30:35.116 [2024-12-05 12:14:09.235182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.116 [2024-12-05 12:14:09.235213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.116 qpair failed and we were unable to recover it. 00:30:35.116 [2024-12-05 12:14:09.235398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.116 [2024-12-05 12:14:09.235433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.116 qpair failed and we were unable to recover it. 00:30:35.116 [2024-12-05 12:14:09.235554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.116 [2024-12-05 12:14:09.235584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.116 qpair failed and we were unable to recover it. 00:30:35.116 [2024-12-05 12:14:09.235708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.116 [2024-12-05 12:14:09.235738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.116 qpair failed and we were unable to recover it. 00:30:35.116 [2024-12-05 12:14:09.235860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.116 [2024-12-05 12:14:09.235890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.116 qpair failed and we were unable to recover it. 
00:30:35.117 [2024-12-05 12:14:09.236059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.117 [2024-12-05 12:14:09.236089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.117 qpair failed and we were unable to recover it. 00:30:35.117 [2024-12-05 12:14:09.236256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.117 [2024-12-05 12:14:09.236287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.117 qpair failed and we were unable to recover it. 00:30:35.117 [2024-12-05 12:14:09.236472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.117 [2024-12-05 12:14:09.236506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.117 qpair failed and we were unable to recover it. 00:30:35.117 [2024-12-05 12:14:09.236700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.117 [2024-12-05 12:14:09.236732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.117 qpair failed and we were unable to recover it. 00:30:35.117 [2024-12-05 12:14:09.236916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.117 [2024-12-05 12:14:09.236948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.117 qpair failed and we were unable to recover it. 
00:30:35.117 [2024-12-05 12:14:09.237133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.117 [2024-12-05 12:14:09.237171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.117 qpair failed and we were unable to recover it. 00:30:35.117 [2024-12-05 12:14:09.237282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.117 [2024-12-05 12:14:09.237311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.117 qpair failed and we were unable to recover it. 00:30:35.117 [2024-12-05 12:14:09.237566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.117 [2024-12-05 12:14:09.237598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.117 qpair failed and we were unable to recover it. 00:30:35.117 [2024-12-05 12:14:09.237845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.117 [2024-12-05 12:14:09.237877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.117 qpair failed and we were unable to recover it. 00:30:35.117 [2024-12-05 12:14:09.238136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.117 [2024-12-05 12:14:09.238167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.117 qpair failed and we were unable to recover it. 
00:30:35.117 [2024-12-05 12:14:09.238341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.117 [2024-12-05 12:14:09.238383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.117 qpair failed and we were unable to recover it. 00:30:35.117 [2024-12-05 12:14:09.238511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.117 [2024-12-05 12:14:09.238541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.117 qpair failed and we were unable to recover it. 00:30:35.117 [2024-12-05 12:14:09.238716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.117 [2024-12-05 12:14:09.238745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.117 qpair failed and we were unable to recover it. 00:30:35.117 [2024-12-05 12:14:09.238864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.117 [2024-12-05 12:14:09.238895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.117 qpair failed and we were unable to recover it. 00:30:35.117 [2024-12-05 12:14:09.239029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.117 [2024-12-05 12:14:09.239059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.117 qpair failed and we were unable to recover it. 
00:30:35.117 [2024-12-05 12:14:09.239241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.117 [2024-12-05 12:14:09.239271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.117 qpair failed and we were unable to recover it. 00:30:35.117 [2024-12-05 12:14:09.239390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.117 [2024-12-05 12:14:09.239423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.117 qpair failed and we were unable to recover it. 00:30:35.117 [2024-12-05 12:14:09.239664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.117 [2024-12-05 12:14:09.239696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.117 qpair failed and we were unable to recover it. 00:30:35.117 [2024-12-05 12:14:09.239881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.117 [2024-12-05 12:14:09.239912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.117 qpair failed and we were unable to recover it. 00:30:35.117 [2024-12-05 12:14:09.240023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.117 [2024-12-05 12:14:09.240056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.117 qpair failed and we were unable to recover it. 
00:30:35.117 [2024-12-05 12:14:09.240319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.117 [2024-12-05 12:14:09.240353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.117 qpair failed and we were unable to recover it. 00:30:35.117 [2024-12-05 12:14:09.240544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.117 [2024-12-05 12:14:09.240576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.117 qpair failed and we were unable to recover it. 00:30:35.117 [2024-12-05 12:14:09.240800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.117 [2024-12-05 12:14:09.240831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.117 qpair failed and we were unable to recover it. 00:30:35.117 [2024-12-05 12:14:09.240952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.117 [2024-12-05 12:14:09.240982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.117 qpair failed and we were unable to recover it. 00:30:35.117 [2024-12-05 12:14:09.241161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.117 [2024-12-05 12:14:09.241191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.117 qpair failed and we were unable to recover it. 
00:30:35.117 [2024-12-05 12:14:09.241316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.117 [2024-12-05 12:14:09.241346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.117 qpair failed and we were unable to recover it. 00:30:35.117 [2024-12-05 12:14:09.241534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.117 [2024-12-05 12:14:09.241564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.117 qpair failed and we were unable to recover it. 00:30:35.117 [2024-12-05 12:14:09.241745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.117 [2024-12-05 12:14:09.241775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.117 qpair failed and we were unable to recover it. 00:30:35.117 [2024-12-05 12:14:09.241945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.117 [2024-12-05 12:14:09.241975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.117 qpair failed and we were unable to recover it. 00:30:35.117 [2024-12-05 12:14:09.242240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.117 [2024-12-05 12:14:09.242271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.117 qpair failed and we were unable to recover it. 
00:30:35.117 [2024-12-05 12:14:09.242458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.117 [2024-12-05 12:14:09.242490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.117 qpair failed and we were unable to recover it. 00:30:35.117 [2024-12-05 12:14:09.242751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.117 [2024-12-05 12:14:09.242784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.117 qpair failed and we were unable to recover it. 00:30:35.117 [2024-12-05 12:14:09.242907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.117 [2024-12-05 12:14:09.242938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.117 qpair failed and we were unable to recover it. 00:30:35.117 [2024-12-05 12:14:09.243192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.117 [2024-12-05 12:14:09.243224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.117 qpair failed and we were unable to recover it. 00:30:35.117 [2024-12-05 12:14:09.243344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.117 [2024-12-05 12:14:09.243385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.117 qpair failed and we were unable to recover it. 
00:30:35.117 [2024-12-05 12:14:09.243575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.117 [2024-12-05 12:14:09.243608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.117 qpair failed and we were unable to recover it. 00:30:35.117 [2024-12-05 12:14:09.243781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.117 [2024-12-05 12:14:09.243812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.117 qpair failed and we were unable to recover it. 00:30:35.117 [2024-12-05 12:14:09.244015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.117 [2024-12-05 12:14:09.244047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.117 qpair failed and we were unable to recover it. 00:30:35.399 [2024-12-05 12:14:09.244229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.399 [2024-12-05 12:14:09.244260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.399 qpair failed and we were unable to recover it. 00:30:35.399 [2024-12-05 12:14:09.244434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.399 [2024-12-05 12:14:09.244466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.399 qpair failed and we were unable to recover it. 
00:30:35.399 [2024-12-05 12:14:09.244598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.399 [2024-12-05 12:14:09.244628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:35.399 qpair failed and we were unable to recover it.
[... the same connect() failed / qpair failed pair for tqpair=0x7fc42c000b90 (addr=10.0.0.2, port=4420) repeats continuously from 12:14:09.244756 through 12:14:09.267994 ...]
00:30:35.402 [2024-12-05 12:14:09.268181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.402 [2024-12-05 12:14:09.268212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.402 qpair failed and we were unable to recover it. 00:30:35.403 [2024-12-05 12:14:09.268414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.403 [2024-12-05 12:14:09.268447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.403 qpair failed and we were unable to recover it. 00:30:35.403 [2024-12-05 12:14:09.268690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.403 [2024-12-05 12:14:09.268723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.403 qpair failed and we were unable to recover it. 00:30:35.403 [2024-12-05 12:14:09.268908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.403 [2024-12-05 12:14:09.268940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.403 qpair failed and we were unable to recover it. 00:30:35.403 [2024-12-05 12:14:09.269125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.403 [2024-12-05 12:14:09.269156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.403 qpair failed and we were unable to recover it. 
00:30:35.403 [2024-12-05 12:14:09.269329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.403 [2024-12-05 12:14:09.269360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.403 qpair failed and we were unable to recover it. 00:30:35.403 [2024-12-05 12:14:09.269605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.403 [2024-12-05 12:14:09.269637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.403 qpair failed and we were unable to recover it. 00:30:35.403 [2024-12-05 12:14:09.269826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.403 [2024-12-05 12:14:09.269857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.403 qpair failed and we were unable to recover it. 00:30:35.403 [2024-12-05 12:14:09.270055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.403 [2024-12-05 12:14:09.270087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.403 qpair failed and we were unable to recover it. 00:30:35.403 [2024-12-05 12:14:09.270210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.403 [2024-12-05 12:14:09.270241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.403 qpair failed and we were unable to recover it. 
00:30:35.403 [2024-12-05 12:14:09.270406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.403 [2024-12-05 12:14:09.270445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.403 qpair failed and we were unable to recover it. 00:30:35.403 [2024-12-05 12:14:09.270690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.403 [2024-12-05 12:14:09.270723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.403 qpair failed and we were unable to recover it. 00:30:35.403 [2024-12-05 12:14:09.270897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.403 [2024-12-05 12:14:09.270929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.403 qpair failed and we were unable to recover it. 00:30:35.403 [2024-12-05 12:14:09.271136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.403 [2024-12-05 12:14:09.271169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.403 qpair failed and we were unable to recover it. 00:30:35.403 [2024-12-05 12:14:09.271359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.403 [2024-12-05 12:14:09.271399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.403 qpair failed and we were unable to recover it. 
00:30:35.403 [2024-12-05 12:14:09.271661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.403 [2024-12-05 12:14:09.271692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.403 qpair failed and we were unable to recover it. 00:30:35.403 [2024-12-05 12:14:09.271872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.403 [2024-12-05 12:14:09.271902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.403 qpair failed and we were unable to recover it. 00:30:35.403 [2024-12-05 12:14:09.272074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.403 [2024-12-05 12:14:09.272104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.403 qpair failed and we were unable to recover it. 00:30:35.403 [2024-12-05 12:14:09.272221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.403 [2024-12-05 12:14:09.272252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.403 qpair failed and we were unable to recover it. 00:30:35.403 [2024-12-05 12:14:09.272422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.403 [2024-12-05 12:14:09.272454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.403 qpair failed and we were unable to recover it. 
00:30:35.403 [2024-12-05 12:14:09.272624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.403 [2024-12-05 12:14:09.272655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.403 qpair failed and we were unable to recover it. 00:30:35.403 [2024-12-05 12:14:09.272824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.403 [2024-12-05 12:14:09.272854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.403 qpair failed and we were unable to recover it. 00:30:35.403 [2024-12-05 12:14:09.273112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.403 [2024-12-05 12:14:09.273143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.403 qpair failed and we were unable to recover it. 00:30:35.403 [2024-12-05 12:14:09.273422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.403 [2024-12-05 12:14:09.273455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.403 qpair failed and we were unable to recover it. 00:30:35.403 [2024-12-05 12:14:09.273652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.403 [2024-12-05 12:14:09.273683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.403 qpair failed and we were unable to recover it. 
00:30:35.403 [2024-12-05 12:14:09.273891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.403 [2024-12-05 12:14:09.273922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.403 qpair failed and we were unable to recover it. 00:30:35.403 [2024-12-05 12:14:09.274106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.403 [2024-12-05 12:14:09.274137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.403 qpair failed and we were unable to recover it. 00:30:35.403 [2024-12-05 12:14:09.274336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.403 [2024-12-05 12:14:09.274396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.403 qpair failed and we were unable to recover it. 00:30:35.403 [2024-12-05 12:14:09.274538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.403 [2024-12-05 12:14:09.274570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.403 qpair failed and we were unable to recover it. 00:30:35.403 [2024-12-05 12:14:09.274779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.403 [2024-12-05 12:14:09.274811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.403 qpair failed and we were unable to recover it. 
00:30:35.403 [2024-12-05 12:14:09.274992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.403 [2024-12-05 12:14:09.275023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.403 qpair failed and we were unable to recover it. 00:30:35.403 [2024-12-05 12:14:09.275197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.403 [2024-12-05 12:14:09.275229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.403 qpair failed and we were unable to recover it. 00:30:35.403 [2024-12-05 12:14:09.275407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.403 [2024-12-05 12:14:09.275440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.403 qpair failed and we were unable to recover it. 00:30:35.403 [2024-12-05 12:14:09.275627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.403 [2024-12-05 12:14:09.275659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.403 qpair failed and we were unable to recover it. 00:30:35.403 [2024-12-05 12:14:09.275858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.403 [2024-12-05 12:14:09.275889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.403 qpair failed and we were unable to recover it. 
00:30:35.403 [2024-12-05 12:14:09.276012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.403 [2024-12-05 12:14:09.276043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.403 qpair failed and we were unable to recover it. 00:30:35.403 [2024-12-05 12:14:09.276216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.403 [2024-12-05 12:14:09.276248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.403 qpair failed and we were unable to recover it. 00:30:35.403 [2024-12-05 12:14:09.276358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.403 [2024-12-05 12:14:09.276399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.403 qpair failed and we were unable to recover it. 00:30:35.403 [2024-12-05 12:14:09.276620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.403 [2024-12-05 12:14:09.276653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.403 qpair failed and we were unable to recover it. 00:30:35.404 [2024-12-05 12:14:09.276835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.404 [2024-12-05 12:14:09.276867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.404 qpair failed and we were unable to recover it. 
00:30:35.404 [2024-12-05 12:14:09.276983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.404 [2024-12-05 12:14:09.277022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.404 qpair failed and we were unable to recover it. 00:30:35.404 [2024-12-05 12:14:09.277147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.404 [2024-12-05 12:14:09.277178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.404 qpair failed and we were unable to recover it. 00:30:35.404 [2024-12-05 12:14:09.277454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.404 [2024-12-05 12:14:09.277487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.404 qpair failed and we were unable to recover it. 00:30:35.404 [2024-12-05 12:14:09.277668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.404 [2024-12-05 12:14:09.277702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.404 qpair failed and we were unable to recover it. 00:30:35.404 [2024-12-05 12:14:09.277825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.404 [2024-12-05 12:14:09.277856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.404 qpair failed and we were unable to recover it. 
00:30:35.404 [2024-12-05 12:14:09.277955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.404 [2024-12-05 12:14:09.277985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.404 qpair failed and we were unable to recover it. 00:30:35.404 [2024-12-05 12:14:09.278178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.404 [2024-12-05 12:14:09.278210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.404 qpair failed and we were unable to recover it. 00:30:35.404 [2024-12-05 12:14:09.278330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.404 [2024-12-05 12:14:09.278361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.404 qpair failed and we were unable to recover it. 00:30:35.404 [2024-12-05 12:14:09.278501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.404 [2024-12-05 12:14:09.278533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.404 qpair failed and we were unable to recover it. 00:30:35.404 [2024-12-05 12:14:09.278738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.404 [2024-12-05 12:14:09.278770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.404 qpair failed and we were unable to recover it. 
00:30:35.404 [2024-12-05 12:14:09.278904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.404 [2024-12-05 12:14:09.278936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.404 qpair failed and we were unable to recover it. 00:30:35.404 [2024-12-05 12:14:09.279180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.404 [2024-12-05 12:14:09.279212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.404 qpair failed and we were unable to recover it. 00:30:35.405 [2024-12-05 12:14:09.279387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.405 [2024-12-05 12:14:09.279419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.405 qpair failed and we were unable to recover it. 00:30:35.405 [2024-12-05 12:14:09.279596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.405 [2024-12-05 12:14:09.279630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.405 qpair failed and we were unable to recover it. 00:30:35.405 [2024-12-05 12:14:09.279873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.405 [2024-12-05 12:14:09.279904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.405 qpair failed and we were unable to recover it. 
00:30:35.405 [2024-12-05 12:14:09.280032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.405 [2024-12-05 12:14:09.280063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.405 qpair failed and we were unable to recover it. 00:30:35.405 [2024-12-05 12:14:09.280229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.405 [2024-12-05 12:14:09.280261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.405 qpair failed and we were unable to recover it. 00:30:35.405 [2024-12-05 12:14:09.280449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.405 [2024-12-05 12:14:09.280482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.405 qpair failed and we were unable to recover it. 00:30:35.405 [2024-12-05 12:14:09.280663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.405 [2024-12-05 12:14:09.280695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.405 qpair failed and we were unable to recover it. 00:30:35.405 [2024-12-05 12:14:09.280931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.405 [2024-12-05 12:14:09.280963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.405 qpair failed and we were unable to recover it. 
00:30:35.405 [2024-12-05 12:14:09.281198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.405 [2024-12-05 12:14:09.281229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.405 qpair failed and we were unable to recover it. 00:30:35.405 [2024-12-05 12:14:09.281344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.405 [2024-12-05 12:14:09.281384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.405 qpair failed and we were unable to recover it. 00:30:35.405 [2024-12-05 12:14:09.281623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.405 [2024-12-05 12:14:09.281655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.405 qpair failed and we were unable to recover it. 00:30:35.405 [2024-12-05 12:14:09.281896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.405 [2024-12-05 12:14:09.281927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.405 qpair failed and we were unable to recover it. 00:30:35.405 [2024-12-05 12:14:09.282130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.405 [2024-12-05 12:14:09.282162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.405 qpair failed and we were unable to recover it. 
00:30:35.405 [2024-12-05 12:14:09.282295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.405 [2024-12-05 12:14:09.282327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.405 qpair failed and we were unable to recover it. 00:30:35.405 [2024-12-05 12:14:09.282525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.405 [2024-12-05 12:14:09.282557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.405 qpair failed and we were unable to recover it. 00:30:35.405 [2024-12-05 12:14:09.282750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.405 [2024-12-05 12:14:09.282781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.405 qpair failed and we were unable to recover it. 00:30:35.405 [2024-12-05 12:14:09.282960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.405 [2024-12-05 12:14:09.282991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.405 qpair failed and we were unable to recover it. 00:30:35.405 [2024-12-05 12:14:09.283093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.405 [2024-12-05 12:14:09.283123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.405 qpair failed and we were unable to recover it. 
00:30:35.405 [2024-12-05 12:14:09.283322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.405 [2024-12-05 12:14:09.283354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.405 qpair failed and we were unable to recover it. 00:30:35.405 [2024-12-05 12:14:09.283575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.405 [2024-12-05 12:14:09.283607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.405 qpair failed and we were unable to recover it. 00:30:35.405 [2024-12-05 12:14:09.283797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.405 [2024-12-05 12:14:09.283829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.405 qpair failed and we were unable to recover it. 00:30:35.405 [2024-12-05 12:14:09.284011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.405 [2024-12-05 12:14:09.284043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.405 qpair failed and we were unable to recover it. 00:30:35.405 [2024-12-05 12:14:09.284280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.405 [2024-12-05 12:14:09.284311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.405 qpair failed and we were unable to recover it. 
00:30:35.405 [2024-12-05 12:14:09.284430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.405 [2024-12-05 12:14:09.284464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.405 qpair failed and we were unable to recover it.
[preceding three-line error sequence repeated verbatim (connect() failed, errno = 111; sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) through timestamp 12:14:09.308842 -- duplicates elided]
00:30:35.408 [2024-12-05 12:14:09.309027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.408 [2024-12-05 12:14:09.309060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.408 qpair failed and we were unable to recover it. 00:30:35.408 [2024-12-05 12:14:09.309160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.408 [2024-12-05 12:14:09.309190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.408 qpair failed and we were unable to recover it. 00:30:35.408 [2024-12-05 12:14:09.309428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.408 [2024-12-05 12:14:09.309460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.408 qpair failed and we were unable to recover it. 00:30:35.408 [2024-12-05 12:14:09.309597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.408 [2024-12-05 12:14:09.309629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.408 qpair failed and we were unable to recover it. 00:30:35.408 [2024-12-05 12:14:09.309806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.408 [2024-12-05 12:14:09.309838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.408 qpair failed and we were unable to recover it. 
00:30:35.408 [2024-12-05 12:14:09.310025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.408 [2024-12-05 12:14:09.310056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.408 qpair failed and we were unable to recover it. 00:30:35.408 [2024-12-05 12:14:09.310265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.408 [2024-12-05 12:14:09.310296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.408 qpair failed and we were unable to recover it. 00:30:35.408 [2024-12-05 12:14:09.310414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.408 [2024-12-05 12:14:09.310453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.408 qpair failed and we were unable to recover it. 00:30:35.408 [2024-12-05 12:14:09.310560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.408 [2024-12-05 12:14:09.310591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.408 qpair failed and we were unable to recover it. 00:30:35.408 [2024-12-05 12:14:09.310803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.409 [2024-12-05 12:14:09.310835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.409 qpair failed and we were unable to recover it. 
00:30:35.409 [2024-12-05 12:14:09.311096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.409 [2024-12-05 12:14:09.311128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.409 qpair failed and we were unable to recover it. 00:30:35.409 [2024-12-05 12:14:09.311262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.409 [2024-12-05 12:14:09.311293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.409 qpair failed and we were unable to recover it. 00:30:35.409 [2024-12-05 12:14:09.311484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.409 [2024-12-05 12:14:09.311517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.409 qpair failed and we were unable to recover it. 00:30:35.409 [2024-12-05 12:14:09.311711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.409 [2024-12-05 12:14:09.311743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.409 qpair failed and we were unable to recover it. 00:30:35.409 [2024-12-05 12:14:09.311967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.409 [2024-12-05 12:14:09.311999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.409 qpair failed and we were unable to recover it. 
00:30:35.409 [2024-12-05 12:14:09.312126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.409 [2024-12-05 12:14:09.312157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.409 qpair failed and we were unable to recover it. 00:30:35.409 [2024-12-05 12:14:09.312352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.409 [2024-12-05 12:14:09.312390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.409 qpair failed and we were unable to recover it. 00:30:35.409 [2024-12-05 12:14:09.312575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.409 [2024-12-05 12:14:09.312607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.409 qpair failed and we were unable to recover it. 00:30:35.409 [2024-12-05 12:14:09.312789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.409 [2024-12-05 12:14:09.312821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.409 qpair failed and we were unable to recover it. 00:30:35.409 [2024-12-05 12:14:09.313010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.409 [2024-12-05 12:14:09.313043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.409 qpair failed and we were unable to recover it. 
00:30:35.409 [2024-12-05 12:14:09.313339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.409 [2024-12-05 12:14:09.313380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.409 qpair failed and we were unable to recover it. 00:30:35.409 [2024-12-05 12:14:09.313626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.409 [2024-12-05 12:14:09.313657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.409 qpair failed and we were unable to recover it. 00:30:35.409 [2024-12-05 12:14:09.313827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.409 [2024-12-05 12:14:09.313859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.409 qpair failed and we were unable to recover it. 00:30:35.409 [2024-12-05 12:14:09.314035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.409 [2024-12-05 12:14:09.314067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.409 qpair failed and we were unable to recover it. 00:30:35.409 [2024-12-05 12:14:09.314304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.409 [2024-12-05 12:14:09.314335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.409 qpair failed and we were unable to recover it. 
00:30:35.409 [2024-12-05 12:14:09.314463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.409 [2024-12-05 12:14:09.314495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.409 qpair failed and we were unable to recover it. 00:30:35.409 [2024-12-05 12:14:09.314664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.409 [2024-12-05 12:14:09.314695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.409 qpair failed and we were unable to recover it. 00:30:35.409 [2024-12-05 12:14:09.314890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.409 [2024-12-05 12:14:09.314921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.409 qpair failed and we were unable to recover it. 00:30:35.409 [2024-12-05 12:14:09.315159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.409 [2024-12-05 12:14:09.315189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.409 qpair failed and we were unable to recover it. 00:30:35.409 [2024-12-05 12:14:09.315390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.409 [2024-12-05 12:14:09.315424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.409 qpair failed and we were unable to recover it. 
00:30:35.409 [2024-12-05 12:14:09.315596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.409 [2024-12-05 12:14:09.315628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.409 qpair failed and we were unable to recover it. 00:30:35.409 [2024-12-05 12:14:09.315801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.409 [2024-12-05 12:14:09.315832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.409 qpair failed and we were unable to recover it. 00:30:35.409 [2024-12-05 12:14:09.316121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.409 [2024-12-05 12:14:09.316153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.409 qpair failed and we were unable to recover it. 00:30:35.409 [2024-12-05 12:14:09.316403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.409 [2024-12-05 12:14:09.316438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.409 qpair failed and we were unable to recover it. 00:30:35.409 [2024-12-05 12:14:09.316637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.409 [2024-12-05 12:14:09.316668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.409 qpair failed and we were unable to recover it. 
00:30:35.409 [2024-12-05 12:14:09.316870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.409 [2024-12-05 12:14:09.316903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.409 qpair failed and we were unable to recover it. 00:30:35.409 [2024-12-05 12:14:09.317036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.409 [2024-12-05 12:14:09.317068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.409 qpair failed and we were unable to recover it. 00:30:35.409 [2024-12-05 12:14:09.317193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.409 [2024-12-05 12:14:09.317224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.409 qpair failed and we were unable to recover it. 00:30:35.409 [2024-12-05 12:14:09.317465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.409 [2024-12-05 12:14:09.317498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.409 qpair failed and we were unable to recover it. 00:30:35.409 [2024-12-05 12:14:09.317677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.409 [2024-12-05 12:14:09.317709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.409 qpair failed and we were unable to recover it. 
00:30:35.409 [2024-12-05 12:14:09.317913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.409 [2024-12-05 12:14:09.317945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.409 qpair failed and we were unable to recover it. 00:30:35.409 [2024-12-05 12:14:09.318080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.409 [2024-12-05 12:14:09.318112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.409 qpair failed and we were unable to recover it. 00:30:35.409 [2024-12-05 12:14:09.318227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.409 [2024-12-05 12:14:09.318260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.409 qpair failed and we were unable to recover it. 00:30:35.409 [2024-12-05 12:14:09.318521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.410 [2024-12-05 12:14:09.318555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.410 qpair failed and we were unable to recover it. 00:30:35.410 [2024-12-05 12:14:09.318681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.410 [2024-12-05 12:14:09.318714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.410 qpair failed and we were unable to recover it. 
00:30:35.410 [2024-12-05 12:14:09.318978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.410 [2024-12-05 12:14:09.319010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.410 qpair failed and we were unable to recover it. 00:30:35.410 [2024-12-05 12:14:09.319209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.410 [2024-12-05 12:14:09.319242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.410 qpair failed and we were unable to recover it. 00:30:35.410 [2024-12-05 12:14:09.319426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.410 [2024-12-05 12:14:09.319465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.410 qpair failed and we were unable to recover it. 00:30:35.410 [2024-12-05 12:14:09.319675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.410 [2024-12-05 12:14:09.319707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.410 qpair failed and we were unable to recover it. 00:30:35.410 [2024-12-05 12:14:09.319839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.410 [2024-12-05 12:14:09.319870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.410 qpair failed and we were unable to recover it. 
00:30:35.410 [2024-12-05 12:14:09.320135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.410 [2024-12-05 12:14:09.320166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.410 qpair failed and we were unable to recover it. 00:30:35.410 [2024-12-05 12:14:09.320295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.410 [2024-12-05 12:14:09.320326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.410 qpair failed and we were unable to recover it. 00:30:35.410 [2024-12-05 12:14:09.320621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.410 [2024-12-05 12:14:09.320657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.410 qpair failed and we were unable to recover it. 00:30:35.410 [2024-12-05 12:14:09.320785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.410 [2024-12-05 12:14:09.320816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.410 qpair failed and we were unable to recover it. 00:30:35.410 [2024-12-05 12:14:09.321005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.410 [2024-12-05 12:14:09.321040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.410 qpair failed and we were unable to recover it. 
00:30:35.410 [2024-12-05 12:14:09.321142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.410 [2024-12-05 12:14:09.321173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.410 qpair failed and we were unable to recover it. 00:30:35.410 [2024-12-05 12:14:09.321355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.410 [2024-12-05 12:14:09.321397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.410 qpair failed and we were unable to recover it. 00:30:35.410 [2024-12-05 12:14:09.321511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.410 [2024-12-05 12:14:09.321542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.410 qpair failed and we were unable to recover it. 00:30:35.410 [2024-12-05 12:14:09.321757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.410 [2024-12-05 12:14:09.321788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.410 qpair failed and we were unable to recover it. 00:30:35.410 [2024-12-05 12:14:09.322032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.410 [2024-12-05 12:14:09.322063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.410 qpair failed and we were unable to recover it. 
00:30:35.410 [2024-12-05 12:14:09.322178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.410 [2024-12-05 12:14:09.322207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.410 qpair failed and we were unable to recover it. 00:30:35.410 [2024-12-05 12:14:09.322407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.410 [2024-12-05 12:14:09.322440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.410 qpair failed and we were unable to recover it. 00:30:35.410 [2024-12-05 12:14:09.322637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.410 [2024-12-05 12:14:09.322668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.410 qpair failed and we were unable to recover it. 00:30:35.410 [2024-12-05 12:14:09.322932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.410 [2024-12-05 12:14:09.322963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.410 qpair failed and we were unable to recover it. 00:30:35.410 [2024-12-05 12:14:09.323201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.410 [2024-12-05 12:14:09.323231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.410 qpair failed and we were unable to recover it. 
00:30:35.410 [2024-12-05 12:14:09.323421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.410 [2024-12-05 12:14:09.323455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.410 qpair failed and we were unable to recover it. 00:30:35.410 [2024-12-05 12:14:09.323593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.410 [2024-12-05 12:14:09.323627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.410 qpair failed and we were unable to recover it. 00:30:35.410 [2024-12-05 12:14:09.323808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.410 [2024-12-05 12:14:09.323839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.410 qpair failed and we were unable to recover it. 00:30:35.410 [2024-12-05 12:14:09.324109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.410 [2024-12-05 12:14:09.324141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.410 qpair failed and we were unable to recover it. 00:30:35.410 [2024-12-05 12:14:09.324316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.410 [2024-12-05 12:14:09.324347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.410 qpair failed and we were unable to recover it. 
00:30:35.410 [2024-12-05 12:14:09.324532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.410 [2024-12-05 12:14:09.324564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.410 qpair failed and we were unable to recover it. 00:30:35.410 [2024-12-05 12:14:09.324744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.410 [2024-12-05 12:14:09.324775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.410 qpair failed and we were unable to recover it. 00:30:35.410 [2024-12-05 12:14:09.324951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.410 [2024-12-05 12:14:09.324982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.410 qpair failed and we were unable to recover it. 00:30:35.410 [2024-12-05 12:14:09.325234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.410 [2024-12-05 12:14:09.325265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.410 qpair failed and we were unable to recover it. 00:30:35.410 [2024-12-05 12:14:09.325464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.410 [2024-12-05 12:14:09.325497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.410 qpair failed and we were unable to recover it. 
00:30:35.410 [2024-12-05 12:14:09.325604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.410 [2024-12-05 12:14:09.325636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.410 qpair failed and we were unable to recover it. 00:30:35.410 [2024-12-05 12:14:09.325906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.410 [2024-12-05 12:14:09.325937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.410 qpair failed and we were unable to recover it. 00:30:35.410 [2024-12-05 12:14:09.326144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.410 [2024-12-05 12:14:09.326175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.410 qpair failed and we were unable to recover it. 00:30:35.410 [2024-12-05 12:14:09.326417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.410 [2024-12-05 12:14:09.326450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.410 qpair failed and we were unable to recover it. 00:30:35.410 [2024-12-05 12:14:09.326718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.410 [2024-12-05 12:14:09.326750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.410 qpair failed and we were unable to recover it. 
00:30:35.413 [2024-12-05 12:14:09.350431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.413 [2024-12-05 12:14:09.350465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.413 qpair failed and we were unable to recover it. 00:30:35.413 [2024-12-05 12:14:09.350651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.413 [2024-12-05 12:14:09.350682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.413 qpair failed and we were unable to recover it. 00:30:35.413 [2024-12-05 12:14:09.350876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.413 [2024-12-05 12:14:09.350909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.413 qpair failed and we were unable to recover it. 00:30:35.413 [2024-12-05 12:14:09.351037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.413 [2024-12-05 12:14:09.351077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.413 qpair failed and we were unable to recover it. 00:30:35.413 [2024-12-05 12:14:09.351282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.413 [2024-12-05 12:14:09.351316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.413 qpair failed and we were unable to recover it. 
00:30:35.413 [2024-12-05 12:14:09.351612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.414 [2024-12-05 12:14:09.351646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.414 qpair failed and we were unable to recover it. 00:30:35.414 [2024-12-05 12:14:09.351788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.414 [2024-12-05 12:14:09.351820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.414 qpair failed and we were unable to recover it. 00:30:35.414 [2024-12-05 12:14:09.351991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.414 [2024-12-05 12:14:09.352026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.414 qpair failed and we were unable to recover it. 00:30:35.414 [2024-12-05 12:14:09.352209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.414 [2024-12-05 12:14:09.352252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.414 qpair failed and we were unable to recover it. 00:30:35.414 [2024-12-05 12:14:09.352516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.414 [2024-12-05 12:14:09.352550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.414 qpair failed and we were unable to recover it. 
00:30:35.414 [2024-12-05 12:14:09.352672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.414 [2024-12-05 12:14:09.352707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.414 qpair failed and we were unable to recover it. 00:30:35.414 [2024-12-05 12:14:09.352897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.414 [2024-12-05 12:14:09.352929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.414 qpair failed and we were unable to recover it. 00:30:35.414 [2024-12-05 12:14:09.353039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.414 [2024-12-05 12:14:09.353070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.414 qpair failed and we were unable to recover it. 00:30:35.414 [2024-12-05 12:14:09.353275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.414 [2024-12-05 12:14:09.353309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.414 qpair failed and we were unable to recover it. 00:30:35.414 [2024-12-05 12:14:09.353545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.414 [2024-12-05 12:14:09.353581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.414 qpair failed and we were unable to recover it. 
00:30:35.414 [2024-12-05 12:14:09.353708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.414 [2024-12-05 12:14:09.353740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.414 qpair failed and we were unable to recover it. 00:30:35.414 [2024-12-05 12:14:09.353928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.414 [2024-12-05 12:14:09.353959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.414 qpair failed and we were unable to recover it. 00:30:35.414 [2024-12-05 12:14:09.354129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.414 [2024-12-05 12:14:09.354170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.414 qpair failed and we were unable to recover it. 00:30:35.414 [2024-12-05 12:14:09.354286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.414 [2024-12-05 12:14:09.354318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.414 qpair failed and we were unable to recover it. 00:30:35.414 [2024-12-05 12:14:09.354534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.414 [2024-12-05 12:14:09.354571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.414 qpair failed and we were unable to recover it. 
00:30:35.414 [2024-12-05 12:14:09.354849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.414 [2024-12-05 12:14:09.354881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.414 qpair failed and we were unable to recover it. 00:30:35.414 [2024-12-05 12:14:09.355068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.414 [2024-12-05 12:14:09.355102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.414 qpair failed and we were unable to recover it. 00:30:35.414 [2024-12-05 12:14:09.355314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.414 [2024-12-05 12:14:09.355347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.414 qpair failed and we were unable to recover it. 00:30:35.414 [2024-12-05 12:14:09.355485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.414 [2024-12-05 12:14:09.355521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.414 qpair failed and we were unable to recover it. 00:30:35.414 [2024-12-05 12:14:09.355642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.414 [2024-12-05 12:14:09.355674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.414 qpair failed and we were unable to recover it. 
00:30:35.414 [2024-12-05 12:14:09.355781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.414 [2024-12-05 12:14:09.355812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.414 qpair failed and we were unable to recover it. 00:30:35.414 [2024-12-05 12:14:09.355958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.414 [2024-12-05 12:14:09.355992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.414 qpair failed and we were unable to recover it. 00:30:35.414 [2024-12-05 12:14:09.356100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.414 [2024-12-05 12:14:09.356132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.414 qpair failed and we were unable to recover it. 00:30:35.414 [2024-12-05 12:14:09.356414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.414 [2024-12-05 12:14:09.356453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.414 qpair failed and we were unable to recover it. 00:30:35.414 [2024-12-05 12:14:09.356648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.414 [2024-12-05 12:14:09.356681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.414 qpair failed and we were unable to recover it. 
00:30:35.414 [2024-12-05 12:14:09.356935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.414 [2024-12-05 12:14:09.356970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.414 qpair failed and we were unable to recover it. 00:30:35.414 [2024-12-05 12:14:09.357243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.414 [2024-12-05 12:14:09.357278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.414 qpair failed and we were unable to recover it. 00:30:35.414 [2024-12-05 12:14:09.357453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.414 [2024-12-05 12:14:09.357489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.414 qpair failed and we were unable to recover it. 00:30:35.414 [2024-12-05 12:14:09.357681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.414 [2024-12-05 12:14:09.357712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.414 qpair failed and we were unable to recover it. 00:30:35.414 [2024-12-05 12:14:09.357916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.414 [2024-12-05 12:14:09.357954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.414 qpair failed and we were unable to recover it. 
00:30:35.414 [2024-12-05 12:14:09.358205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.414 [2024-12-05 12:14:09.358240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.414 qpair failed and we were unable to recover it. 00:30:35.414 [2024-12-05 12:14:09.358381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.414 [2024-12-05 12:14:09.358414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.414 qpair failed and we were unable to recover it. 00:30:35.414 [2024-12-05 12:14:09.358618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.414 [2024-12-05 12:14:09.358653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.414 qpair failed and we were unable to recover it. 00:30:35.414 [2024-12-05 12:14:09.358858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.414 [2024-12-05 12:14:09.358890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.414 qpair failed and we were unable to recover it. 00:30:35.414 [2024-12-05 12:14:09.359080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.414 [2024-12-05 12:14:09.359111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.414 qpair failed and we were unable to recover it. 
00:30:35.415 [2024-12-05 12:14:09.359302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.415 [2024-12-05 12:14:09.359333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.415 qpair failed and we were unable to recover it. 00:30:35.415 [2024-12-05 12:14:09.359612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.415 [2024-12-05 12:14:09.359646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.415 qpair failed and we were unable to recover it. 00:30:35.415 [2024-12-05 12:14:09.359820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.415 [2024-12-05 12:14:09.359851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.415 qpair failed and we were unable to recover it. 00:30:35.415 [2024-12-05 12:14:09.360041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.415 [2024-12-05 12:14:09.360072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.415 qpair failed and we were unable to recover it. 00:30:35.415 [2024-12-05 12:14:09.360218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.415 [2024-12-05 12:14:09.360251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.415 qpair failed and we were unable to recover it. 
00:30:35.415 [2024-12-05 12:14:09.360384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.415 [2024-12-05 12:14:09.360417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.415 qpair failed and we were unable to recover it. 00:30:35.415 [2024-12-05 12:14:09.360526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.415 [2024-12-05 12:14:09.360558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.415 qpair failed and we were unable to recover it. 00:30:35.415 [2024-12-05 12:14:09.360726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.415 [2024-12-05 12:14:09.360758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.415 qpair failed and we were unable to recover it. 00:30:35.415 [2024-12-05 12:14:09.360875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.415 [2024-12-05 12:14:09.360906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.415 qpair failed and we were unable to recover it. 00:30:35.415 [2024-12-05 12:14:09.361040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.415 [2024-12-05 12:14:09.361071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.415 qpair failed and we were unable to recover it. 
00:30:35.415 [2024-12-05 12:14:09.361274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.415 [2024-12-05 12:14:09.361305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.415 qpair failed and we were unable to recover it. 00:30:35.415 [2024-12-05 12:14:09.361501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.415 [2024-12-05 12:14:09.361533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.415 qpair failed and we were unable to recover it. 00:30:35.415 [2024-12-05 12:14:09.361648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.415 [2024-12-05 12:14:09.361678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.415 qpair failed and we were unable to recover it. 00:30:35.415 [2024-12-05 12:14:09.361873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.415 [2024-12-05 12:14:09.361904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.415 qpair failed and we were unable to recover it. 00:30:35.415 [2024-12-05 12:14:09.362089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.415 [2024-12-05 12:14:09.362121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.415 qpair failed and we were unable to recover it. 
00:30:35.415 [2024-12-05 12:14:09.362316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.415 [2024-12-05 12:14:09.362348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.415 qpair failed and we were unable to recover it. 00:30:35.415 [2024-12-05 12:14:09.362482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.415 [2024-12-05 12:14:09.362513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.415 qpair failed and we were unable to recover it. 00:30:35.415 [2024-12-05 12:14:09.362684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.415 [2024-12-05 12:14:09.362721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.415 qpair failed and we were unable to recover it. 00:30:35.415 [2024-12-05 12:14:09.362836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.415 [2024-12-05 12:14:09.362867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.415 qpair failed and we were unable to recover it. 00:30:35.415 [2024-12-05 12:14:09.363148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.415 [2024-12-05 12:14:09.363180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.415 qpair failed and we were unable to recover it. 
00:30:35.415 [2024-12-05 12:14:09.363387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.415 [2024-12-05 12:14:09.363427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.415 qpair failed and we were unable to recover it. 00:30:35.415 [2024-12-05 12:14:09.363530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.415 [2024-12-05 12:14:09.363564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.415 qpair failed and we were unable to recover it. 00:30:35.415 [2024-12-05 12:14:09.363675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.415 [2024-12-05 12:14:09.363704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.415 qpair failed and we were unable to recover it. 00:30:35.415 [2024-12-05 12:14:09.363831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.415 [2024-12-05 12:14:09.363861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.415 qpair failed and we were unable to recover it. 00:30:35.415 [2024-12-05 12:14:09.364045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.415 [2024-12-05 12:14:09.364075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.415 qpair failed and we were unable to recover it. 
00:30:35.415 [2024-12-05 12:14:09.364268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.415 [2024-12-05 12:14:09.364297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.415 qpair failed and we were unable to recover it. 00:30:35.415 [2024-12-05 12:14:09.364470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.415 [2024-12-05 12:14:09.364502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.415 qpair failed and we were unable to recover it. 00:30:35.415 [2024-12-05 12:14:09.364738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.415 [2024-12-05 12:14:09.364770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.415 qpair failed and we were unable to recover it. 00:30:35.415 [2024-12-05 12:14:09.364882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.415 [2024-12-05 12:14:09.364911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.415 qpair failed and we were unable to recover it. 00:30:35.415 [2024-12-05 12:14:09.365149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.415 [2024-12-05 12:14:09.365180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.415 qpair failed and we were unable to recover it. 
00:30:35.415 [2024-12-05 12:14:09.365391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.415 [2024-12-05 12:14:09.365424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.415 qpair failed and we were unable to recover it. 00:30:35.415 [2024-12-05 12:14:09.365691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.415 [2024-12-05 12:14:09.365723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.415 qpair failed and we were unable to recover it. 00:30:35.415 [2024-12-05 12:14:09.365908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.415 [2024-12-05 12:14:09.365938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.415 qpair failed and we were unable to recover it. 00:30:35.415 [2024-12-05 12:14:09.366074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.415 [2024-12-05 12:14:09.366106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.415 qpair failed and we were unable to recover it. 00:30:35.415 [2024-12-05 12:14:09.366221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.415 [2024-12-05 12:14:09.366251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.415 qpair failed and we were unable to recover it. 
00:30:35.415 [2024-12-05 12:14:09.366488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.415 [2024-12-05 12:14:09.366522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.415 qpair failed and we were unable to recover it. 00:30:35.415 [2024-12-05 12:14:09.366719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.415 [2024-12-05 12:14:09.366751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.415 qpair failed and we were unable to recover it. 00:30:35.415 [2024-12-05 12:14:09.366961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.415 [2024-12-05 12:14:09.366992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.415 qpair failed and we were unable to recover it. 00:30:35.415 [2024-12-05 12:14:09.367194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.416 [2024-12-05 12:14:09.367226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.416 qpair failed and we were unable to recover it. 00:30:35.416 [2024-12-05 12:14:09.367395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.416 [2024-12-05 12:14:09.367427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.416 qpair failed and we were unable to recover it. 
00:30:35.418 [2024-12-05 12:14:09.385059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.418 [2024-12-05 12:14:09.385092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.418 qpair failed and we were unable to recover it. 00:30:35.418 [2024-12-05 12:14:09.385271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.418 [2024-12-05 12:14:09.385305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.418 qpair failed and we were unable to recover it. 00:30:35.418 [2024-12-05 12:14:09.385433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.418 [2024-12-05 12:14:09.385468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.418 qpair failed and we were unable to recover it. 00:30:35.418 [2024-12-05 12:14:09.385657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.418 [2024-12-05 12:14:09.385690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:35.418 qpair failed and we were unable to recover it. 00:30:35.418 [2024-12-05 12:14:09.385923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.418 [2024-12-05 12:14:09.385994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.418 qpair failed and we were unable to recover it. 
00:30:35.419 [2024-12-05 12:14:09.391839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.419 [2024-12-05 12:14:09.391869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.419 qpair failed and we were unable to recover it. 00:30:35.419 [2024-12-05 12:14:09.391991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.419 [2024-12-05 12:14:09.392021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.419 qpair failed and we were unable to recover it. 00:30:35.419 [2024-12-05 12:14:09.392207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.419 [2024-12-05 12:14:09.392238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.419 qpair failed and we were unable to recover it. 00:30:35.419 [2024-12-05 12:14:09.392473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.419 [2024-12-05 12:14:09.392506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.419 qpair failed and we were unable to recover it. 00:30:35.419 [2024-12-05 12:14:09.392643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.419 [2024-12-05 12:14:09.392675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.419 qpair failed and we were unable to recover it. 
00:30:35.419 [2024-12-05 12:14:09.392851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.419 [2024-12-05 12:14:09.392882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.419 qpair failed and we were unable to recover it. 00:30:35.419 [2024-12-05 12:14:09.393066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.419 [2024-12-05 12:14:09.393097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.419 qpair failed and we were unable to recover it. 00:30:35.419 [2024-12-05 12:14:09.393270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.419 [2024-12-05 12:14:09.393301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.419 qpair failed and we were unable to recover it. 00:30:35.419 [2024-12-05 12:14:09.393572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.419 [2024-12-05 12:14:09.393605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.419 qpair failed and we were unable to recover it. 00:30:35.419 [2024-12-05 12:14:09.393793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.419 [2024-12-05 12:14:09.393824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.419 qpair failed and we were unable to recover it. 
00:30:35.419 [2024-12-05 12:14:09.394010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.419 [2024-12-05 12:14:09.394042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.419 qpair failed and we were unable to recover it. 00:30:35.419 [2024-12-05 12:14:09.394177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.419 [2024-12-05 12:14:09.394209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.419 qpair failed and we were unable to recover it. 00:30:35.419 [2024-12-05 12:14:09.394493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.419 [2024-12-05 12:14:09.394524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.419 qpair failed and we were unable to recover it. 00:30:35.419 [2024-12-05 12:14:09.394718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.419 [2024-12-05 12:14:09.394750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.419 qpair failed and we were unable to recover it. 00:30:35.419 [2024-12-05 12:14:09.394941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.419 [2024-12-05 12:14:09.394978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.419 qpair failed and we were unable to recover it. 
00:30:35.419 [2024-12-05 12:14:09.395151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.419 [2024-12-05 12:14:09.395182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.419 qpair failed and we were unable to recover it. 00:30:35.419 [2024-12-05 12:14:09.395311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.419 [2024-12-05 12:14:09.395343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.419 qpair failed and we were unable to recover it. 00:30:35.419 [2024-12-05 12:14:09.395468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.419 [2024-12-05 12:14:09.395498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.419 qpair failed and we were unable to recover it. 00:30:35.419 [2024-12-05 12:14:09.395668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.419 [2024-12-05 12:14:09.395700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.419 qpair failed and we were unable to recover it. 00:30:35.419 [2024-12-05 12:14:09.395891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.419 [2024-12-05 12:14:09.395922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.419 qpair failed and we were unable to recover it. 
00:30:35.419 [2024-12-05 12:14:09.396033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.419 [2024-12-05 12:14:09.396063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.419 qpair failed and we were unable to recover it. 00:30:35.419 [2024-12-05 12:14:09.396191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.419 [2024-12-05 12:14:09.396223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.419 qpair failed and we were unable to recover it. 00:30:35.419 [2024-12-05 12:14:09.396460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.419 [2024-12-05 12:14:09.396493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.419 qpair failed and we were unable to recover it. 00:30:35.419 [2024-12-05 12:14:09.396667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.419 [2024-12-05 12:14:09.396698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.419 qpair failed and we were unable to recover it. 00:30:35.419 [2024-12-05 12:14:09.396819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.419 [2024-12-05 12:14:09.396850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.419 qpair failed and we were unable to recover it. 
00:30:35.419 [2024-12-05 12:14:09.397048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.419 [2024-12-05 12:14:09.397080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.419 qpair failed and we were unable to recover it. 00:30:35.419 [2024-12-05 12:14:09.397344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.419 [2024-12-05 12:14:09.397384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.419 qpair failed and we were unable to recover it. 00:30:35.419 [2024-12-05 12:14:09.397573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.419 [2024-12-05 12:14:09.397603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.419 qpair failed and we were unable to recover it. 00:30:35.419 [2024-12-05 12:14:09.397725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.419 [2024-12-05 12:14:09.397755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.419 qpair failed and we were unable to recover it. 00:30:35.419 [2024-12-05 12:14:09.397932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.419 [2024-12-05 12:14:09.397964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.419 qpair failed and we were unable to recover it. 
00:30:35.419 [2024-12-05 12:14:09.398152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.419 [2024-12-05 12:14:09.398184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.419 qpair failed and we were unable to recover it. 00:30:35.419 [2024-12-05 12:14:09.398366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.419 [2024-12-05 12:14:09.398405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.419 qpair failed and we were unable to recover it. 00:30:35.419 [2024-12-05 12:14:09.398521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.419 [2024-12-05 12:14:09.398550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.419 qpair failed and we were unable to recover it. 00:30:35.419 [2024-12-05 12:14:09.398802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.419 [2024-12-05 12:14:09.398834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.419 qpair failed and we were unable to recover it. 00:30:35.419 [2024-12-05 12:14:09.398951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.419 [2024-12-05 12:14:09.398983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.419 qpair failed and we were unable to recover it. 
00:30:35.419 [2024-12-05 12:14:09.399164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.419 [2024-12-05 12:14:09.399196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.419 qpair failed and we were unable to recover it. 00:30:35.419 [2024-12-05 12:14:09.399375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.419 [2024-12-05 12:14:09.399408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.420 qpair failed and we were unable to recover it. 00:30:35.420 [2024-12-05 12:14:09.399601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.420 [2024-12-05 12:14:09.399632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.420 qpair failed and we were unable to recover it. 00:30:35.420 [2024-12-05 12:14:09.399751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.420 [2024-12-05 12:14:09.399784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.420 qpair failed and we were unable to recover it. 00:30:35.420 [2024-12-05 12:14:09.399967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.420 [2024-12-05 12:14:09.399997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.420 qpair failed and we were unable to recover it. 
00:30:35.420 [2024-12-05 12:14:09.400103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.420 [2024-12-05 12:14:09.400132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.420 qpair failed and we were unable to recover it. 00:30:35.420 [2024-12-05 12:14:09.400321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.420 [2024-12-05 12:14:09.400352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.420 qpair failed and we were unable to recover it. 00:30:35.420 [2024-12-05 12:14:09.400533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.420 [2024-12-05 12:14:09.400565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.420 qpair failed and we were unable to recover it. 00:30:35.420 [2024-12-05 12:14:09.400758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.420 [2024-12-05 12:14:09.400788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.420 qpair failed and we were unable to recover it. 00:30:35.420 [2024-12-05 12:14:09.400994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.420 [2024-12-05 12:14:09.401025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.420 qpair failed and we were unable to recover it. 
00:30:35.420 [2024-12-05 12:14:09.401217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.420 [2024-12-05 12:14:09.401248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.420 qpair failed and we were unable to recover it. 00:30:35.420 [2024-12-05 12:14:09.401428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.420 [2024-12-05 12:14:09.401461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.420 qpair failed and we were unable to recover it. 00:30:35.420 [2024-12-05 12:14:09.401594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.420 [2024-12-05 12:14:09.401627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.420 qpair failed and we were unable to recover it. 00:30:35.420 [2024-12-05 12:14:09.401756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.420 [2024-12-05 12:14:09.401788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.420 qpair failed and we were unable to recover it. 00:30:35.420 [2024-12-05 12:14:09.401984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.420 [2024-12-05 12:14:09.402015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.420 qpair failed and we were unable to recover it. 
00:30:35.420 [2024-12-05 12:14:09.402275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.420 [2024-12-05 12:14:09.402306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.420 qpair failed and we were unable to recover it. 00:30:35.420 [2024-12-05 12:14:09.402425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.420 [2024-12-05 12:14:09.402458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.420 qpair failed and we were unable to recover it. 00:30:35.420 [2024-12-05 12:14:09.402671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.420 [2024-12-05 12:14:09.402702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.420 qpair failed and we were unable to recover it. 00:30:35.420 [2024-12-05 12:14:09.402807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.420 [2024-12-05 12:14:09.402837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.420 qpair failed and we were unable to recover it. 00:30:35.420 [2024-12-05 12:14:09.402969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.420 [2024-12-05 12:14:09.403009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.420 qpair failed and we were unable to recover it. 
00:30:35.420 [2024-12-05 12:14:09.403128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.420 [2024-12-05 12:14:09.403157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.420 qpair failed and we were unable to recover it. 00:30:35.420 [2024-12-05 12:14:09.403409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.420 [2024-12-05 12:14:09.403441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.420 qpair failed and we were unable to recover it. 00:30:35.420 [2024-12-05 12:14:09.403643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.420 [2024-12-05 12:14:09.403674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.420 qpair failed and we were unable to recover it. 00:30:35.420 [2024-12-05 12:14:09.403801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.420 [2024-12-05 12:14:09.403832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.420 qpair failed and we were unable to recover it. 00:30:35.420 [2024-12-05 12:14:09.404093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.420 [2024-12-05 12:14:09.404124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.420 qpair failed and we were unable to recover it. 
00:30:35.420 [2024-12-05 12:14:09.404236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.420 [2024-12-05 12:14:09.404266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.420 qpair failed and we were unable to recover it. 00:30:35.420 [2024-12-05 12:14:09.404459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.420 [2024-12-05 12:14:09.404492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.420 qpair failed and we were unable to recover it. 00:30:35.420 [2024-12-05 12:14:09.404671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.420 [2024-12-05 12:14:09.404700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.420 qpair failed and we were unable to recover it. 00:30:35.420 [2024-12-05 12:14:09.404886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.420 [2024-12-05 12:14:09.404916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.420 qpair failed and we were unable to recover it. 00:30:35.420 [2024-12-05 12:14:09.405030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.420 [2024-12-05 12:14:09.405060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.420 qpair failed and we were unable to recover it. 
00:30:35.420 [2024-12-05 12:14:09.405314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.420 [2024-12-05 12:14:09.405343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.420 qpair failed and we were unable to recover it. 00:30:35.420 [2024-12-05 12:14:09.405551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.420 [2024-12-05 12:14:09.405583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.420 qpair failed and we were unable to recover it. 00:30:35.420 [2024-12-05 12:14:09.405759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.420 [2024-12-05 12:14:09.405790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.420 qpair failed and we were unable to recover it. 00:30:35.420 [2024-12-05 12:14:09.405910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.420 [2024-12-05 12:14:09.405939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.420 qpair failed and we were unable to recover it. 00:30:35.420 [2024-12-05 12:14:09.406152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.420 [2024-12-05 12:14:09.406181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.420 qpair failed and we were unable to recover it. 
00:30:35.420 [2024-12-05 12:14:09.406353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.420 [2024-12-05 12:14:09.406392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.420 qpair failed and we were unable to recover it. 00:30:35.420 [2024-12-05 12:14:09.406518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.420 [2024-12-05 12:14:09.406549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.420 qpair failed and we were unable to recover it. 00:30:35.420 [2024-12-05 12:14:09.406760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.420 [2024-12-05 12:14:09.406790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.420 qpair failed and we were unable to recover it. 00:30:35.420 [2024-12-05 12:14:09.407032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.420 [2024-12-05 12:14:09.407061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.420 qpair failed and we were unable to recover it. 00:30:35.420 [2024-12-05 12:14:09.407307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.420 [2024-12-05 12:14:09.407336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.421 qpair failed and we were unable to recover it. 
00:30:35.421 [2024-12-05 12:14:09.407522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.421 [2024-12-05 12:14:09.407552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.421 qpair failed and we were unable to recover it. 00:30:35.421 [2024-12-05 12:14:09.407851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.421 [2024-12-05 12:14:09.407881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.421 qpair failed and we were unable to recover it. 00:30:35.421 [2024-12-05 12:14:09.408019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.421 [2024-12-05 12:14:09.408047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.421 qpair failed and we were unable to recover it. 00:30:35.421 [2024-12-05 12:14:09.408267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.421 [2024-12-05 12:14:09.408298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.421 qpair failed and we were unable to recover it. 00:30:35.421 [2024-12-05 12:14:09.408499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.421 [2024-12-05 12:14:09.408530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.421 qpair failed and we were unable to recover it. 
00:30:35.421 [2024-12-05 12:14:09.408653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.421 [2024-12-05 12:14:09.408682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.421 qpair failed and we were unable to recover it. 00:30:35.421 [2024-12-05 12:14:09.408809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.421 [2024-12-05 12:14:09.408840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.421 qpair failed and we were unable to recover it. 00:30:35.421 [2024-12-05 12:14:09.408940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.421 [2024-12-05 12:14:09.408971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.421 qpair failed and we were unable to recover it. 00:30:35.421 [2024-12-05 12:14:09.409187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.421 [2024-12-05 12:14:09.409217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.421 qpair failed and we were unable to recover it. 00:30:35.421 [2024-12-05 12:14:09.409328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.421 [2024-12-05 12:14:09.409359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.421 qpair failed and we were unable to recover it. 
00:30:35.421 [2024-12-05 12:14:09.409582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.421 [2024-12-05 12:14:09.409613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.421 qpair failed and we were unable to recover it. 00:30:35.421 [2024-12-05 12:14:09.409871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.421 [2024-12-05 12:14:09.409901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.421 qpair failed and we were unable to recover it. 00:30:35.421 [2024-12-05 12:14:09.410074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.421 [2024-12-05 12:14:09.410103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.421 qpair failed and we were unable to recover it. 00:30:35.421 [2024-12-05 12:14:09.410272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.421 [2024-12-05 12:14:09.410302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.421 qpair failed and we were unable to recover it. 00:30:35.421 [2024-12-05 12:14:09.410490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.421 [2024-12-05 12:14:09.410523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.421 qpair failed and we were unable to recover it. 
00:30:35.421 [2024-12-05 12:14:09.410717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.421 [2024-12-05 12:14:09.410748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.421 qpair failed and we were unable to recover it. 00:30:35.421 [2024-12-05 12:14:09.411006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.421 [2024-12-05 12:14:09.411037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.421 qpair failed and we were unable to recover it. 00:30:35.421 [2024-12-05 12:14:09.411142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.421 [2024-12-05 12:14:09.411174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.421 qpair failed and we were unable to recover it. 00:30:35.421 [2024-12-05 12:14:09.411352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.421 [2024-12-05 12:14:09.411390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.421 qpair failed and we were unable to recover it. 00:30:35.421 [2024-12-05 12:14:09.411524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.421 [2024-12-05 12:14:09.411562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.421 qpair failed and we were unable to recover it. 
00:30:35.421 [2024-12-05 12:14:09.411837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.421 [2024-12-05 12:14:09.411870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.421 qpair failed and we were unable to recover it. 00:30:35.421 [2024-12-05 12:14:09.411979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.421 [2024-12-05 12:14:09.412010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.421 qpair failed and we were unable to recover it. 00:30:35.421 [2024-12-05 12:14:09.412262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.421 [2024-12-05 12:14:09.412292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.421 qpair failed and we were unable to recover it. 00:30:35.421 [2024-12-05 12:14:09.412407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.421 [2024-12-05 12:14:09.412438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.421 qpair failed and we were unable to recover it. 00:30:35.421 [2024-12-05 12:14:09.412702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.421 [2024-12-05 12:14:09.412734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.421 qpair failed and we were unable to recover it. 
00:30:35.421 [2024-12-05 12:14:09.412924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.421 [2024-12-05 12:14:09.412954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.421 qpair failed and we were unable to recover it. 00:30:35.421 [2024-12-05 12:14:09.413146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.421 [2024-12-05 12:14:09.413176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.421 qpair failed and we were unable to recover it. 00:30:35.421 [2024-12-05 12:14:09.413346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.421 [2024-12-05 12:14:09.413403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.421 qpair failed and we were unable to recover it. 00:30:35.421 [2024-12-05 12:14:09.413618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.421 [2024-12-05 12:14:09.413650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.421 qpair failed and we were unable to recover it. 00:30:35.421 [2024-12-05 12:14:09.413855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.421 [2024-12-05 12:14:09.413886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.421 qpair failed and we were unable to recover it. 
00:30:35.421 [2024-12-05 12:14:09.414097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.421 [2024-12-05 12:14:09.414128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.421 qpair failed and we were unable to recover it. 00:30:35.421 [2024-12-05 12:14:09.414248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.421 [2024-12-05 12:14:09.414281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.421 qpair failed and we were unable to recover it. 00:30:35.421 [2024-12-05 12:14:09.414407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.421 [2024-12-05 12:14:09.414438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.421 qpair failed and we were unable to recover it. 00:30:35.421 [2024-12-05 12:14:09.414665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.421 [2024-12-05 12:14:09.414697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.421 qpair failed and we were unable to recover it. 00:30:35.421 [2024-12-05 12:14:09.414873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.421 [2024-12-05 12:14:09.414904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.421 qpair failed and we were unable to recover it. 
00:30:35.421 [2024-12-05 12:14:09.415098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.421 [2024-12-05 12:14:09.415127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.421 qpair failed and we were unable to recover it. 00:30:35.421 [2024-12-05 12:14:09.415339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.421 [2024-12-05 12:14:09.415377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.421 qpair failed and we were unable to recover it. 00:30:35.421 [2024-12-05 12:14:09.415505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.421 [2024-12-05 12:14:09.415536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.422 qpair failed and we were unable to recover it. 00:30:35.422 [2024-12-05 12:14:09.415658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.422 [2024-12-05 12:14:09.415691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.422 qpair failed and we were unable to recover it. 00:30:35.422 [2024-12-05 12:14:09.415871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.422 [2024-12-05 12:14:09.415902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.422 qpair failed and we were unable to recover it. 
00:30:35.422 [2024-12-05 12:14:09.416020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.422 [2024-12-05 12:14:09.416052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.422 qpair failed and we were unable to recover it. 00:30:35.422 [2024-12-05 12:14:09.416238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.422 [2024-12-05 12:14:09.416269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.422 qpair failed and we were unable to recover it. 00:30:35.422 [2024-12-05 12:14:09.416442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.422 [2024-12-05 12:14:09.416475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.422 qpair failed and we were unable to recover it. 00:30:35.422 [2024-12-05 12:14:09.416657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.422 [2024-12-05 12:14:09.416689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.422 qpair failed and we were unable to recover it. 00:30:35.422 [2024-12-05 12:14:09.416904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.422 [2024-12-05 12:14:09.416935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.422 qpair failed and we were unable to recover it. 
00:30:35.422 [2024-12-05 12:14:09.417199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.422 [2024-12-05 12:14:09.417230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.422 qpair failed and we were unable to recover it. 00:30:35.422 [2024-12-05 12:14:09.417422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.422 [2024-12-05 12:14:09.417455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.422 qpair failed and we were unable to recover it. 00:30:35.422 [2024-12-05 12:14:09.417575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.422 [2024-12-05 12:14:09.417606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.422 qpair failed and we were unable to recover it. 00:30:35.422 [2024-12-05 12:14:09.417786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.422 [2024-12-05 12:14:09.417817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.422 qpair failed and we were unable to recover it. 00:30:35.422 [2024-12-05 12:14:09.418009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.422 [2024-12-05 12:14:09.418041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.422 qpair failed and we were unable to recover it. 
00:30:35.422 [2024-12-05 12:14:09.418279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.422 [2024-12-05 12:14:09.418310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.422 qpair failed and we were unable to recover it. 00:30:35.422 [2024-12-05 12:14:09.418424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.422 [2024-12-05 12:14:09.418457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.422 qpair failed and we were unable to recover it. 00:30:35.422 [2024-12-05 12:14:09.418673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.422 [2024-12-05 12:14:09.418705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.422 qpair failed and we were unable to recover it. 00:30:35.422 [2024-12-05 12:14:09.418964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.422 [2024-12-05 12:14:09.418995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.422 qpair failed and we were unable to recover it. 00:30:35.422 [2024-12-05 12:14:09.419209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.422 [2024-12-05 12:14:09.419240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.422 qpair failed and we were unable to recover it. 
00:30:35.422 [2024-12-05 12:14:09.419452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.422 [2024-12-05 12:14:09.419483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.422 qpair failed and we were unable to recover it. 00:30:35.422 [2024-12-05 12:14:09.419686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.422 [2024-12-05 12:14:09.419717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.422 qpair failed and we were unable to recover it. 00:30:35.422 [2024-12-05 12:14:09.419902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.422 [2024-12-05 12:14:09.419932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.422 qpair failed and we were unable to recover it. 00:30:35.422 [2024-12-05 12:14:09.420208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.422 [2024-12-05 12:14:09.420239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.422 qpair failed and we were unable to recover it. 00:30:35.422 [2024-12-05 12:14:09.420422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.422 [2024-12-05 12:14:09.420462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.422 qpair failed and we were unable to recover it. 
00:30:35.422 [2024-12-05 12:14:09.420650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.422 [2024-12-05 12:14:09.420682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.422 qpair failed and we were unable to recover it. 00:30:35.422 [2024-12-05 12:14:09.420867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.422 [2024-12-05 12:14:09.420898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.422 qpair failed and we were unable to recover it. 00:30:35.422 [2024-12-05 12:14:09.421071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.422 [2024-12-05 12:14:09.421103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.422 qpair failed and we were unable to recover it. 00:30:35.422 [2024-12-05 12:14:09.421285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.422 [2024-12-05 12:14:09.421316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.422 qpair failed and we were unable to recover it. 00:30:35.422 [2024-12-05 12:14:09.421583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.422 [2024-12-05 12:14:09.421615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.422 qpair failed and we were unable to recover it. 
00:30:35.422 [2024-12-05 12:14:09.421813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.422 [2024-12-05 12:14:09.421844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.422 qpair failed and we were unable to recover it. 00:30:35.422 [2024-12-05 12:14:09.422110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.422 [2024-12-05 12:14:09.422142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.422 qpair failed and we were unable to recover it. 00:30:35.422 [2024-12-05 12:14:09.422439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.422 [2024-12-05 12:14:09.422471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.422 qpair failed and we were unable to recover it. 00:30:35.422 [2024-12-05 12:14:09.422676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.422 [2024-12-05 12:14:09.422707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.422 qpair failed and we were unable to recover it. 00:30:35.422 [2024-12-05 12:14:09.422887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.422 [2024-12-05 12:14:09.422919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.422 qpair failed and we were unable to recover it. 
00:30:35.422 [2024-12-05 12:14:09.423023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.422 [2024-12-05 12:14:09.423056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.422 qpair failed and we were unable to recover it. 00:30:35.422 [2024-12-05 12:14:09.423242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.422 [2024-12-05 12:14:09.423273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.422 qpair failed and we were unable to recover it. 00:30:35.422 [2024-12-05 12:14:09.423411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.422 [2024-12-05 12:14:09.423442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.422 qpair failed and we were unable to recover it. 00:30:35.422 [2024-12-05 12:14:09.423635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.422 [2024-12-05 12:14:09.423667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.422 qpair failed and we were unable to recover it. 00:30:35.422 [2024-12-05 12:14:09.423918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.422 [2024-12-05 12:14:09.423949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.422 qpair failed and we were unable to recover it. 
00:30:35.422 [2024-12-05 12:14:09.424066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.422 [2024-12-05 12:14:09.424098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.422 qpair failed and we were unable to recover it. 00:30:35.423 [2024-12-05 12:14:09.424266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.423 [2024-12-05 12:14:09.424297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.423 qpair failed and we were unable to recover it. 00:30:35.423 [2024-12-05 12:14:09.424482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.423 [2024-12-05 12:14:09.424514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.423 qpair failed and we were unable to recover it. 00:30:35.423 [2024-12-05 12:14:09.424687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.423 [2024-12-05 12:14:09.424718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.423 qpair failed and we were unable to recover it. 00:30:35.423 [2024-12-05 12:14:09.424832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.423 [2024-12-05 12:14:09.424862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.423 qpair failed and we were unable to recover it. 
00:30:35.423 [2024-12-05 12:14:09.425176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.423 [2024-12-05 12:14:09.425209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.423 qpair failed and we were unable to recover it. 00:30:35.423 [2024-12-05 12:14:09.425387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.423 [2024-12-05 12:14:09.425420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.423 qpair failed and we were unable to recover it. 00:30:35.423 [2024-12-05 12:14:09.425554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.423 [2024-12-05 12:14:09.425585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.423 qpair failed and we were unable to recover it. 00:30:35.423 [2024-12-05 12:14:09.425861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.423 [2024-12-05 12:14:09.425892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.423 qpair failed and we were unable to recover it. 00:30:35.423 [2024-12-05 12:14:09.426196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.423 [2024-12-05 12:14:09.426227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.423 qpair failed and we were unable to recover it. 
00:30:35.423 [2024-12-05 12:14:09.426357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.423 [2024-12-05 12:14:09.426396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.423 qpair failed and we were unable to recover it. 00:30:35.423 [2024-12-05 12:14:09.426536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.423 [2024-12-05 12:14:09.426567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.423 qpair failed and we were unable to recover it. 00:30:35.423 [2024-12-05 12:14:09.426757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.423 [2024-12-05 12:14:09.426790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.423 qpair failed and we were unable to recover it. 00:30:35.423 [2024-12-05 12:14:09.426975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.423 [2024-12-05 12:14:09.427006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.423 qpair failed and we were unable to recover it. 00:30:35.423 [2024-12-05 12:14:09.427219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.423 [2024-12-05 12:14:09.427249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.423 qpair failed and we were unable to recover it. 
00:30:35.423 [2024-12-05 12:14:09.427487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.423 [2024-12-05 12:14:09.427519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.423 qpair failed and we were unable to recover it. 00:30:35.423 [2024-12-05 12:14:09.427704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.423 [2024-12-05 12:14:09.427736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.423 qpair failed and we were unable to recover it. 00:30:35.423 [2024-12-05 12:14:09.427909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.423 [2024-12-05 12:14:09.427941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.423 qpair failed and we were unable to recover it. 00:30:35.423 [2024-12-05 12:14:09.428065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.423 [2024-12-05 12:14:09.428096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.423 qpair failed and we were unable to recover it. 00:30:35.423 [2024-12-05 12:14:09.428308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.423 [2024-12-05 12:14:09.428339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.423 qpair failed and we were unable to recover it. 
00:30:35.423 [2024-12-05 12:14:09.428476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.423 [2024-12-05 12:14:09.428509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.423 qpair failed and we were unable to recover it. 00:30:35.423 [2024-12-05 12:14:09.428617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.423 [2024-12-05 12:14:09.428647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.423 qpair failed and we were unable to recover it. 00:30:35.423 [2024-12-05 12:14:09.428840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.423 [2024-12-05 12:14:09.428871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.423 qpair failed and we were unable to recover it. 00:30:35.423 [2024-12-05 12:14:09.429120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.423 [2024-12-05 12:14:09.429151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.423 qpair failed and we were unable to recover it. 00:30:35.423 [2024-12-05 12:14:09.429411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.423 [2024-12-05 12:14:09.429451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.423 qpair failed and we were unable to recover it. 
00:30:35.423 [2024-12-05 12:14:09.429566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.423 [2024-12-05 12:14:09.429597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.423 qpair failed and we were unable to recover it. 00:30:35.423 [2024-12-05 12:14:09.429778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.423 [2024-12-05 12:14:09.429809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.423 qpair failed and we were unable to recover it. 00:30:35.423 [2024-12-05 12:14:09.429989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.423 [2024-12-05 12:14:09.430021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.423 qpair failed and we were unable to recover it. 00:30:35.423 [2024-12-05 12:14:09.430192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.423 [2024-12-05 12:14:09.430223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.423 qpair failed and we were unable to recover it. 00:30:35.423 [2024-12-05 12:14:09.430409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.423 [2024-12-05 12:14:09.430439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.423 qpair failed and we were unable to recover it. 
00:30:35.423 [2024-12-05 12:14:09.430654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.423 [2024-12-05 12:14:09.430685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.423 qpair failed and we were unable to recover it. 00:30:35.423 [2024-12-05 12:14:09.430914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.423 [2024-12-05 12:14:09.430944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.423 qpair failed and we were unable to recover it. 00:30:35.423 [2024-12-05 12:14:09.431049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.423 [2024-12-05 12:14:09.431079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.423 qpair failed and we were unable to recover it. 00:30:35.423 [2024-12-05 12:14:09.431266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.423 [2024-12-05 12:14:09.431298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.423 qpair failed and we were unable to recover it. 00:30:35.423 [2024-12-05 12:14:09.431474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.423 [2024-12-05 12:14:09.431506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.423 qpair failed and we were unable to recover it. 
00:30:35.423 [2024-12-05 12:14:09.431699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.423 [2024-12-05 12:14:09.431731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.424 qpair failed and we were unable to recover it. 00:30:35.424 [2024-12-05 12:14:09.431977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.424 [2024-12-05 12:14:09.432009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.424 qpair failed and we were unable to recover it. 00:30:35.424 [2024-12-05 12:14:09.432258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.424 [2024-12-05 12:14:09.432289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.424 qpair failed and we were unable to recover it. 00:30:35.424 [2024-12-05 12:14:09.432511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.424 [2024-12-05 12:14:09.432543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.424 qpair failed and we were unable to recover it. 00:30:35.424 [2024-12-05 12:14:09.432737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.424 [2024-12-05 12:14:09.432769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.424 qpair failed and we were unable to recover it. 
00:30:35.424 [2024-12-05 12:14:09.432988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.424 [2024-12-05 12:14:09.433019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.424 qpair failed and we were unable to recover it. 00:30:35.424 [2024-12-05 12:14:09.433199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.424 [2024-12-05 12:14:09.433230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.424 qpair failed and we were unable to recover it. 00:30:35.424 [2024-12-05 12:14:09.433433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.424 [2024-12-05 12:14:09.433466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.424 qpair failed and we were unable to recover it. 00:30:35.424 [2024-12-05 12:14:09.433638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.424 [2024-12-05 12:14:09.433670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.424 qpair failed and we were unable to recover it. 00:30:35.424 [2024-12-05 12:14:09.433906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.424 [2024-12-05 12:14:09.433938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.424 qpair failed and we were unable to recover it. 
00:30:35.424 [2024-12-05 12:14:09.434109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.424 [2024-12-05 12:14:09.434140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.424 qpair failed and we were unable to recover it. 00:30:35.424 [2024-12-05 12:14:09.434254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.424 [2024-12-05 12:14:09.434283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.424 qpair failed and we were unable to recover it. 00:30:35.424 [2024-12-05 12:14:09.434475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.424 [2024-12-05 12:14:09.434508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.424 qpair failed and we were unable to recover it. 00:30:35.424 [2024-12-05 12:14:09.434624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.424 [2024-12-05 12:14:09.434655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.424 qpair failed and we were unable to recover it. 00:30:35.424 [2024-12-05 12:14:09.434894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.424 [2024-12-05 12:14:09.434926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.424 qpair failed and we were unable to recover it. 
00:30:35.424 [2024-12-05 12:14:09.435098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.424 [2024-12-05 12:14:09.435129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.424 qpair failed and we were unable to recover it. 00:30:35.424 [2024-12-05 12:14:09.435257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.424 [2024-12-05 12:14:09.435288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.424 qpair failed and we were unable to recover it. 00:30:35.424 [2024-12-05 12:14:09.435552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.424 [2024-12-05 12:14:09.435584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.424 qpair failed and we were unable to recover it. 00:30:35.424 [2024-12-05 12:14:09.435712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.424 [2024-12-05 12:14:09.435743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.424 qpair failed and we were unable to recover it. 00:30:35.424 [2024-12-05 12:14:09.435873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.424 [2024-12-05 12:14:09.435903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.424 qpair failed and we were unable to recover it. 
00:30:35.424 [2024-12-05 12:14:09.436091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.424 [2024-12-05 12:14:09.436122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.424 qpair failed and we were unable to recover it. 00:30:35.424 [2024-12-05 12:14:09.436322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.424 [2024-12-05 12:14:09.436353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.424 qpair failed and we were unable to recover it. 00:30:35.424 [2024-12-05 12:14:09.436571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.424 [2024-12-05 12:14:09.436604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.424 qpair failed and we were unable to recover it. 00:30:35.424 [2024-12-05 12:14:09.436774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.424 [2024-12-05 12:14:09.436804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.424 qpair failed and we were unable to recover it. 00:30:35.424 [2024-12-05 12:14:09.437010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.424 [2024-12-05 12:14:09.437042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.424 qpair failed and we were unable to recover it. 
00:30:35.424 [2024-12-05 12:14:09.437156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.424 [2024-12-05 12:14:09.437188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.424 qpair failed and we were unable to recover it. 00:30:35.424 [2024-12-05 12:14:09.437460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.424 [2024-12-05 12:14:09.437492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.424 qpair failed and we were unable to recover it. 00:30:35.424 [2024-12-05 12:14:09.437682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.424 [2024-12-05 12:14:09.437714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.424 qpair failed and we were unable to recover it. 00:30:35.424 [2024-12-05 12:14:09.437951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.424 [2024-12-05 12:14:09.437981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.424 qpair failed and we were unable to recover it. 00:30:35.424 [2024-12-05 12:14:09.438155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.424 [2024-12-05 12:14:09.438194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.424 qpair failed and we were unable to recover it. 
00:30:35.424 [2024-12-05 12:14:09.438380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.424 [2024-12-05 12:14:09.438413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.424 qpair failed and we were unable to recover it. 00:30:35.424 [2024-12-05 12:14:09.438540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.424 [2024-12-05 12:14:09.438571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.424 qpair failed and we were unable to recover it. 00:30:35.424 [2024-12-05 12:14:09.438753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.424 [2024-12-05 12:14:09.438785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.424 qpair failed and we were unable to recover it. 00:30:35.424 [2024-12-05 12:14:09.438972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.424 [2024-12-05 12:14:09.439003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.424 qpair failed and we were unable to recover it. 00:30:35.424 [2024-12-05 12:14:09.439190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.424 [2024-12-05 12:14:09.439222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.424 qpair failed and we were unable to recover it. 
00:30:35.424 [2024-12-05 12:14:09.439420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.424 [2024-12-05 12:14:09.439453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.424 qpair failed and we were unable to recover it. 00:30:35.424 [2024-12-05 12:14:09.439563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.424 [2024-12-05 12:14:09.439593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.424 qpair failed and we were unable to recover it. 00:30:35.424 [2024-12-05 12:14:09.439782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.424 [2024-12-05 12:14:09.439814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.424 qpair failed and we were unable to recover it. 00:30:35.424 [2024-12-05 12:14:09.439993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.424 [2024-12-05 12:14:09.440024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.425 qpair failed and we were unable to recover it. 00:30:35.425 [2024-12-05 12:14:09.440155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.425 [2024-12-05 12:14:09.440186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.425 qpair failed and we were unable to recover it. 
00:30:35.425 [2024-12-05 12:14:09.440364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.425 [2024-12-05 12:14:09.440405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.425 qpair failed and we were unable to recover it. 00:30:35.425 [2024-12-05 12:14:09.440590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.425 [2024-12-05 12:14:09.440622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.425 qpair failed and we were unable to recover it. 00:30:35.425 [2024-12-05 12:14:09.440836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.425 [2024-12-05 12:14:09.440867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.425 qpair failed and we were unable to recover it. 00:30:35.425 [2024-12-05 12:14:09.441001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.425 [2024-12-05 12:14:09.441032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.425 qpair failed and we were unable to recover it. 00:30:35.425 [2024-12-05 12:14:09.441202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.425 [2024-12-05 12:14:09.441233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.425 qpair failed and we were unable to recover it. 
00:30:35.425 [2024-12-05 12:14:09.441457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.425 [2024-12-05 12:14:09.441490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.425 qpair failed and we were unable to recover it. 00:30:35.425 [2024-12-05 12:14:09.441665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.425 [2024-12-05 12:14:09.441695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.425 qpair failed and we were unable to recover it. 00:30:35.425 [2024-12-05 12:14:09.441888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.425 [2024-12-05 12:14:09.441920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.425 qpair failed and we were unable to recover it. 00:30:35.425 [2024-12-05 12:14:09.442116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.425 [2024-12-05 12:14:09.442147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.425 qpair failed and we were unable to recover it. 00:30:35.425 [2024-12-05 12:14:09.442266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.425 [2024-12-05 12:14:09.442296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.425 qpair failed and we were unable to recover it. 
00:30:35.425 [2024-12-05 12:14:09.442403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.425 [2024-12-05 12:14:09.442434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.425 qpair failed and we were unable to recover it. 00:30:35.425 [2024-12-05 12:14:09.442566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.425 [2024-12-05 12:14:09.442598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.425 qpair failed and we were unable to recover it. 00:30:35.425 [2024-12-05 12:14:09.442839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.425 [2024-12-05 12:14:09.442870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.425 qpair failed and we were unable to recover it. 00:30:35.425 [2024-12-05 12:14:09.443009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.425 [2024-12-05 12:14:09.443040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.425 qpair failed and we were unable to recover it. 00:30:35.425 [2024-12-05 12:14:09.443298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.425 [2024-12-05 12:14:09.443330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.425 qpair failed and we were unable to recover it. 
00:30:35.425 [2024-12-05 12:14:09.443531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.425 [2024-12-05 12:14:09.443564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.425 qpair failed and we were unable to recover it. 00:30:35.425 [2024-12-05 12:14:09.443697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.425 [2024-12-05 12:14:09.443726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.425 qpair failed and we were unable to recover it. 00:30:35.425 [2024-12-05 12:14:09.443911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.425 [2024-12-05 12:14:09.443943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.425 qpair failed and we were unable to recover it. 00:30:35.425 [2024-12-05 12:14:09.444118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.425 [2024-12-05 12:14:09.444151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.425 qpair failed and we were unable to recover it. 00:30:35.425 [2024-12-05 12:14:09.444389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.425 [2024-12-05 12:14:09.444421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.425 qpair failed and we were unable to recover it. 
00:30:35.425 [2024-12-05 12:14:09.444547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.425 [2024-12-05 12:14:09.444578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.425 qpair failed and we were unable to recover it. 00:30:35.425 [2024-12-05 12:14:09.444697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.425 [2024-12-05 12:14:09.444728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.425 qpair failed and we were unable to recover it. 00:30:35.425 [2024-12-05 12:14:09.444916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.425 [2024-12-05 12:14:09.444947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.425 qpair failed and we were unable to recover it. 00:30:35.425 [2024-12-05 12:14:09.445076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.425 [2024-12-05 12:14:09.445107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.425 qpair failed and we were unable to recover it. 00:30:35.425 [2024-12-05 12:14:09.445210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.425 [2024-12-05 12:14:09.445241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.425 qpair failed and we were unable to recover it. 
00:30:35.425 [2024-12-05 12:14:09.445365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.425 [2024-12-05 12:14:09.445425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.425 qpair failed and we were unable to recover it. 00:30:35.425 [2024-12-05 12:14:09.445614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.425 [2024-12-05 12:14:09.445647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.425 qpair failed and we were unable to recover it. 00:30:35.425 [2024-12-05 12:14:09.445833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.425 [2024-12-05 12:14:09.445866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.425 qpair failed and we were unable to recover it. 00:30:35.425 [2024-12-05 12:14:09.445983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.425 [2024-12-05 12:14:09.446016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.425 qpair failed and we were unable to recover it. 00:30:35.425 [2024-12-05 12:14:09.446190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.425 [2024-12-05 12:14:09.446228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.425 qpair failed and we were unable to recover it. 
00:30:35.425 [2024-12-05 12:14:09.446356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.425 [2024-12-05 12:14:09.446399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.425 qpair failed and we were unable to recover it. 00:30:35.425 [2024-12-05 12:14:09.446686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.425 [2024-12-05 12:14:09.446717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.425 qpair failed and we were unable to recover it. 00:30:35.425 [2024-12-05 12:14:09.446834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.425 [2024-12-05 12:14:09.446864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.425 qpair failed and we were unable to recover it. 00:30:35.425 [2024-12-05 12:14:09.447125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.425 [2024-12-05 12:14:09.447156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.425 qpair failed and we were unable to recover it. 00:30:35.425 [2024-12-05 12:14:09.447263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.425 [2024-12-05 12:14:09.447292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.425 qpair failed and we were unable to recover it. 
00:30:35.425 [2024-12-05 12:14:09.447610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.425 [2024-12-05 12:14:09.447644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.425 qpair failed and we were unable to recover it. 00:30:35.425 [2024-12-05 12:14:09.447851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.425 [2024-12-05 12:14:09.447882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.425 qpair failed and we were unable to recover it. 00:30:35.426 [2024-12-05 12:14:09.448129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.426 [2024-12-05 12:14:09.448162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.426 qpair failed and we were unable to recover it. 00:30:35.426 [2024-12-05 12:14:09.448351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.426 [2024-12-05 12:14:09.448391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.426 qpair failed and we were unable to recover it. 00:30:35.426 [2024-12-05 12:14:09.448514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.426 [2024-12-05 12:14:09.448545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.426 qpair failed and we were unable to recover it. 
00:30:35.426 [2024-12-05 12:14:09.448742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.426 [2024-12-05 12:14:09.448774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.426 qpair failed and we were unable to recover it. 00:30:35.426 [2024-12-05 12:14:09.448911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.426 [2024-12-05 12:14:09.448942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.426 qpair failed and we were unable to recover it. 00:30:35.426 [2024-12-05 12:14:09.449187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.426 [2024-12-05 12:14:09.449219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.426 qpair failed and we were unable to recover it. 00:30:35.426 [2024-12-05 12:14:09.449347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.426 [2024-12-05 12:14:09.449383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.426 qpair failed and we were unable to recover it. 00:30:35.426 [2024-12-05 12:14:09.449504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.426 [2024-12-05 12:14:09.449533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.426 qpair failed and we were unable to recover it. 
00:30:35.426 [2024-12-05 12:14:09.449690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.426 [2024-12-05 12:14:09.449722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.426 qpair failed and we were unable to recover it. 00:30:35.426 [2024-12-05 12:14:09.449901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.426 [2024-12-05 12:14:09.449939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.426 qpair failed and we were unable to recover it. 00:30:35.426 [2024-12-05 12:14:09.450118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.426 [2024-12-05 12:14:09.450150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.426 qpair failed and we were unable to recover it. 00:30:35.426 [2024-12-05 12:14:09.450268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.426 [2024-12-05 12:14:09.450299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.426 qpair failed and we were unable to recover it. 00:30:35.426 [2024-12-05 12:14:09.450421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.426 [2024-12-05 12:14:09.450453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.426 qpair failed and we were unable to recover it. 
00:30:35.426 [2024-12-05 12:14:09.450579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.426 [2024-12-05 12:14:09.450610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.426 qpair failed and we were unable to recover it.
00:30:35.426 [2024-12-05 12:14:09.450843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.426 [2024-12-05 12:14:09.450874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.426 qpair failed and we were unable to recover it.
00:30:35.426 [2024-12-05 12:14:09.451060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.426 [2024-12-05 12:14:09.451090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.426 qpair failed and we were unable to recover it.
00:30:35.426 [2024-12-05 12:14:09.451265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.426 [2024-12-05 12:14:09.451296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.426 qpair failed and we were unable to recover it.
00:30:35.426 [2024-12-05 12:14:09.451482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.426 [2024-12-05 12:14:09.451515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.426 qpair failed and we were unable to recover it.
00:30:35.426 [2024-12-05 12:14:09.451651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.426 [2024-12-05 12:14:09.451683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.426 qpair failed and we were unable to recover it.
00:30:35.426 [2024-12-05 12:14:09.451928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.426 [2024-12-05 12:14:09.451961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.426 qpair failed and we were unable to recover it.
00:30:35.426 [2024-12-05 12:14:09.452133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.426 [2024-12-05 12:14:09.452164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.426 qpair failed and we were unable to recover it.
00:30:35.426 [2024-12-05 12:14:09.452273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.426 [2024-12-05 12:14:09.452303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.426 qpair failed and we were unable to recover it.
00:30:35.426 [2024-12-05 12:14:09.452432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.426 [2024-12-05 12:14:09.452465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.426 qpair failed and we were unable to recover it.
00:30:35.426 [2024-12-05 12:14:09.452640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.426 [2024-12-05 12:14:09.452671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.426 qpair failed and we were unable to recover it.
00:30:35.426 [2024-12-05 12:14:09.452883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.426 [2024-12-05 12:14:09.452914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.426 qpair failed and we were unable to recover it.
00:30:35.426 [2024-12-05 12:14:09.453084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.426 [2024-12-05 12:14:09.453114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.426 qpair failed and we were unable to recover it.
00:30:35.426 [2024-12-05 12:14:09.453221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.426 [2024-12-05 12:14:09.453251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.426 qpair failed and we were unable to recover it.
00:30:35.426 [2024-12-05 12:14:09.453361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.426 [2024-12-05 12:14:09.453420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.426 qpair failed and we were unable to recover it.
00:30:35.426 [2024-12-05 12:14:09.453558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.426 [2024-12-05 12:14:09.453589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.426 qpair failed and we were unable to recover it.
00:30:35.426 [2024-12-05 12:14:09.453766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.426 [2024-12-05 12:14:09.453797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.426 qpair failed and we were unable to recover it.
00:30:35.426 [2024-12-05 12:14:09.453986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.426 [2024-12-05 12:14:09.454018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.426 qpair failed and we were unable to recover it.
00:30:35.426 [2024-12-05 12:14:09.454209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.426 [2024-12-05 12:14:09.454241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.426 qpair failed and we were unable to recover it.
00:30:35.426 [2024-12-05 12:14:09.454437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.426 [2024-12-05 12:14:09.454476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.426 qpair failed and we were unable to recover it.
00:30:35.426 [2024-12-05 12:14:09.454600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.426 [2024-12-05 12:14:09.454631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.426 qpair failed and we were unable to recover it.
00:30:35.426 [2024-12-05 12:14:09.454733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.426 [2024-12-05 12:14:09.454762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.426 qpair failed and we were unable to recover it.
00:30:35.426 [2024-12-05 12:14:09.454877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.426 [2024-12-05 12:14:09.454908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.426 qpair failed and we were unable to recover it.
00:30:35.426 [2024-12-05 12:14:09.455030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.426 [2024-12-05 12:14:09.455060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.426 qpair failed and we were unable to recover it.
00:30:35.426 [2024-12-05 12:14:09.455168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.426 [2024-12-05 12:14:09.455198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.426 qpair failed and we were unable to recover it.
00:30:35.427 [2024-12-05 12:14:09.455444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.427 [2024-12-05 12:14:09.455477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.427 qpair failed and we were unable to recover it.
00:30:35.427 [2024-12-05 12:14:09.455717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.427 [2024-12-05 12:14:09.455749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.427 qpair failed and we were unable to recover it.
00:30:35.427 [2024-12-05 12:14:09.456013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.427 [2024-12-05 12:14:09.456045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.427 qpair failed and we were unable to recover it.
00:30:35.427 [2024-12-05 12:14:09.456239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.427 [2024-12-05 12:14:09.456270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.427 qpair failed and we were unable to recover it.
00:30:35.427 [2024-12-05 12:14:09.456395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.427 [2024-12-05 12:14:09.456426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.427 qpair failed and we were unable to recover it.
00:30:35.427 [2024-12-05 12:14:09.456601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.427 [2024-12-05 12:14:09.456633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.427 qpair failed and we were unable to recover it.
00:30:35.427 [2024-12-05 12:14:09.456762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.427 [2024-12-05 12:14:09.456793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.427 qpair failed and we were unable to recover it.
00:30:35.427 [2024-12-05 12:14:09.456981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.427 [2024-12-05 12:14:09.457012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.427 qpair failed and we were unable to recover it.
00:30:35.427 [2024-12-05 12:14:09.457198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.427 [2024-12-05 12:14:09.457229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.427 qpair failed and we were unable to recover it.
00:30:35.427 [2024-12-05 12:14:09.457350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.427 [2024-12-05 12:14:09.457387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.427 qpair failed and we were unable to recover it.
00:30:35.427 [2024-12-05 12:14:09.457510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.427 [2024-12-05 12:14:09.457541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.427 qpair failed and we were unable to recover it.
00:30:35.427 [2024-12-05 12:14:09.457727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.427 [2024-12-05 12:14:09.457759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.427 qpair failed and we were unable to recover it.
00:30:35.427 [2024-12-05 12:14:09.457889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.427 [2024-12-05 12:14:09.457922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.427 qpair failed and we were unable to recover it.
00:30:35.427 [2024-12-05 12:14:09.458096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.427 [2024-12-05 12:14:09.458126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.427 qpair failed and we were unable to recover it.
00:30:35.427 [2024-12-05 12:14:09.458351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.427 [2024-12-05 12:14:09.458391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.427 qpair failed and we were unable to recover it.
00:30:35.427 [2024-12-05 12:14:09.458590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.427 [2024-12-05 12:14:09.458624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.427 qpair failed and we were unable to recover it.
00:30:35.427 [2024-12-05 12:14:09.458742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.427 [2024-12-05 12:14:09.458773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.427 qpair failed and we were unable to recover it.
00:30:35.427 [2024-12-05 12:14:09.458968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.427 [2024-12-05 12:14:09.459000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.427 qpair failed and we were unable to recover it.
00:30:35.427 [2024-12-05 12:14:09.459193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.427 [2024-12-05 12:14:09.459224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.427 qpair failed and we were unable to recover it.
00:30:35.427 [2024-12-05 12:14:09.459511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.427 [2024-12-05 12:14:09.459544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.427 qpair failed and we were unable to recover it.
00:30:35.427 [2024-12-05 12:14:09.459763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.427 [2024-12-05 12:14:09.459795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.427 qpair failed and we were unable to recover it.
00:30:35.427 [2024-12-05 12:14:09.459985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.427 [2024-12-05 12:14:09.460057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.427 qpair failed and we were unable to recover it.
00:30:35.427 [2024-12-05 12:14:09.460347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.427 [2024-12-05 12:14:09.460401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.427 qpair failed and we were unable to recover it.
00:30:35.427 [2024-12-05 12:14:09.460607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.427 [2024-12-05 12:14:09.460640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.427 qpair failed and we were unable to recover it.
00:30:35.427 [2024-12-05 12:14:09.460771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.427 [2024-12-05 12:14:09.460804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.427 qpair failed and we were unable to recover it.
00:30:35.427 [2024-12-05 12:14:09.460942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.427 [2024-12-05 12:14:09.460974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.427 qpair failed and we were unable to recover it.
00:30:35.427 [2024-12-05 12:14:09.461151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.427 [2024-12-05 12:14:09.461185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.427 qpair failed and we were unable to recover it.
00:30:35.427 [2024-12-05 12:14:09.461394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.427 [2024-12-05 12:14:09.461431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.427 qpair failed and we were unable to recover it.
00:30:35.427 [2024-12-05 12:14:09.461618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.427 [2024-12-05 12:14:09.461650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.427 qpair failed and we were unable to recover it.
00:30:35.427 [2024-12-05 12:14:09.461888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.427 [2024-12-05 12:14:09.461919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.427 qpair failed and we were unable to recover it.
00:30:35.427 [2024-12-05 12:14:09.462159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.427 [2024-12-05 12:14:09.462190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.427 qpair failed and we were unable to recover it.
00:30:35.427 [2024-12-05 12:14:09.462323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.427 [2024-12-05 12:14:09.462354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.427 qpair failed and we were unable to recover it.
00:30:35.427 [2024-12-05 12:14:09.462480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.427 [2024-12-05 12:14:09.462513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.427 qpair failed and we were unable to recover it.
00:30:35.427 [2024-12-05 12:14:09.462791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.427 [2024-12-05 12:14:09.462823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.427 qpair failed and we were unable to recover it.
00:30:35.427 [2024-12-05 12:14:09.462948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.428 [2024-12-05 12:14:09.462980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.428 qpair failed and we were unable to recover it.
00:30:35.428 [2024-12-05 12:14:09.463266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.428 [2024-12-05 12:14:09.463299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.428 qpair failed and we were unable to recover it.
00:30:35.428 [2024-12-05 12:14:09.463539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.428 [2024-12-05 12:14:09.463572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.428 qpair failed and we were unable to recover it.
00:30:35.428 [2024-12-05 12:14:09.463851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.428 [2024-12-05 12:14:09.463882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.428 qpair failed and we were unable to recover it.
00:30:35.428 [2024-12-05 12:14:09.463986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.428 [2024-12-05 12:14:09.464037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.428 qpair failed and we were unable to recover it.
00:30:35.428 [2024-12-05 12:14:09.464162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.428 [2024-12-05 12:14:09.464194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.428 qpair failed and we were unable to recover it.
00:30:35.428 [2024-12-05 12:14:09.464392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.428 [2024-12-05 12:14:09.464424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.428 qpair failed and we were unable to recover it.
00:30:35.428 [2024-12-05 12:14:09.464695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.428 [2024-12-05 12:14:09.464726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.428 qpair failed and we were unable to recover it.
00:30:35.428 [2024-12-05 12:14:09.464895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.428 [2024-12-05 12:14:09.464926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.428 qpair failed and we were unable to recover it.
00:30:35.428 [2024-12-05 12:14:09.465130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.428 [2024-12-05 12:14:09.465161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.428 qpair failed and we were unable to recover it.
00:30:35.428 [2024-12-05 12:14:09.465337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.428 [2024-12-05 12:14:09.465381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.428 qpair failed and we were unable to recover it.
00:30:35.428 [2024-12-05 12:14:09.465638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.428 [2024-12-05 12:14:09.465670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.428 qpair failed and we were unable to recover it.
00:30:35.428 [2024-12-05 12:14:09.465856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.428 [2024-12-05 12:14:09.465888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.428 qpair failed and we were unable to recover it.
00:30:35.428 [2024-12-05 12:14:09.466084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.428 [2024-12-05 12:14:09.466115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.428 qpair failed and we were unable to recover it.
00:30:35.428 [2024-12-05 12:14:09.466290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.428 [2024-12-05 12:14:09.466327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.428 qpair failed and we were unable to recover it.
00:30:35.428 [2024-12-05 12:14:09.466523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.428 [2024-12-05 12:14:09.466556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.428 qpair failed and we were unable to recover it.
00:30:35.428 [2024-12-05 12:14:09.466794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.428 [2024-12-05 12:14:09.466826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.428 qpair failed and we were unable to recover it.
00:30:35.428 [2024-12-05 12:14:09.467007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.428 [2024-12-05 12:14:09.467039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.428 qpair failed and we were unable to recover it.
00:30:35.428 [2024-12-05 12:14:09.467219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.428 [2024-12-05 12:14:09.467252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.428 qpair failed and we were unable to recover it.
00:30:35.428 [2024-12-05 12:14:09.467376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.428 [2024-12-05 12:14:09.467408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.428 qpair failed and we were unable to recover it.
00:30:35.428 [2024-12-05 12:14:09.467582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.428 [2024-12-05 12:14:09.467615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.428 qpair failed and we were unable to recover it.
00:30:35.428 [2024-12-05 12:14:09.467801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.428 [2024-12-05 12:14:09.467833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.428 qpair failed and we were unable to recover it.
00:30:35.428 [2024-12-05 12:14:09.467949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.428 [2024-12-05 12:14:09.467981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.428 qpair failed and we were unable to recover it.
00:30:35.428 [2024-12-05 12:14:09.468091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.428 [2024-12-05 12:14:09.468123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.428 qpair failed and we were unable to recover it.
00:30:35.428 [2024-12-05 12:14:09.468239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.428 [2024-12-05 12:14:09.468272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.428 qpair failed and we were unable to recover it.
00:30:35.428 [2024-12-05 12:14:09.468388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.428 [2024-12-05 12:14:09.468421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.428 qpair failed and we were unable to recover it.
00:30:35.428 [2024-12-05 12:14:09.468601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.428 [2024-12-05 12:14:09.468632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.428 qpair failed and we were unable to recover it.
00:30:35.428 [2024-12-05 12:14:09.468810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.428 [2024-12-05 12:14:09.468842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.428 qpair failed and we were unable to recover it.
00:30:35.428 [2024-12-05 12:14:09.469046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.428 [2024-12-05 12:14:09.469077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.428 qpair failed and we were unable to recover it.
00:30:35.428 [2024-12-05 12:14:09.469201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.428 [2024-12-05 12:14:09.469233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.428 qpair failed and we were unable to recover it.
00:30:35.428 [2024-12-05 12:14:09.469363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.428 [2024-12-05 12:14:09.469405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.428 qpair failed and we were unable to recover it.
00:30:35.428 [2024-12-05 12:14:09.469578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.428 [2024-12-05 12:14:09.469610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.428 qpair failed and we were unable to recover it.
00:30:35.428 [2024-12-05 12:14:09.469733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.428 [2024-12-05 12:14:09.469765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.428 qpair failed and we were unable to recover it.
00:30:35.428 [2024-12-05 12:14:09.470008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.428 [2024-12-05 12:14:09.470039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.428 qpair failed and we were unable to recover it.
00:30:35.428 [2024-12-05 12:14:09.470167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.428 [2024-12-05 12:14:09.470199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.428 qpair failed and we were unable to recover it.
00:30:35.428 [2024-12-05 12:14:09.470306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.428 [2024-12-05 12:14:09.470338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.428 qpair failed and we were unable to recover it.
00:30:35.428 [2024-12-05 12:14:09.470579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.428 [2024-12-05 12:14:09.470646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.428 qpair failed and we were unable to recover it.
00:30:35.428 [2024-12-05 12:14:09.470807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.428 [2024-12-05 12:14:09.470865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.428 qpair failed and we were unable to recover it.
00:30:35.428 [2024-12-05 12:14:09.471000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.428 [2024-12-05 12:14:09.471035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.428 qpair failed and we were unable to recover it.
00:30:35.429 [2024-12-05 12:14:09.471220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.429 [2024-12-05 12:14:09.471253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.429 qpair failed and we were unable to recover it.
00:30:35.429 [2024-12-05 12:14:09.471438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.429 [2024-12-05 12:14:09.471473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.429 qpair failed and we were unable to recover it.
00:30:35.429 [2024-12-05 12:14:09.471593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.429 [2024-12-05 12:14:09.471635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.429 qpair failed and we were unable to recover it.
00:30:35.429 [2024-12-05 12:14:09.471810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.429 [2024-12-05 12:14:09.471841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.429 qpair failed and we were unable to recover it.
00:30:35.429 [2024-12-05 12:14:09.471964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.429 [2024-12-05 12:14:09.471995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.429 qpair failed and we were unable to recover it.
00:30:35.429 [2024-12-05 12:14:09.472118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.429 [2024-12-05 12:14:09.472150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.429 qpair failed and we were unable to recover it.
00:30:35.429 [2024-12-05 12:14:09.472266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.429 [2024-12-05 12:14:09.472296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.429 qpair failed and we were unable to recover it.
00:30:35.429 [2024-12-05 12:14:09.472489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.429 [2024-12-05 12:14:09.472522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.429 qpair failed and we were unable to recover it.
00:30:35.429 [2024-12-05 12:14:09.472693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.429 [2024-12-05 12:14:09.472725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.429 qpair failed and we were unable to recover it.
00:30:35.429 [2024-12-05 12:14:09.472837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.429 [2024-12-05 12:14:09.472868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.429 qpair failed and we were unable to recover it.
00:30:35.429 [2024-12-05 12:14:09.472984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.429 [2024-12-05 12:14:09.473016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.429 qpair failed and we were unable to recover it.
00:30:35.429 [2024-12-05 12:14:09.473211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.429 [2024-12-05 12:14:09.473242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.429 qpair failed and we were unable to recover it.
00:30:35.429 [2024-12-05 12:14:09.473431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.429 [2024-12-05 12:14:09.473462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.429 qpair failed and we were unable to recover it.
00:30:35.429 [2024-12-05 12:14:09.473651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.429 [2024-12-05 12:14:09.473681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.429 qpair failed and we were unable to recover it.
00:30:35.429 [2024-12-05 12:14:09.473794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.429 [2024-12-05 12:14:09.473826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.429 qpair failed and we were unable to recover it.
00:30:35.429 [2024-12-05 12:14:09.474019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.429 [2024-12-05 12:14:09.474051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.429 qpair failed and we were unable to recover it.
00:30:35.429 [2024-12-05 12:14:09.474238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.429 [2024-12-05 12:14:09.474269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.429 qpair failed and we were unable to recover it.
00:30:35.429 [2024-12-05 12:14:09.474508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.429 [2024-12-05 12:14:09.474542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.429 qpair failed and we were unable to recover it.
00:30:35.429 [2024-12-05 12:14:09.474667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.429 [2024-12-05 12:14:09.474699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.429 qpair failed and we were unable to recover it.
00:30:35.429 [2024-12-05 12:14:09.474879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.429 [2024-12-05 12:14:09.474912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.429 qpair failed and we were unable to recover it.
00:30:35.429 [2024-12-05 12:14:09.475092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.429 [2024-12-05 12:14:09.475123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.429 qpair failed and we were unable to recover it.
00:30:35.429 [2024-12-05 12:14:09.475297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.429 [2024-12-05 12:14:09.475329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.429 qpair failed and we were unable to recover it.
00:30:35.429 [2024-12-05 12:14:09.475551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.429 [2024-12-05 12:14:09.475585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.429 qpair failed and we were unable to recover it.
00:30:35.429 [2024-12-05 12:14:09.475710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.429 [2024-12-05 12:14:09.475742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.429 qpair failed and we were unable to recover it.
00:30:35.429 [2024-12-05 12:14:09.475908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.429 [2024-12-05 12:14:09.475939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.429 qpair failed and we were unable to recover it.
00:30:35.429 [2024-12-05 12:14:09.476108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.429 [2024-12-05 12:14:09.476138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.429 qpair failed and we were unable to recover it.
00:30:35.429 [2024-12-05 12:14:09.476268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.429 [2024-12-05 12:14:09.476298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.429 qpair failed and we were unable to recover it.
00:30:35.429 [2024-12-05 12:14:09.476407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.429 [2024-12-05 12:14:09.476442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.429 qpair failed and we were unable to recover it.
00:30:35.429 [2024-12-05 12:14:09.476555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.429 [2024-12-05 12:14:09.476586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.429 qpair failed and we were unable to recover it.
00:30:35.429 [2024-12-05 12:14:09.476757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.429 [2024-12-05 12:14:09.476826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:35.429 qpair failed and we were unable to recover it.
00:30:35.429 [2024-12-05 12:14:09.477030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.429 [2024-12-05 12:14:09.477064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.429 qpair failed and we were unable to recover it.
00:30:35.429 [2024-12-05 12:14:09.477242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.429 [2024-12-05 12:14:09.477273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.429 qpair failed and we were unable to recover it.
00:30:35.429 [2024-12-05 12:14:09.477396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.429 [2024-12-05 12:14:09.477428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.429 qpair failed and we were unable to recover it.
00:30:35.429 [2024-12-05 12:14:09.477620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.429 [2024-12-05 12:14:09.477653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.429 qpair failed and we were unable to recover it.
00:30:35.429 [2024-12-05 12:14:09.477828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.429 [2024-12-05 12:14:09.477861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.429 qpair failed and we were unable to recover it.
00:30:35.429 [2024-12-05 12:14:09.477979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.429 [2024-12-05 12:14:09.478012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.429 qpair failed and we were unable to recover it.
00:30:35.429 [2024-12-05 12:14:09.478136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.429 [2024-12-05 12:14:09.478168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.430 qpair failed and we were unable to recover it.
00:30:35.430 [2024-12-05 12:14:09.478348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.430 [2024-12-05 12:14:09.478398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.430 qpair failed and we were unable to recover it.
00:30:35.430 [2024-12-05 12:14:09.478518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.430 [2024-12-05 12:14:09.478548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.430 qpair failed and we were unable to recover it.
00:30:35.430 [2024-12-05 12:14:09.478730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.430 [2024-12-05 12:14:09.478761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.430 qpair failed and we were unable to recover it.
00:30:35.430 [2024-12-05 12:14:09.478875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.430 [2024-12-05 12:14:09.478906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.430 qpair failed and we were unable to recover it.
00:30:35.430 [2024-12-05 12:14:09.479079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.430 [2024-12-05 12:14:09.479110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.430 qpair failed and we were unable to recover it.
00:30:35.430 [2024-12-05 12:14:09.479243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.430 [2024-12-05 12:14:09.479276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.430 qpair failed and we were unable to recover it.
00:30:35.430 [2024-12-05 12:14:09.479393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.430 [2024-12-05 12:14:09.479426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.430 qpair failed and we were unable to recover it.
00:30:35.430 [2024-12-05 12:14:09.479534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.430 [2024-12-05 12:14:09.479565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.430 qpair failed and we were unable to recover it.
00:30:35.430 [2024-12-05 12:14:09.479831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.430 [2024-12-05 12:14:09.479863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.430 qpair failed and we were unable to recover it.
00:30:35.430 [2024-12-05 12:14:09.479987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.430 [2024-12-05 12:14:09.480019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.430 qpair failed and we were unable to recover it.
00:30:35.430 [2024-12-05 12:14:09.480207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.430 [2024-12-05 12:14:09.480237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.430 qpair failed and we were unable to recover it.
00:30:35.430 [2024-12-05 12:14:09.480411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.430 [2024-12-05 12:14:09.480444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.430 qpair failed and we were unable to recover it.
00:30:35.430 [2024-12-05 12:14:09.480681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.430 [2024-12-05 12:14:09.480712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.430 qpair failed and we were unable to recover it.
00:30:35.430 [2024-12-05 12:14:09.480814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.430 [2024-12-05 12:14:09.480844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.430 qpair failed and we were unable to recover it.
00:30:35.430 [2024-12-05 12:14:09.480955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.430 [2024-12-05 12:14:09.480987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.430 qpair failed and we were unable to recover it.
00:30:35.430 [2024-12-05 12:14:09.481178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.430 [2024-12-05 12:14:09.481210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.430 qpair failed and we were unable to recover it.
00:30:35.430 [2024-12-05 12:14:09.481415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.430 [2024-12-05 12:14:09.481447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.430 qpair failed and we were unable to recover it.
00:30:35.430 [2024-12-05 12:14:09.481646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.430 [2024-12-05 12:14:09.481678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.430 qpair failed and we were unable to recover it.
00:30:35.430 [2024-12-05 12:14:09.481784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.430 [2024-12-05 12:14:09.481816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.430 qpair failed and we were unable to recover it.
00:30:35.430 [2024-12-05 12:14:09.482056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.430 [2024-12-05 12:14:09.482094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.430 qpair failed and we were unable to recover it.
00:30:35.430 [2024-12-05 12:14:09.482196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.430 [2024-12-05 12:14:09.482227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.430 qpair failed and we were unable to recover it.
00:30:35.430 [2024-12-05 12:14:09.482419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.430 [2024-12-05 12:14:09.482451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.430 qpair failed and we were unable to recover it.
00:30:35.430 [2024-12-05 12:14:09.482637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.430 [2024-12-05 12:14:09.482668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.430 qpair failed and we were unable to recover it.
00:30:35.430 [2024-12-05 12:14:09.482855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.430 [2024-12-05 12:14:09.482886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.430 qpair failed and we were unable to recover it.
00:30:35.430 [2024-12-05 12:14:09.482996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.430 [2024-12-05 12:14:09.483027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.430 qpair failed and we were unable to recover it.
00:30:35.430 [2024-12-05 12:14:09.483211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.430 [2024-12-05 12:14:09.483243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.430 qpair failed and we were unable to recover it.
00:30:35.430 [2024-12-05 12:14:09.483417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.430 [2024-12-05 12:14:09.483449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.430 qpair failed and we were unable to recover it.
00:30:35.430 [2024-12-05 12:14:09.483576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.430 [2024-12-05 12:14:09.483608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.430 qpair failed and we were unable to recover it.
00:30:35.430 [2024-12-05 12:14:09.483797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.430 [2024-12-05 12:14:09.483828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.430 qpair failed and we were unable to recover it.
00:30:35.430 [2024-12-05 12:14:09.484009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.430 [2024-12-05 12:14:09.484040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.430 qpair failed and we were unable to recover it.
00:30:35.430 [2024-12-05 12:14:09.484222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.430 [2024-12-05 12:14:09.484253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.430 qpair failed and we were unable to recover it.
00:30:35.430 [2024-12-05 12:14:09.484464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.430 [2024-12-05 12:14:09.484496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.430 qpair failed and we were unable to recover it.
00:30:35.430 [2024-12-05 12:14:09.484613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.430 [2024-12-05 12:14:09.484644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.430 qpair failed and we were unable to recover it.
00:30:35.430 [2024-12-05 12:14:09.484769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.430 [2024-12-05 12:14:09.484801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.430 qpair failed and we were unable to recover it.
00:30:35.430 [2024-12-05 12:14:09.485070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.430 [2024-12-05 12:14:09.485101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.430 qpair failed and we were unable to recover it.
00:30:35.430 [2024-12-05 12:14:09.485278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.430 [2024-12-05 12:14:09.485310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.430 qpair failed and we were unable to recover it.
00:30:35.430 [2024-12-05 12:14:09.485436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.430 [2024-12-05 12:14:09.485469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.430 qpair failed and we were unable to recover it.
00:30:35.430 [2024-12-05 12:14:09.485566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.430 [2024-12-05 12:14:09.485596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.430 qpair failed and we were unable to recover it.
00:30:35.430 [2024-12-05 12:14:09.485878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.431 [2024-12-05 12:14:09.485910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.431 qpair failed and we were unable to recover it.
00:30:35.431 [2024-12-05 12:14:09.486152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.431 [2024-12-05 12:14:09.486185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.431 qpair failed and we were unable to recover it.
00:30:35.431 [2024-12-05 12:14:09.486378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.431 [2024-12-05 12:14:09.486411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.431 qpair failed and we were unable to recover it.
00:30:35.431 [2024-12-05 12:14:09.486535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.431 [2024-12-05 12:14:09.486567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.431 qpair failed and we were unable to recover it.
00:30:35.431 [2024-12-05 12:14:09.486738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.431 [2024-12-05 12:14:09.486770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.431 qpair failed and we were unable to recover it.
00:30:35.431 [2024-12-05 12:14:09.486960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.431 [2024-12-05 12:14:09.486993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.431 qpair failed and we were unable to recover it.
00:30:35.431 [2024-12-05 12:14:09.487192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.431 [2024-12-05 12:14:09.487222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.431 qpair failed and we were unable to recover it.
00:30:35.431 [2024-12-05 12:14:09.487337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.431 [2024-12-05 12:14:09.487373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.431 qpair failed and we were unable to recover it.
00:30:35.431 [2024-12-05 12:14:09.487489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.431 [2024-12-05 12:14:09.487524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.431 qpair failed and we were unable to recover it.
00:30:35.431 [2024-12-05 12:14:09.487634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.431 [2024-12-05 12:14:09.487665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.431 qpair failed and we were unable to recover it.
00:30:35.431 [2024-12-05 12:14:09.487763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.431 [2024-12-05 12:14:09.487793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.431 qpair failed and we were unable to recover it.
00:30:35.431 [2024-12-05 12:14:09.487970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.431 [2024-12-05 12:14:09.488001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.431 qpair failed and we were unable to recover it.
00:30:35.431 [2024-12-05 12:14:09.488245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.431 [2024-12-05 12:14:09.488277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.431 qpair failed and we were unable to recover it.
00:30:35.431 [2024-12-05 12:14:09.488567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.431 [2024-12-05 12:14:09.488599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.431 qpair failed and we were unable to recover it.
00:30:35.431 [2024-12-05 12:14:09.488736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.431 [2024-12-05 12:14:09.488768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.431 qpair failed and we were unable to recover it.
00:30:35.431 [2024-12-05 12:14:09.488878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.431 [2024-12-05 12:14:09.488910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.431 qpair failed and we were unable to recover it.
00:30:35.431 [2024-12-05 12:14:09.489027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.431 [2024-12-05 12:14:09.489058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.431 qpair failed and we were unable to recover it.
00:30:35.431 [2024-12-05 12:14:09.489180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.431 [2024-12-05 12:14:09.489214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.431 qpair failed and we were unable to recover it. 00:30:35.431 [2024-12-05 12:14:09.489346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.431 [2024-12-05 12:14:09.489401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.431 qpair failed and we were unable to recover it. 00:30:35.431 [2024-12-05 12:14:09.489511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.431 [2024-12-05 12:14:09.489542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.431 qpair failed and we were unable to recover it. 00:30:35.431 [2024-12-05 12:14:09.489653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.431 [2024-12-05 12:14:09.489684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.431 qpair failed and we were unable to recover it. 00:30:35.431 [2024-12-05 12:14:09.489852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.431 [2024-12-05 12:14:09.489883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.431 qpair failed and we were unable to recover it. 
00:30:35.431 [2024-12-05 12:14:09.490122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.431 [2024-12-05 12:14:09.490192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.431 qpair failed and we were unable to recover it.
00:30:35.431 [2024-12-05 12:14:09.490322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.431 [2024-12-05 12:14:09.490357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.431 qpair failed and we were unable to recover it.
00:30:35.431 [2024-12-05 12:14:09.490620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.431 [2024-12-05 12:14:09.490653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.431 qpair failed and we were unable to recover it.
00:30:35.431 [2024-12-05 12:14:09.490850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.431 [2024-12-05 12:14:09.490882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.431 qpair failed and we were unable to recover it.
00:30:35.431 [2024-12-05 12:14:09.491055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.431 [2024-12-05 12:14:09.491088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.431 qpair failed and we were unable to recover it.
00:30:35.434 [2024-12-05 12:14:09.511476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.434 [2024-12-05 12:14:09.511508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.434 qpair failed and we were unable to recover it. 00:30:35.434 [2024-12-05 12:14:09.511716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.434 [2024-12-05 12:14:09.511747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.434 qpair failed and we were unable to recover it. 00:30:35.434 [2024-12-05 12:14:09.511881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.434 [2024-12-05 12:14:09.511912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.434 qpair failed and we were unable to recover it. 00:30:35.434 [2024-12-05 12:14:09.512099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.434 [2024-12-05 12:14:09.512130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.434 qpair failed and we were unable to recover it. 00:30:35.434 [2024-12-05 12:14:09.512403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.434 [2024-12-05 12:14:09.512436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.434 qpair failed and we were unable to recover it. 
00:30:35.434 [2024-12-05 12:14:09.512538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.434 [2024-12-05 12:14:09.512569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.434 qpair failed and we were unable to recover it. 00:30:35.434 [2024-12-05 12:14:09.512741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.434 [2024-12-05 12:14:09.512772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.434 qpair failed and we were unable to recover it. 00:30:35.434 [2024-12-05 12:14:09.512953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.434 [2024-12-05 12:14:09.512987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.434 qpair failed and we were unable to recover it. 00:30:35.434 [2024-12-05 12:14:09.513175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.434 [2024-12-05 12:14:09.513207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.434 qpair failed and we were unable to recover it. 00:30:35.434 [2024-12-05 12:14:09.513402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.434 [2024-12-05 12:14:09.513434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.434 qpair failed and we were unable to recover it. 
00:30:35.434 [2024-12-05 12:14:09.513634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.434 [2024-12-05 12:14:09.513665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.434 qpair failed and we were unable to recover it. 00:30:35.434 [2024-12-05 12:14:09.513804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.434 [2024-12-05 12:14:09.513835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.434 qpair failed and we were unable to recover it. 00:30:35.434 [2024-12-05 12:14:09.514098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.434 [2024-12-05 12:14:09.514129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.434 qpair failed and we were unable to recover it. 00:30:35.434 [2024-12-05 12:14:09.514304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.434 [2024-12-05 12:14:09.514334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.434 qpair failed and we were unable to recover it. 00:30:35.434 [2024-12-05 12:14:09.514469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.434 [2024-12-05 12:14:09.514501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.434 qpair failed and we were unable to recover it. 
00:30:35.434 [2024-12-05 12:14:09.514766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.434 [2024-12-05 12:14:09.514797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.434 qpair failed and we were unable to recover it. 00:30:35.434 [2024-12-05 12:14:09.514911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.434 [2024-12-05 12:14:09.514943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.434 qpair failed and we were unable to recover it. 00:30:35.434 [2024-12-05 12:14:09.515114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.434 [2024-12-05 12:14:09.515145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.434 qpair failed and we were unable to recover it. 00:30:35.434 [2024-12-05 12:14:09.515326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.434 [2024-12-05 12:14:09.515357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.434 qpair failed and we were unable to recover it. 00:30:35.434 [2024-12-05 12:14:09.515556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.434 [2024-12-05 12:14:09.515588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.434 qpair failed and we were unable to recover it. 
00:30:35.434 [2024-12-05 12:14:09.515828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.434 [2024-12-05 12:14:09.515858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.434 qpair failed and we were unable to recover it. 00:30:35.434 [2024-12-05 12:14:09.515982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.434 [2024-12-05 12:14:09.516013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.434 qpair failed and we were unable to recover it. 00:30:35.434 [2024-12-05 12:14:09.516197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.434 [2024-12-05 12:14:09.516228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.434 qpair failed and we were unable to recover it. 00:30:35.434 [2024-12-05 12:14:09.516492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.434 [2024-12-05 12:14:09.516525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.434 qpair failed and we were unable to recover it. 00:30:35.434 [2024-12-05 12:14:09.516790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.434 [2024-12-05 12:14:09.516822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.434 qpair failed and we were unable to recover it. 
00:30:35.434 [2024-12-05 12:14:09.517067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.434 [2024-12-05 12:14:09.517098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.434 qpair failed and we were unable to recover it. 00:30:35.434 [2024-12-05 12:14:09.517354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.434 [2024-12-05 12:14:09.517395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.434 qpair failed and we were unable to recover it. 00:30:35.434 [2024-12-05 12:14:09.517639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.435 [2024-12-05 12:14:09.517671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.435 qpair failed and we were unable to recover it. 00:30:35.435 [2024-12-05 12:14:09.517853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.435 [2024-12-05 12:14:09.517883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.435 qpair failed and we were unable to recover it. 00:30:35.435 [2024-12-05 12:14:09.518076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.435 [2024-12-05 12:14:09.518108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.435 qpair failed and we were unable to recover it. 
00:30:35.435 [2024-12-05 12:14:09.518314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.435 [2024-12-05 12:14:09.518345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.435 qpair failed and we were unable to recover it. 00:30:35.435 [2024-12-05 12:14:09.518467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.435 [2024-12-05 12:14:09.518499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.435 qpair failed and we were unable to recover it. 00:30:35.435 [2024-12-05 12:14:09.518679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.435 [2024-12-05 12:14:09.518711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.435 qpair failed and we were unable to recover it. 00:30:35.435 [2024-12-05 12:14:09.518827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.435 [2024-12-05 12:14:09.518864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.435 qpair failed and we were unable to recover it. 00:30:35.435 [2024-12-05 12:14:09.519120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.435 [2024-12-05 12:14:09.519157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.435 qpair failed and we were unable to recover it. 
00:30:35.435 [2024-12-05 12:14:09.519341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.435 [2024-12-05 12:14:09.519384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.435 qpair failed and we were unable to recover it. 00:30:35.435 [2024-12-05 12:14:09.519488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.435 [2024-12-05 12:14:09.519519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.435 qpair failed and we were unable to recover it. 00:30:35.435 [2024-12-05 12:14:09.519640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.435 [2024-12-05 12:14:09.519671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.435 qpair failed and we were unable to recover it. 00:30:35.435 [2024-12-05 12:14:09.519851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.435 [2024-12-05 12:14:09.519883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.435 qpair failed and we were unable to recover it. 00:30:35.435 [2024-12-05 12:14:09.520061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.435 [2024-12-05 12:14:09.520093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.435 qpair failed and we were unable to recover it. 
00:30:35.435 [2024-12-05 12:14:09.520363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.435 [2024-12-05 12:14:09.520405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.435 qpair failed and we were unable to recover it. 00:30:35.435 [2024-12-05 12:14:09.520538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.435 [2024-12-05 12:14:09.520570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.435 qpair failed and we were unable to recover it. 00:30:35.435 [2024-12-05 12:14:09.520679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.435 [2024-12-05 12:14:09.520711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.435 qpair failed and we were unable to recover it. 00:30:35.435 [2024-12-05 12:14:09.520828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.435 [2024-12-05 12:14:09.520859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.435 qpair failed and we were unable to recover it. 00:30:35.435 [2024-12-05 12:14:09.521041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.435 [2024-12-05 12:14:09.521071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.435 qpair failed and we were unable to recover it. 
00:30:35.435 [2024-12-05 12:14:09.521200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.435 [2024-12-05 12:14:09.521231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.435 qpair failed and we were unable to recover it. 00:30:35.435 [2024-12-05 12:14:09.521398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.435 [2024-12-05 12:14:09.521431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.435 qpair failed and we were unable to recover it. 00:30:35.435 [2024-12-05 12:14:09.521552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.435 [2024-12-05 12:14:09.521583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.435 qpair failed and we were unable to recover it. 00:30:35.435 [2024-12-05 12:14:09.521768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.435 [2024-12-05 12:14:09.521799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.435 qpair failed and we were unable to recover it. 00:30:35.435 [2024-12-05 12:14:09.521971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.435 [2024-12-05 12:14:09.522004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.435 qpair failed and we were unable to recover it. 
00:30:35.435 [2024-12-05 12:14:09.522201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.435 [2024-12-05 12:14:09.522232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.435 qpair failed and we were unable to recover it. 00:30:35.435 [2024-12-05 12:14:09.522345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.435 [2024-12-05 12:14:09.522384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.435 qpair failed and we were unable to recover it. 00:30:35.435 [2024-12-05 12:14:09.522564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.435 [2024-12-05 12:14:09.522595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.435 qpair failed and we were unable to recover it. 00:30:35.435 [2024-12-05 12:14:09.522812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.435 [2024-12-05 12:14:09.522844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.435 qpair failed and we were unable to recover it. 00:30:35.435 [2024-12-05 12:14:09.523031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.435 [2024-12-05 12:14:09.523063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.435 qpair failed and we were unable to recover it. 
00:30:35.435 [2024-12-05 12:14:09.523247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.435 [2024-12-05 12:14:09.523279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.435 qpair failed and we were unable to recover it. 00:30:35.435 [2024-12-05 12:14:09.523384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.435 [2024-12-05 12:14:09.523416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.435 qpair failed and we were unable to recover it. 00:30:35.435 [2024-12-05 12:14:09.523598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.435 [2024-12-05 12:14:09.523630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.435 qpair failed and we were unable to recover it. 00:30:35.435 [2024-12-05 12:14:09.523810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.435 [2024-12-05 12:14:09.523842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.435 qpair failed and we were unable to recover it. 00:30:35.435 [2024-12-05 12:14:09.524079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.435 [2024-12-05 12:14:09.524110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.435 qpair failed and we were unable to recover it. 
00:30:35.435 [2024-12-05 12:14:09.524284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.435 [2024-12-05 12:14:09.524316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.435 qpair failed and we were unable to recover it. 00:30:35.435 [2024-12-05 12:14:09.524514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.435 [2024-12-05 12:14:09.524547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.435 qpair failed and we were unable to recover it. 00:30:35.435 [2024-12-05 12:14:09.524683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.435 [2024-12-05 12:14:09.524714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.435 qpair failed and we were unable to recover it. 00:30:35.435 [2024-12-05 12:14:09.524898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.435 [2024-12-05 12:14:09.524929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.435 qpair failed and we were unable to recover it. 00:30:35.435 [2024-12-05 12:14:09.525107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.435 [2024-12-05 12:14:09.525138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.435 qpair failed and we were unable to recover it. 
00:30:35.435 [2024-12-05 12:14:09.525400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.436 [2024-12-05 12:14:09.525433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.436 qpair failed and we were unable to recover it. 00:30:35.436 [2024-12-05 12:14:09.525568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.436 [2024-12-05 12:14:09.525600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.436 qpair failed and we were unable to recover it. 00:30:35.436 [2024-12-05 12:14:09.525773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.436 [2024-12-05 12:14:09.525804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.436 qpair failed and we were unable to recover it. 00:30:35.436 [2024-12-05 12:14:09.526048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.436 [2024-12-05 12:14:09.526079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.436 qpair failed and we were unable to recover it. 00:30:35.436 [2024-12-05 12:14:09.526189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.436 [2024-12-05 12:14:09.526220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.436 qpair failed and we were unable to recover it. 
00:30:35.436 [2024-12-05 12:14:09.526457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.436 [2024-12-05 12:14:09.526490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.436 qpair failed and we were unable to recover it. 00:30:35.436 [2024-12-05 12:14:09.526669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.436 [2024-12-05 12:14:09.526701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.436 qpair failed and we were unable to recover it. 00:30:35.436 [2024-12-05 12:14:09.526985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.436 [2024-12-05 12:14:09.527016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.436 qpair failed and we were unable to recover it. 00:30:35.436 [2024-12-05 12:14:09.527219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.436 [2024-12-05 12:14:09.527250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.436 qpair failed and we were unable to recover it. 00:30:35.436 [2024-12-05 12:14:09.527450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.436 [2024-12-05 12:14:09.527488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.436 qpair failed and we were unable to recover it. 
00:30:35.436 [2024-12-05 12:14:09.527727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.436 [2024-12-05 12:14:09.527758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.436 qpair failed and we were unable to recover it.
00:30:35.436 [2024-12-05 12:14:09.527963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.436 [2024-12-05 12:14:09.527994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.436 qpair failed and we were unable to recover it.
00:30:35.436 [2024-12-05 12:14:09.528175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.436 [2024-12-05 12:14:09.528208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.436 qpair failed and we were unable to recover it.
00:30:35.436 [2024-12-05 12:14:09.528482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.436 [2024-12-05 12:14:09.528515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.436 qpair failed and we were unable to recover it.
00:30:35.436 [2024-12-05 12:14:09.528777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.436 [2024-12-05 12:14:09.528808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.436 qpair failed and we were unable to recover it.
00:30:35.436 [2024-12-05 12:14:09.528929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.436 [2024-12-05 12:14:09.528960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.436 qpair failed and we were unable to recover it.
00:30:35.436 [2024-12-05 12:14:09.529202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.436 [2024-12-05 12:14:09.529233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.436 qpair failed and we were unable to recover it.
00:30:35.436 [2024-12-05 12:14:09.529362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.436 [2024-12-05 12:14:09.529405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.436 qpair failed and we were unable to recover it.
00:30:35.436 [2024-12-05 12:14:09.529616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.436 [2024-12-05 12:14:09.529647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.436 qpair failed and we were unable to recover it.
00:30:35.436 [2024-12-05 12:14:09.529830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.436 [2024-12-05 12:14:09.529862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.436 qpair failed and we were unable to recover it.
00:30:35.436 [2024-12-05 12:14:09.530030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.436 [2024-12-05 12:14:09.530061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.436 qpair failed and we were unable to recover it.
00:30:35.436 [2024-12-05 12:14:09.530230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.436 [2024-12-05 12:14:09.530261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.436 qpair failed and we were unable to recover it.
00:30:35.436 [2024-12-05 12:14:09.530447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.436 [2024-12-05 12:14:09.530480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.436 qpair failed and we were unable to recover it.
00:30:35.436 [2024-12-05 12:14:09.530594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.436 [2024-12-05 12:14:09.530626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.436 qpair failed and we were unable to recover it.
00:30:35.436 [2024-12-05 12:14:09.530766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.436 [2024-12-05 12:14:09.530798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.436 qpair failed and we were unable to recover it.
00:30:35.436 [2024-12-05 12:14:09.530982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.436 [2024-12-05 12:14:09.531013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.436 qpair failed and we were unable to recover it.
00:30:35.436 [2024-12-05 12:14:09.531136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.436 [2024-12-05 12:14:09.531168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.436 qpair failed and we were unable to recover it.
00:30:35.436 [2024-12-05 12:14:09.531333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.436 [2024-12-05 12:14:09.531365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.436 qpair failed and we were unable to recover it.
00:30:35.436 [2024-12-05 12:14:09.531576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.436 [2024-12-05 12:14:09.531608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.436 qpair failed and we were unable to recover it.
00:30:35.436 [2024-12-05 12:14:09.531779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.436 [2024-12-05 12:14:09.531811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.436 qpair failed and we were unable to recover it.
00:30:35.436 [2024-12-05 12:14:09.532065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.436 [2024-12-05 12:14:09.532097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.436 qpair failed and we were unable to recover it.
00:30:35.436 [2024-12-05 12:14:09.532375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.436 [2024-12-05 12:14:09.532409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.436 qpair failed and we were unable to recover it.
00:30:35.436 [2024-12-05 12:14:09.532675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.436 [2024-12-05 12:14:09.532705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.436 qpair failed and we were unable to recover it.
00:30:35.436 [2024-12-05 12:14:09.532956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.436 [2024-12-05 12:14:09.532987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.436 qpair failed and we were unable to recover it.
00:30:35.437 [2024-12-05 12:14:09.533109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.437 [2024-12-05 12:14:09.533141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.437 qpair failed and we were unable to recover it.
00:30:35.437 [2024-12-05 12:14:09.533311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.437 [2024-12-05 12:14:09.533341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.437 qpair failed and we were unable to recover it.
00:30:35.437 [2024-12-05 12:14:09.533471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.437 [2024-12-05 12:14:09.533504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.437 qpair failed and we were unable to recover it.
00:30:35.437 [2024-12-05 12:14:09.533698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.437 [2024-12-05 12:14:09.533729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.437 qpair failed and we were unable to recover it.
00:30:35.437 [2024-12-05 12:14:09.533945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.437 [2024-12-05 12:14:09.533977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.437 qpair failed and we were unable to recover it.
00:30:35.437 [2024-12-05 12:14:09.534239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.437 [2024-12-05 12:14:09.534270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.437 qpair failed and we were unable to recover it.
00:30:35.437 [2024-12-05 12:14:09.534388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.437 [2024-12-05 12:14:09.534421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.437 qpair failed and we were unable to recover it.
00:30:35.437 [2024-12-05 12:14:09.534603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.437 [2024-12-05 12:14:09.534635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.437 qpair failed and we were unable to recover it.
00:30:35.437 [2024-12-05 12:14:09.534874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.437 [2024-12-05 12:14:09.534905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.437 qpair failed and we were unable to recover it.
00:30:35.437 [2024-12-05 12:14:09.535187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.437 [2024-12-05 12:14:09.535218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.437 qpair failed and we were unable to recover it.
00:30:35.437 [2024-12-05 12:14:09.535423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.437 [2024-12-05 12:14:09.535457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.437 qpair failed and we were unable to recover it.
00:30:35.437 [2024-12-05 12:14:09.535724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.437 [2024-12-05 12:14:09.535755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.437 qpair failed and we were unable to recover it.
00:30:35.437 [2024-12-05 12:14:09.535941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.437 [2024-12-05 12:14:09.535973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.437 qpair failed and we were unable to recover it.
00:30:35.437 [2024-12-05 12:14:09.536086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.437 [2024-12-05 12:14:09.536116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.437 qpair failed and we were unable to recover it.
00:30:35.437 [2024-12-05 12:14:09.536353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.437 [2024-12-05 12:14:09.536405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.437 qpair failed and we were unable to recover it.
00:30:35.437 [2024-12-05 12:14:09.536657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.437 [2024-12-05 12:14:09.536694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.437 qpair failed and we were unable to recover it.
00:30:35.437 [2024-12-05 12:14:09.536822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.437 [2024-12-05 12:14:09.536853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.437 qpair failed and we were unable to recover it.
00:30:35.437 [2024-12-05 12:14:09.537118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.437 [2024-12-05 12:14:09.537149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.437 qpair failed and we were unable to recover it.
00:30:35.437 [2024-12-05 12:14:09.537405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.437 [2024-12-05 12:14:09.537437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.437 qpair failed and we were unable to recover it.
00:30:35.437 [2024-12-05 12:14:09.537634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.437 [2024-12-05 12:14:09.537666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.437 qpair failed and we were unable to recover it.
00:30:35.437 [2024-12-05 12:14:09.537931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.437 [2024-12-05 12:14:09.537961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.437 qpair failed and we were unable to recover it.
00:30:35.437 [2024-12-05 12:14:09.538155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.437 [2024-12-05 12:14:09.538186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.437 qpair failed and we were unable to recover it.
00:30:35.437 [2024-12-05 12:14:09.538461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.437 [2024-12-05 12:14:09.538494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.437 qpair failed and we were unable to recover it.
00:30:35.437 [2024-12-05 12:14:09.538690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.437 [2024-12-05 12:14:09.538721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.437 qpair failed and we were unable to recover it.
00:30:35.437 [2024-12-05 12:14:09.538980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.437 [2024-12-05 12:14:09.539010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.437 qpair failed and we were unable to recover it.
00:30:35.437 [2024-12-05 12:14:09.539202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.437 [2024-12-05 12:14:09.539233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.437 qpair failed and we were unable to recover it.
00:30:35.437 [2024-12-05 12:14:09.539434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.437 [2024-12-05 12:14:09.539467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.437 qpair failed and we were unable to recover it.
00:30:35.437 [2024-12-05 12:14:09.539641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.437 [2024-12-05 12:14:09.539671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.437 qpair failed and we were unable to recover it.
00:30:35.437 [2024-12-05 12:14:09.539914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.437 [2024-12-05 12:14:09.539945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.437 qpair failed and we were unable to recover it.
00:30:35.437 [2024-12-05 12:14:09.540221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.437 [2024-12-05 12:14:09.540253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.437 qpair failed and we were unable to recover it.
00:30:35.437 [2024-12-05 12:14:09.540443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.437 [2024-12-05 12:14:09.540475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.437 qpair failed and we were unable to recover it.
00:30:35.437 [2024-12-05 12:14:09.540669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.437 [2024-12-05 12:14:09.540700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.437 qpair failed and we were unable to recover it.
00:30:35.437 [2024-12-05 12:14:09.540824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.437 [2024-12-05 12:14:09.540855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.437 qpair failed and we were unable to recover it.
00:30:35.437 [2024-12-05 12:14:09.540985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.437 [2024-12-05 12:14:09.541016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.437 qpair failed and we were unable to recover it.
00:30:35.437 [2024-12-05 12:14:09.541278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.437 [2024-12-05 12:14:09.541310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.437 qpair failed and we were unable to recover it.
00:30:35.437 [2024-12-05 12:14:09.541505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.437 [2024-12-05 12:14:09.541538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.437 qpair failed and we were unable to recover it.
00:30:35.437 [2024-12-05 12:14:09.541659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.437 [2024-12-05 12:14:09.541690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.437 qpair failed and we were unable to recover it.
00:30:35.437 [2024-12-05 12:14:09.541807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.437 [2024-12-05 12:14:09.541837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.437 qpair failed and we were unable to recover it.
00:30:35.438 [2024-12-05 12:14:09.542044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.438 [2024-12-05 12:14:09.542076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.438 qpair failed and we were unable to recover it.
00:30:35.438 [2024-12-05 12:14:09.542277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.438 [2024-12-05 12:14:09.542308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.438 qpair failed and we were unable to recover it.
00:30:35.438 [2024-12-05 12:14:09.542505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.438 [2024-12-05 12:14:09.542538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.438 qpair failed and we were unable to recover it.
00:30:35.438 [2024-12-05 12:14:09.542783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.438 [2024-12-05 12:14:09.542815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.438 qpair failed and we were unable to recover it.
00:30:35.438 [2024-12-05 12:14:09.543040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.438 [2024-12-05 12:14:09.543071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.438 qpair failed and we were unable to recover it.
00:30:35.438 [2024-12-05 12:14:09.543280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.438 [2024-12-05 12:14:09.543311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.438 qpair failed and we were unable to recover it.
00:30:35.438 [2024-12-05 12:14:09.543441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.438 [2024-12-05 12:14:09.543473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.438 qpair failed and we were unable to recover it.
00:30:35.438 [2024-12-05 12:14:09.543647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.438 [2024-12-05 12:14:09.543678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.438 qpair failed and we were unable to recover it.
00:30:35.438 [2024-12-05 12:14:09.543941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.438 [2024-12-05 12:14:09.543973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.438 qpair failed and we were unable to recover it.
00:30:35.438 [2024-12-05 12:14:09.544218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.438 [2024-12-05 12:14:09.544249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.438 qpair failed and we were unable to recover it.
00:30:35.438 [2024-12-05 12:14:09.544519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.438 [2024-12-05 12:14:09.544552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.438 qpair failed and we were unable to recover it.
00:30:35.438 [2024-12-05 12:14:09.544737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.438 [2024-12-05 12:14:09.544768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.438 qpair failed and we were unable to recover it.
00:30:35.438 [2024-12-05 12:14:09.544945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.438 [2024-12-05 12:14:09.544976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.438 qpair failed and we were unable to recover it.
00:30:35.438 [2024-12-05 12:14:09.545217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.438 [2024-12-05 12:14:09.545248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.438 qpair failed and we were unable to recover it.
00:30:35.438 [2024-12-05 12:14:09.545449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.438 [2024-12-05 12:14:09.545482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.438 qpair failed and we were unable to recover it.
00:30:35.438 [2024-12-05 12:14:09.545608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.438 [2024-12-05 12:14:09.545640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.438 qpair failed and we were unable to recover it.
00:30:35.438 [2024-12-05 12:14:09.545901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.438 [2024-12-05 12:14:09.545932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.438 qpair failed and we were unable to recover it.
00:30:35.438 [2024-12-05 12:14:09.546122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.438 [2024-12-05 12:14:09.546159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.438 qpair failed and we were unable to recover it.
00:30:35.438 [2024-12-05 12:14:09.546340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.438 [2024-12-05 12:14:09.546381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.438 qpair failed and we were unable to recover it.
00:30:35.438 [2024-12-05 12:14:09.546506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.438 [2024-12-05 12:14:09.546537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.438 qpair failed and we were unable to recover it.
00:30:35.438 [2024-12-05 12:14:09.546749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.438 [2024-12-05 12:14:09.546780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.438 qpair failed and we were unable to recover it.
00:30:35.438 [2024-12-05 12:14:09.546960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.438 [2024-12-05 12:14:09.546992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.438 qpair failed and we were unable to recover it.
00:30:35.438 [2024-12-05 12:14:09.547175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.438 [2024-12-05 12:14:09.547206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.438 qpair failed and we were unable to recover it.
00:30:35.438 [2024-12-05 12:14:09.547396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.438 [2024-12-05 12:14:09.547429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.438 qpair failed and we were unable to recover it.
00:30:35.438 [2024-12-05 12:14:09.547642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.438 [2024-12-05 12:14:09.547674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.438 qpair failed and we were unable to recover it.
00:30:35.438 [2024-12-05 12:14:09.547851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.438 [2024-12-05 12:14:09.547883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.438 qpair failed and we were unable to recover it.
00:30:35.438 [2024-12-05 12:14:09.548059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.438 [2024-12-05 12:14:09.548090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.438 qpair failed and we were unable to recover it.
00:30:35.438 [2024-12-05 12:14:09.548263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.438 [2024-12-05 12:14:09.548294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.438 qpair failed and we were unable to recover it.
00:30:35.438 [2024-12-05 12:14:09.548486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.438 [2024-12-05 12:14:09.548519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.438 qpair failed and we were unable to recover it.
00:30:35.438 [2024-12-05 12:14:09.548703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.438 [2024-12-05 12:14:09.548734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.438 qpair failed and we were unable to recover it.
00:30:35.438 [2024-12-05 12:14:09.548915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.438 [2024-12-05 12:14:09.548946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.438 qpair failed and we were unable to recover it.
00:30:35.438 [2024-12-05 12:14:09.549061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.438 [2024-12-05 12:14:09.549093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.438 qpair failed and we were unable to recover it.
00:30:35.438 [2024-12-05 12:14:09.549275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.438 [2024-12-05 12:14:09.549306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.438 qpair failed and we were unable to recover it.
00:30:35.438 [2024-12-05 12:14:09.549512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.438 [2024-12-05 12:14:09.549545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.438 qpair failed and we were unable to recover it.
00:30:35.438 [2024-12-05 12:14:09.549810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.438 [2024-12-05 12:14:09.549841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.438 qpair failed and we were unable to recover it.
00:30:35.438 [2024-12-05 12:14:09.549957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.438 [2024-12-05 12:14:09.549989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.438 qpair failed and we were unable to recover it.
00:30:35.438 [2024-12-05 12:14:09.550279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.438 [2024-12-05 12:14:09.550310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.438 qpair failed and we were unable to recover it.
00:30:35.438 [2024-12-05 12:14:09.550523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.438 [2024-12-05 12:14:09.550556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.438 qpair failed and we were unable to recover it.
00:30:35.439 [2024-12-05 12:14:09.550774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.439 [2024-12-05 12:14:09.550805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.439 qpair failed and we were unable to recover it.
00:30:35.439 [2024-12-05 12:14:09.550928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.439 [2024-12-05 12:14:09.550958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.439 qpair failed and we were unable to recover it.
00:30:35.439 [2024-12-05 12:14:09.551153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.439 [2024-12-05 12:14:09.551185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.439 qpair failed and we were unable to recover it.
00:30:35.439 [2024-12-05 12:14:09.551317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.439 [2024-12-05 12:14:09.551347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.439 qpair failed and we were unable to recover it.
00:30:35.439 [2024-12-05 12:14:09.551524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.439 [2024-12-05 12:14:09.551556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.439 qpair failed and we were unable to recover it.
00:30:35.439 [2024-12-05 12:14:09.551741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.439 [2024-12-05 12:14:09.551772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.439 qpair failed and we were unable to recover it.
00:30:35.439 [2024-12-05 12:14:09.551967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.439 [2024-12-05 12:14:09.551999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.439 qpair failed and we were unable to recover it.
00:30:35.439 [2024-12-05 12:14:09.552187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.439 [2024-12-05 12:14:09.552219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.439 qpair failed and we were unable to recover it.
00:30:35.439 [2024-12-05 12:14:09.552421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.439 [2024-12-05 12:14:09.552453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.439 qpair failed and we were unable to recover it.
00:30:35.439 [2024-12-05 12:14:09.552589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.439 [2024-12-05 12:14:09.552621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.439 qpair failed and we were unable to recover it.
00:30:35.439 [2024-12-05 12:14:09.552815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.439 [2024-12-05 12:14:09.552847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.439 qpair failed and we were unable to recover it.
00:30:35.439 [2024-12-05 12:14:09.552976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.439 [2024-12-05 12:14:09.553007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.439 qpair failed and we were unable to recover it.
00:30:35.439 [2024-12-05 12:14:09.553114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.439 [2024-12-05 12:14:09.553145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.439 qpair failed and we were unable to recover it.
00:30:35.439 [2024-12-05 12:14:09.553253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.439 [2024-12-05 12:14:09.553284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.439 qpair failed and we were unable to recover it.
00:30:35.439 [2024-12-05 12:14:09.553470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.439 [2024-12-05 12:14:09.553502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.439 qpair failed and we were unable to recover it.
00:30:35.439 [2024-12-05 12:14:09.553716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.439 [2024-12-05 12:14:09.553747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.439 qpair failed and we were unable to recover it. 00:30:35.439 [2024-12-05 12:14:09.553883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.439 [2024-12-05 12:14:09.553915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.439 qpair failed and we were unable to recover it. 00:30:35.439 [2024-12-05 12:14:09.554058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.439 [2024-12-05 12:14:09.554089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.439 qpair failed and we were unable to recover it. 00:30:35.439 [2024-12-05 12:14:09.554349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.439 [2024-12-05 12:14:09.554390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.439 qpair failed and we were unable to recover it. 00:30:35.439 [2024-12-05 12:14:09.554630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.439 [2024-12-05 12:14:09.554667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.439 qpair failed and we were unable to recover it. 
00:30:35.439 [2024-12-05 12:14:09.554908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.439 [2024-12-05 12:14:09.554940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.439 qpair failed and we were unable to recover it. 00:30:35.439 [2024-12-05 12:14:09.555124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.439 [2024-12-05 12:14:09.555155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.439 qpair failed and we were unable to recover it. 00:30:35.439 [2024-12-05 12:14:09.555338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.439 [2024-12-05 12:14:09.555394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.439 qpair failed and we were unable to recover it. 00:30:35.439 [2024-12-05 12:14:09.555514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.439 [2024-12-05 12:14:09.555546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.439 qpair failed and we were unable to recover it. 00:30:35.439 [2024-12-05 12:14:09.555670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.439 [2024-12-05 12:14:09.555701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.439 qpair failed and we were unable to recover it. 
00:30:35.439 [2024-12-05 12:14:09.555909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.439 [2024-12-05 12:14:09.555941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.439 qpair failed and we were unable to recover it. 00:30:35.439 [2024-12-05 12:14:09.556081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.439 [2024-12-05 12:14:09.556112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.439 qpair failed and we were unable to recover it. 00:30:35.439 [2024-12-05 12:14:09.556295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.439 [2024-12-05 12:14:09.556326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.439 qpair failed and we were unable to recover it. 00:30:35.439 [2024-12-05 12:14:09.556454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.439 [2024-12-05 12:14:09.556487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.439 qpair failed and we were unable to recover it. 00:30:35.439 [2024-12-05 12:14:09.556654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.439 [2024-12-05 12:14:09.556685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.439 qpair failed and we were unable to recover it. 
00:30:35.439 [2024-12-05 12:14:09.556791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.439 [2024-12-05 12:14:09.556822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.439 qpair failed and we were unable to recover it. 00:30:35.439 [2024-12-05 12:14:09.557059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.439 [2024-12-05 12:14:09.557091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.439 qpair failed and we were unable to recover it. 00:30:35.439 [2024-12-05 12:14:09.557204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.439 [2024-12-05 12:14:09.557234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.439 qpair failed and we were unable to recover it. 00:30:35.439 [2024-12-05 12:14:09.557507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.439 [2024-12-05 12:14:09.557539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.439 qpair failed and we were unable to recover it. 00:30:35.439 [2024-12-05 12:14:09.557659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.439 [2024-12-05 12:14:09.557691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.439 qpair failed and we were unable to recover it. 
00:30:35.439 [2024-12-05 12:14:09.557810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.439 [2024-12-05 12:14:09.557841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.439 qpair failed and we were unable to recover it. 00:30:35.439 [2024-12-05 12:14:09.558098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.439 [2024-12-05 12:14:09.558129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.439 qpair failed and we were unable to recover it. 00:30:35.439 [2024-12-05 12:14:09.558319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.439 [2024-12-05 12:14:09.558350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.439 qpair failed and we were unable to recover it. 00:30:35.440 [2024-12-05 12:14:09.558550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.440 [2024-12-05 12:14:09.558582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.440 qpair failed and we were unable to recover it. 00:30:35.440 [2024-12-05 12:14:09.558795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.440 [2024-12-05 12:14:09.558826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.440 qpair failed and we were unable to recover it. 
00:30:35.440 [2024-12-05 12:14:09.559060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.440 [2024-12-05 12:14:09.559091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.440 qpair failed and we were unable to recover it. 00:30:35.440 [2024-12-05 12:14:09.559276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.440 [2024-12-05 12:14:09.559307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.440 qpair failed and we were unable to recover it. 00:30:35.440 [2024-12-05 12:14:09.559565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.440 [2024-12-05 12:14:09.559597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.440 qpair failed and we were unable to recover it. 00:30:35.440 [2024-12-05 12:14:09.559811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.440 [2024-12-05 12:14:09.559841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.440 qpair failed and we were unable to recover it. 00:30:35.440 [2024-12-05 12:14:09.559947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.440 [2024-12-05 12:14:09.559979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.440 qpair failed and we were unable to recover it. 
00:30:35.440 [2024-12-05 12:14:09.560169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.440 [2024-12-05 12:14:09.560200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.440 qpair failed and we were unable to recover it. 00:30:35.440 [2024-12-05 12:14:09.560494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.440 [2024-12-05 12:14:09.560527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.440 qpair failed and we were unable to recover it. 00:30:35.440 [2024-12-05 12:14:09.560808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.440 [2024-12-05 12:14:09.560839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.440 qpair failed and we were unable to recover it. 00:30:35.440 [2024-12-05 12:14:09.561109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.440 [2024-12-05 12:14:09.561140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.440 qpair failed and we were unable to recover it. 00:30:35.440 [2024-12-05 12:14:09.561273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.440 [2024-12-05 12:14:09.561305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.440 qpair failed and we were unable to recover it. 
00:30:35.440 [2024-12-05 12:14:09.561494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.440 [2024-12-05 12:14:09.561526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.440 qpair failed and we were unable to recover it. 00:30:35.440 [2024-12-05 12:14:09.561785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.440 [2024-12-05 12:14:09.561818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.440 qpair failed and we were unable to recover it. 00:30:35.440 [2024-12-05 12:14:09.562006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.440 [2024-12-05 12:14:09.562037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.440 qpair failed and we were unable to recover it. 00:30:35.440 [2024-12-05 12:14:09.562245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.440 [2024-12-05 12:14:09.562276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.440 qpair failed and we were unable to recover it. 00:30:35.440 [2024-12-05 12:14:09.562442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.440 [2024-12-05 12:14:09.562475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.440 qpair failed and we were unable to recover it. 
00:30:35.440 [2024-12-05 12:14:09.562742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.440 [2024-12-05 12:14:09.562774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.440 qpair failed and we were unable to recover it. 00:30:35.440 [2024-12-05 12:14:09.562948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.440 [2024-12-05 12:14:09.562979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.440 qpair failed and we were unable to recover it. 00:30:35.440 [2024-12-05 12:14:09.563243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.440 [2024-12-05 12:14:09.563274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.440 qpair failed and we were unable to recover it. 00:30:35.440 [2024-12-05 12:14:09.563445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.440 [2024-12-05 12:14:09.563478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.440 qpair failed and we were unable to recover it. 00:30:35.440 [2024-12-05 12:14:09.563717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.440 [2024-12-05 12:14:09.563753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.440 qpair failed and we were unable to recover it. 
00:30:35.440 [2024-12-05 12:14:09.563993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.440 [2024-12-05 12:14:09.564024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.440 qpair failed and we were unable to recover it. 00:30:35.440 [2024-12-05 12:14:09.564136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.440 [2024-12-05 12:14:09.564168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.440 qpair failed and we were unable to recover it. 00:30:35.440 [2024-12-05 12:14:09.564434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.440 [2024-12-05 12:14:09.564466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.440 qpair failed and we were unable to recover it. 00:30:35.440 [2024-12-05 12:14:09.564730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.440 [2024-12-05 12:14:09.564761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.440 qpair failed and we were unable to recover it. 00:30:35.440 [2024-12-05 12:14:09.564946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.440 [2024-12-05 12:14:09.564978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.440 qpair failed and we were unable to recover it. 
00:30:35.440 [2024-12-05 12:14:09.565111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.440 [2024-12-05 12:14:09.565142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.440 qpair failed and we were unable to recover it. 00:30:35.440 [2024-12-05 12:14:09.565328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.440 [2024-12-05 12:14:09.565360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.440 qpair failed and we were unable to recover it. 00:30:35.440 [2024-12-05 12:14:09.565558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.440 [2024-12-05 12:14:09.565590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.440 qpair failed and we were unable to recover it. 00:30:35.440 [2024-12-05 12:14:09.565725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.440 [2024-12-05 12:14:09.565756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.440 qpair failed and we were unable to recover it. 00:30:35.440 [2024-12-05 12:14:09.565934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.440 [2024-12-05 12:14:09.565966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.440 qpair failed and we were unable to recover it. 
00:30:35.440 [2024-12-05 12:14:09.566096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.440 [2024-12-05 12:14:09.566127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.440 qpair failed and we were unable to recover it. 00:30:35.440 [2024-12-05 12:14:09.566300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.440 [2024-12-05 12:14:09.566332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.440 qpair failed and we were unable to recover it. 00:30:35.440 [2024-12-05 12:14:09.566452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.440 [2024-12-05 12:14:09.566484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.440 qpair failed and we were unable to recover it. 00:30:35.440 [2024-12-05 12:14:09.566779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.440 [2024-12-05 12:14:09.566810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.440 qpair failed and we were unable to recover it. 00:30:35.440 [2024-12-05 12:14:09.567047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.440 [2024-12-05 12:14:09.567078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.440 qpair failed and we were unable to recover it. 
00:30:35.440 [2024-12-05 12:14:09.567259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.440 [2024-12-05 12:14:09.567290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.440 qpair failed and we were unable to recover it. 00:30:35.441 [2024-12-05 12:14:09.567411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.441 [2024-12-05 12:14:09.567444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.441 qpair failed and we were unable to recover it. 00:30:35.441 [2024-12-05 12:14:09.567640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.441 [2024-12-05 12:14:09.567671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.441 qpair failed and we were unable to recover it. 00:30:35.441 [2024-12-05 12:14:09.567777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.441 [2024-12-05 12:14:09.567808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.441 qpair failed and we were unable to recover it. 00:30:35.441 [2024-12-05 12:14:09.568068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.441 [2024-12-05 12:14:09.568099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.441 qpair failed and we were unable to recover it. 
00:30:35.441 [2024-12-05 12:14:09.568234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.441 [2024-12-05 12:14:09.568266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.441 qpair failed and we were unable to recover it. 00:30:35.441 [2024-12-05 12:14:09.568445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.441 [2024-12-05 12:14:09.568478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.441 qpair failed and we were unable to recover it. 00:30:35.441 [2024-12-05 12:14:09.568616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.441 [2024-12-05 12:14:09.568646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.441 qpair failed and we were unable to recover it. 00:30:35.441 [2024-12-05 12:14:09.568845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.441 [2024-12-05 12:14:09.568876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.441 qpair failed and we were unable to recover it. 00:30:35.441 [2024-12-05 12:14:09.569088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.441 [2024-12-05 12:14:09.569120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.441 qpair failed and we were unable to recover it. 
00:30:35.441 [2024-12-05 12:14:09.569310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.441 [2024-12-05 12:14:09.569341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.441 qpair failed and we were unable to recover it. 00:30:35.441 [2024-12-05 12:14:09.569536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.441 [2024-12-05 12:14:09.569569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.441 qpair failed and we were unable to recover it. 00:30:35.441 [2024-12-05 12:14:09.569762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.441 [2024-12-05 12:14:09.569794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.441 qpair failed and we were unable to recover it. 00:30:35.441 [2024-12-05 12:14:09.569992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.441 [2024-12-05 12:14:09.570023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.441 qpair failed and we were unable to recover it. 00:30:35.441 [2024-12-05 12:14:09.570216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.441 [2024-12-05 12:14:09.570247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.441 qpair failed and we were unable to recover it. 
00:30:35.441 [2024-12-05 12:14:09.570432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.441 [2024-12-05 12:14:09.570464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.441 qpair failed and we were unable to recover it.
00:30:35.721 [2024-12-05 12:14:09.578658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.721 [2024-12-05 12:14:09.578729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:35.721 qpair failed and we were unable to recover it.
00:30:35.722 [2024-12-05 12:14:09.588011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.722 [2024-12-05 12:14:09.588082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.722 qpair failed and we were unable to recover it.
00:30:35.723 [2024-12-05 12:14:09.596349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.723 [2024-12-05 12:14:09.596389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.723 qpair failed and we were unable to recover it. 00:30:35.723 [2024-12-05 12:14:09.596651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.723 [2024-12-05 12:14:09.596683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.723 qpair failed and we were unable to recover it. 00:30:35.723 [2024-12-05 12:14:09.596952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.723 [2024-12-05 12:14:09.596984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.723 qpair failed and we were unable to recover it. 00:30:35.723 [2024-12-05 12:14:09.597114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.723 [2024-12-05 12:14:09.597145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.723 qpair failed and we were unable to recover it. 00:30:35.723 [2024-12-05 12:14:09.597259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.723 [2024-12-05 12:14:09.597291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.723 qpair failed and we were unable to recover it. 
00:30:35.723 [2024-12-05 12:14:09.597477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.723 [2024-12-05 12:14:09.597510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.723 qpair failed and we were unable to recover it. 00:30:35.723 [2024-12-05 12:14:09.597689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.723 [2024-12-05 12:14:09.597722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.723 qpair failed and we were unable to recover it. 00:30:35.723 [2024-12-05 12:14:09.597913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.723 [2024-12-05 12:14:09.597945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.723 qpair failed and we were unable to recover it. 00:30:35.723 [2024-12-05 12:14:09.598053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.723 [2024-12-05 12:14:09.598083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.723 qpair failed and we were unable to recover it. 00:30:35.723 [2024-12-05 12:14:09.598345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.723 [2024-12-05 12:14:09.598389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.723 qpair failed and we were unable to recover it. 
00:30:35.723 [2024-12-05 12:14:09.598509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.723 [2024-12-05 12:14:09.598540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.723 qpair failed and we were unable to recover it. 00:30:35.723 [2024-12-05 12:14:09.598783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.723 [2024-12-05 12:14:09.598815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.723 qpair failed and we were unable to recover it. 00:30:35.723 [2024-12-05 12:14:09.598949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.724 [2024-12-05 12:14:09.598981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.724 qpair failed and we were unable to recover it. 00:30:35.724 [2024-12-05 12:14:09.599087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.724 [2024-12-05 12:14:09.599118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.724 qpair failed and we were unable to recover it. 00:30:35.724 [2024-12-05 12:14:09.599300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.724 [2024-12-05 12:14:09.599332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.724 qpair failed and we were unable to recover it. 
00:30:35.724 [2024-12-05 12:14:09.599524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.724 [2024-12-05 12:14:09.599558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.724 qpair failed and we were unable to recover it. 00:30:35.724 [2024-12-05 12:14:09.599757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.724 [2024-12-05 12:14:09.599789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.724 qpair failed and we were unable to recover it. 00:30:35.724 [2024-12-05 12:14:09.599921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.724 [2024-12-05 12:14:09.599953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.724 qpair failed and we were unable to recover it. 00:30:35.724 [2024-12-05 12:14:09.600062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.724 [2024-12-05 12:14:09.600094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.724 qpair failed and we were unable to recover it. 00:30:35.724 [2024-12-05 12:14:09.600330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.724 [2024-12-05 12:14:09.600361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.724 qpair failed and we were unable to recover it. 
00:30:35.724 [2024-12-05 12:14:09.600489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.724 [2024-12-05 12:14:09.600520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.724 qpair failed and we were unable to recover it. 00:30:35.724 [2024-12-05 12:14:09.600630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.724 [2024-12-05 12:14:09.600662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.724 qpair failed and we were unable to recover it. 00:30:35.724 [2024-12-05 12:14:09.600831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.724 [2024-12-05 12:14:09.600862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.724 qpair failed and we were unable to recover it. 00:30:35.724 [2024-12-05 12:14:09.601033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.724 [2024-12-05 12:14:09.601065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.724 qpair failed and we were unable to recover it. 00:30:35.724 [2024-12-05 12:14:09.601254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.724 [2024-12-05 12:14:09.601285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.724 qpair failed and we were unable to recover it. 
00:30:35.724 [2024-12-05 12:14:09.601556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.724 [2024-12-05 12:14:09.601588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.724 qpair failed and we were unable to recover it. 00:30:35.724 [2024-12-05 12:14:09.601854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.724 [2024-12-05 12:14:09.601886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.724 qpair failed and we were unable to recover it. 00:30:35.724 [2024-12-05 12:14:09.602074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.724 [2024-12-05 12:14:09.602105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.724 qpair failed and we were unable to recover it. 00:30:35.724 [2024-12-05 12:14:09.602284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.724 [2024-12-05 12:14:09.602315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.724 qpair failed and we were unable to recover it. 00:30:35.724 [2024-12-05 12:14:09.602509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.724 [2024-12-05 12:14:09.602542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.724 qpair failed and we were unable to recover it. 
00:30:35.724 [2024-12-05 12:14:09.602680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.724 [2024-12-05 12:14:09.602711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.724 qpair failed and we were unable to recover it. 00:30:35.724 [2024-12-05 12:14:09.602824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.724 [2024-12-05 12:14:09.602856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.724 qpair failed and we were unable to recover it. 00:30:35.724 [2024-12-05 12:14:09.602996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.724 [2024-12-05 12:14:09.603028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.724 qpair failed and we were unable to recover it. 00:30:35.724 [2024-12-05 12:14:09.603209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.724 [2024-12-05 12:14:09.603239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.724 qpair failed and we were unable to recover it. 00:30:35.724 [2024-12-05 12:14:09.603443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.724 [2024-12-05 12:14:09.603475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.724 qpair failed and we were unable to recover it. 
00:30:35.724 [2024-12-05 12:14:09.603727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.724 [2024-12-05 12:14:09.603759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.724 qpair failed and we were unable to recover it. 00:30:35.724 [2024-12-05 12:14:09.603957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.724 [2024-12-05 12:14:09.603989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.724 qpair failed and we were unable to recover it. 00:30:35.724 [2024-12-05 12:14:09.604238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.724 [2024-12-05 12:14:09.604269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.724 qpair failed and we were unable to recover it. 00:30:35.724 [2024-12-05 12:14:09.604399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.724 [2024-12-05 12:14:09.604432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.724 qpair failed and we were unable to recover it. 00:30:35.724 [2024-12-05 12:14:09.604692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.724 [2024-12-05 12:14:09.604723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.724 qpair failed and we were unable to recover it. 
00:30:35.724 [2024-12-05 12:14:09.604925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.724 [2024-12-05 12:14:09.604957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.724 qpair failed and we were unable to recover it. 00:30:35.724 [2024-12-05 12:14:09.605136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.724 [2024-12-05 12:14:09.605168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.724 qpair failed and we were unable to recover it. 00:30:35.724 [2024-12-05 12:14:09.605343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.724 [2024-12-05 12:14:09.605394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.724 qpair failed and we were unable to recover it. 00:30:35.724 [2024-12-05 12:14:09.605580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.724 [2024-12-05 12:14:09.605612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.724 qpair failed and we were unable to recover it. 00:30:35.724 [2024-12-05 12:14:09.605795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.724 [2024-12-05 12:14:09.605828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.724 qpair failed and we were unable to recover it. 
00:30:35.724 [2024-12-05 12:14:09.606011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.724 [2024-12-05 12:14:09.606044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.724 qpair failed and we were unable to recover it. 00:30:35.724 [2024-12-05 12:14:09.606232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.724 [2024-12-05 12:14:09.606265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.724 qpair failed and we were unable to recover it. 00:30:35.724 [2024-12-05 12:14:09.606447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.724 [2024-12-05 12:14:09.606480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.724 qpair failed and we were unable to recover it. 00:30:35.724 [2024-12-05 12:14:09.606689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.724 [2024-12-05 12:14:09.606721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.725 qpair failed and we were unable to recover it. 00:30:35.725 [2024-12-05 12:14:09.606891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.725 [2024-12-05 12:14:09.606923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.725 qpair failed and we were unable to recover it. 
00:30:35.725 [2024-12-05 12:14:09.607053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.725 [2024-12-05 12:14:09.607084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.725 qpair failed and we were unable to recover it. 00:30:35.725 [2024-12-05 12:14:09.607283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.725 [2024-12-05 12:14:09.607315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.725 qpair failed and we were unable to recover it. 00:30:35.725 [2024-12-05 12:14:09.607425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.725 [2024-12-05 12:14:09.607457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.725 qpair failed and we were unable to recover it. 00:30:35.725 [2024-12-05 12:14:09.607630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.725 [2024-12-05 12:14:09.607662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.725 qpair failed and we were unable to recover it. 00:30:35.725 [2024-12-05 12:14:09.607921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.725 [2024-12-05 12:14:09.607953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.725 qpair failed and we were unable to recover it. 
00:30:35.725 [2024-12-05 12:14:09.608164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.725 [2024-12-05 12:14:09.608195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.725 qpair failed and we were unable to recover it. 00:30:35.725 [2024-12-05 12:14:09.608359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.725 [2024-12-05 12:14:09.608405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.725 qpair failed and we were unable to recover it. 00:30:35.725 [2024-12-05 12:14:09.608600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.725 [2024-12-05 12:14:09.608632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.725 qpair failed and we were unable to recover it. 00:30:35.725 [2024-12-05 12:14:09.608804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.725 [2024-12-05 12:14:09.608836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.725 qpair failed and we were unable to recover it. 00:30:35.725 [2024-12-05 12:14:09.609023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.725 [2024-12-05 12:14:09.609054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.725 qpair failed and we were unable to recover it. 
00:30:35.725 [2024-12-05 12:14:09.609177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.725 [2024-12-05 12:14:09.609208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.725 qpair failed and we were unable to recover it. 00:30:35.725 [2024-12-05 12:14:09.609397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.725 [2024-12-05 12:14:09.609430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.725 qpair failed and we were unable to recover it. 00:30:35.725 [2024-12-05 12:14:09.609617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.725 [2024-12-05 12:14:09.609650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.725 qpair failed and we were unable to recover it. 00:30:35.725 [2024-12-05 12:14:09.609822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.725 [2024-12-05 12:14:09.609853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.725 qpair failed and we were unable to recover it. 00:30:35.725 [2024-12-05 12:14:09.609980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.725 [2024-12-05 12:14:09.610011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.725 qpair failed and we were unable to recover it. 
00:30:35.725 [2024-12-05 12:14:09.610144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.725 [2024-12-05 12:14:09.610175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.725 qpair failed and we were unable to recover it. 00:30:35.725 [2024-12-05 12:14:09.610292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.725 [2024-12-05 12:14:09.610325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.725 qpair failed and we were unable to recover it. 00:30:35.725 [2024-12-05 12:14:09.610538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.725 [2024-12-05 12:14:09.610571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.725 qpair failed and we were unable to recover it. 00:30:35.725 [2024-12-05 12:14:09.610743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.725 [2024-12-05 12:14:09.610774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.725 qpair failed and we were unable to recover it. 00:30:35.725 [2024-12-05 12:14:09.610888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.725 [2024-12-05 12:14:09.610919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.725 qpair failed and we were unable to recover it. 
00:30:35.725 [2024-12-05 12:14:09.611029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.725 [2024-12-05 12:14:09.611061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.725 qpair failed and we were unable to recover it. 00:30:35.725 [2024-12-05 12:14:09.611259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.725 [2024-12-05 12:14:09.611291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.725 qpair failed and we were unable to recover it. 00:30:35.725 [2024-12-05 12:14:09.611432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.725 [2024-12-05 12:14:09.611464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.725 qpair failed and we were unable to recover it. 00:30:35.725 [2024-12-05 12:14:09.611589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.725 [2024-12-05 12:14:09.611621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.725 qpair failed and we were unable to recover it. 00:30:35.725 [2024-12-05 12:14:09.611861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.725 [2024-12-05 12:14:09.611894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.725 qpair failed and we were unable to recover it. 
00:30:35.725 [2024-12-05 12:14:09.612129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.725 [2024-12-05 12:14:09.612159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.725 qpair failed and we were unable to recover it.
[the triplet above -- connect() failed, errno = 111 / sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. -- repeats with only the timestamps changing, from 2024-12-05 12:14:09.612438 through 12:14:09.627340]
00:30:35.727 [2024-12-05 12:14:09.627571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.727 [2024-12-05 12:14:09.627642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:35.727 qpair failed and we were unable to recover it.
[the triplet above -- connect() failed, errno = 111 / sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. -- repeats with only the timestamps changing, from 2024-12-05 12:14:09.627784 through 12:14:09.637537]
00:30:35.729 [2024-12-05 12:14:09.637638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.729 [2024-12-05 12:14:09.637670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.729 qpair failed and we were unable to recover it. 00:30:35.729 [2024-12-05 12:14:09.637877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.729 [2024-12-05 12:14:09.637914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.729 qpair failed and we were unable to recover it. 00:30:35.729 [2024-12-05 12:14:09.638042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.729 [2024-12-05 12:14:09.638073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.729 qpair failed and we were unable to recover it. 00:30:35.729 [2024-12-05 12:14:09.638217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.729 [2024-12-05 12:14:09.638249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.729 qpair failed and we were unable to recover it. 00:30:35.729 [2024-12-05 12:14:09.638442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.729 [2024-12-05 12:14:09.638475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.729 qpair failed and we were unable to recover it. 
00:30:35.729 [2024-12-05 12:14:09.638594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.729 [2024-12-05 12:14:09.638625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.729 qpair failed and we were unable to recover it. 00:30:35.729 [2024-12-05 12:14:09.638866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.729 [2024-12-05 12:14:09.638897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.729 qpair failed and we were unable to recover it. 00:30:35.729 [2024-12-05 12:14:09.639137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.729 [2024-12-05 12:14:09.639167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.729 qpair failed and we were unable to recover it. 00:30:35.729 [2024-12-05 12:14:09.639275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.729 [2024-12-05 12:14:09.639307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.729 qpair failed and we were unable to recover it. 00:30:35.729 [2024-12-05 12:14:09.639503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.729 [2024-12-05 12:14:09.639535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.729 qpair failed and we were unable to recover it. 
00:30:35.729 [2024-12-05 12:14:09.639729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.729 [2024-12-05 12:14:09.639760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.729 qpair failed and we were unable to recover it. 00:30:35.729 [2024-12-05 12:14:09.640020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.729 [2024-12-05 12:14:09.640051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.729 qpair failed and we were unable to recover it. 00:30:35.729 [2024-12-05 12:14:09.640238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.729 [2024-12-05 12:14:09.640269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.729 qpair failed and we were unable to recover it. 00:30:35.729 [2024-12-05 12:14:09.640509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.729 [2024-12-05 12:14:09.640540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.729 qpair failed and we were unable to recover it. 00:30:35.729 [2024-12-05 12:14:09.640668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.729 [2024-12-05 12:14:09.640700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.729 qpair failed and we were unable to recover it. 
00:30:35.729 [2024-12-05 12:14:09.640895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.729 [2024-12-05 12:14:09.640927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.729 qpair failed and we were unable to recover it. 00:30:35.729 [2024-12-05 12:14:09.641106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.729 [2024-12-05 12:14:09.641137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.729 qpair failed and we were unable to recover it. 00:30:35.729 [2024-12-05 12:14:09.641384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.729 [2024-12-05 12:14:09.641417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.729 qpair failed and we were unable to recover it. 00:30:35.729 [2024-12-05 12:14:09.641588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.729 [2024-12-05 12:14:09.641619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.729 qpair failed and we were unable to recover it. 00:30:35.729 [2024-12-05 12:14:09.641743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.729 [2024-12-05 12:14:09.641775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.729 qpair failed and we were unable to recover it. 
00:30:35.729 [2024-12-05 12:14:09.641957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.729 [2024-12-05 12:14:09.641988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.729 qpair failed and we were unable to recover it. 00:30:35.729 [2024-12-05 12:14:09.642112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.729 [2024-12-05 12:14:09.642143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.729 qpair failed and we were unable to recover it. 00:30:35.729 [2024-12-05 12:14:09.642328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.729 [2024-12-05 12:14:09.642358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.729 qpair failed and we were unable to recover it. 00:30:35.729 [2024-12-05 12:14:09.642611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.729 [2024-12-05 12:14:09.642643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.729 qpair failed and we were unable to recover it. 00:30:35.729 [2024-12-05 12:14:09.642757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.729 [2024-12-05 12:14:09.642789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.729 qpair failed and we were unable to recover it. 
00:30:35.729 [2024-12-05 12:14:09.642969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.729 [2024-12-05 12:14:09.642999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.729 qpair failed and we were unable to recover it. 00:30:35.729 [2024-12-05 12:14:09.643198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.729 [2024-12-05 12:14:09.643230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.729 qpair failed and we were unable to recover it. 00:30:35.729 [2024-12-05 12:14:09.643417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.729 [2024-12-05 12:14:09.643450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:35.729 qpair failed and we were unable to recover it. 00:30:35.729 [2024-12-05 12:14:09.643631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.730 [2024-12-05 12:14:09.643703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.730 qpair failed and we were unable to recover it. 00:30:35.730 [2024-12-05 12:14:09.643859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.730 [2024-12-05 12:14:09.643894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.730 qpair failed and we were unable to recover it. 
00:30:35.730 [2024-12-05 12:14:09.644069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.730 [2024-12-05 12:14:09.644100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.730 qpair failed and we were unable to recover it. 00:30:35.730 [2024-12-05 12:14:09.644285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.730 [2024-12-05 12:14:09.644318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.730 qpair failed and we were unable to recover it. 00:30:35.730 [2024-12-05 12:14:09.644445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.730 [2024-12-05 12:14:09.644479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.730 qpair failed and we were unable to recover it. 00:30:35.730 [2024-12-05 12:14:09.644741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.730 [2024-12-05 12:14:09.644773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.730 qpair failed and we were unable to recover it. 00:30:35.730 [2024-12-05 12:14:09.644955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.730 [2024-12-05 12:14:09.644986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.730 qpair failed and we were unable to recover it. 
00:30:35.730 [2024-12-05 12:14:09.645171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.730 [2024-12-05 12:14:09.645202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.730 qpair failed and we were unable to recover it. 00:30:35.730 [2024-12-05 12:14:09.645331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.730 [2024-12-05 12:14:09.645363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.730 qpair failed and we were unable to recover it. 00:30:35.730 [2024-12-05 12:14:09.645557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.730 [2024-12-05 12:14:09.645589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.730 qpair failed and we were unable to recover it. 00:30:35.730 [2024-12-05 12:14:09.645716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.730 [2024-12-05 12:14:09.645748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.730 qpair failed and we were unable to recover it. 00:30:35.730 [2024-12-05 12:14:09.645927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.730 [2024-12-05 12:14:09.645958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.730 qpair failed and we were unable to recover it. 
00:30:35.730 [2024-12-05 12:14:09.646149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.730 [2024-12-05 12:14:09.646180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.730 qpair failed and we were unable to recover it. 00:30:35.730 [2024-12-05 12:14:09.646439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.730 [2024-12-05 12:14:09.646472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.730 qpair failed and we were unable to recover it. 00:30:35.730 [2024-12-05 12:14:09.646749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.730 [2024-12-05 12:14:09.646781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.730 qpair failed and we were unable to recover it. 00:30:35.730 [2024-12-05 12:14:09.647033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.730 [2024-12-05 12:14:09.647065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.730 qpair failed and we were unable to recover it. 00:30:35.730 [2024-12-05 12:14:09.647270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.730 [2024-12-05 12:14:09.647301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.730 qpair failed and we were unable to recover it. 
00:30:35.730 [2024-12-05 12:14:09.647424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.730 [2024-12-05 12:14:09.647466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.730 qpair failed and we were unable to recover it. 00:30:35.730 [2024-12-05 12:14:09.647652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.730 [2024-12-05 12:14:09.647683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.730 qpair failed and we were unable to recover it. 00:30:35.730 [2024-12-05 12:14:09.647899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.730 [2024-12-05 12:14:09.647931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.730 qpair failed and we were unable to recover it. 00:30:35.730 [2024-12-05 12:14:09.648042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.730 [2024-12-05 12:14:09.648073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.730 qpair failed and we were unable to recover it. 00:30:35.730 [2024-12-05 12:14:09.648278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.730 [2024-12-05 12:14:09.648308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.730 qpair failed and we were unable to recover it. 
00:30:35.730 [2024-12-05 12:14:09.648637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.730 [2024-12-05 12:14:09.648671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.730 qpair failed and we were unable to recover it. 00:30:35.730 [2024-12-05 12:14:09.648924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.730 [2024-12-05 12:14:09.648955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.730 qpair failed and we were unable to recover it. 00:30:35.730 [2024-12-05 12:14:09.649094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.730 [2024-12-05 12:14:09.649126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.730 qpair failed and we were unable to recover it. 00:30:35.730 [2024-12-05 12:14:09.649312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.730 [2024-12-05 12:14:09.649342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.730 qpair failed and we were unable to recover it. 00:30:35.730 [2024-12-05 12:14:09.649623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.730 [2024-12-05 12:14:09.649656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.730 qpair failed and we were unable to recover it. 
00:30:35.730 [2024-12-05 12:14:09.649903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.730 [2024-12-05 12:14:09.649941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.730 qpair failed and we were unable to recover it. 00:30:35.730 [2024-12-05 12:14:09.650112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.730 [2024-12-05 12:14:09.650143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.730 qpair failed and we were unable to recover it. 00:30:35.730 [2024-12-05 12:14:09.650327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.730 [2024-12-05 12:14:09.650358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.730 qpair failed and we were unable to recover it. 00:30:35.730 [2024-12-05 12:14:09.650573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.730 [2024-12-05 12:14:09.650605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.730 qpair failed and we were unable to recover it. 00:30:35.730 [2024-12-05 12:14:09.650816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.731 [2024-12-05 12:14:09.650848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.731 qpair failed and we were unable to recover it. 
00:30:35.731 [2024-12-05 12:14:09.651053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.731 [2024-12-05 12:14:09.651085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.731 qpair failed and we were unable to recover it. 00:30:35.731 [2024-12-05 12:14:09.651207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.731 [2024-12-05 12:14:09.651239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.731 qpair failed and we were unable to recover it. 00:30:35.731 [2024-12-05 12:14:09.651364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.731 [2024-12-05 12:14:09.651406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.731 qpair failed and we were unable to recover it. 00:30:35.731 [2024-12-05 12:14:09.651513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.731 [2024-12-05 12:14:09.651544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.731 qpair failed and we were unable to recover it. 00:30:35.731 [2024-12-05 12:14:09.651672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.731 [2024-12-05 12:14:09.651704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.731 qpair failed and we were unable to recover it. 
00:30:35.731 [2024-12-05 12:14:09.651873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.731 [2024-12-05 12:14:09.651905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.731 qpair failed and we were unable to recover it. 00:30:35.731 [2024-12-05 12:14:09.652090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.731 [2024-12-05 12:14:09.652121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.731 qpair failed and we were unable to recover it. 00:30:35.731 [2024-12-05 12:14:09.652316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.731 [2024-12-05 12:14:09.652348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.731 qpair failed and we were unable to recover it. 00:30:35.731 [2024-12-05 12:14:09.652543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.731 [2024-12-05 12:14:09.652575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.731 qpair failed and we were unable to recover it. 00:30:35.731 [2024-12-05 12:14:09.652755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.731 [2024-12-05 12:14:09.652786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.731 qpair failed and we were unable to recover it. 
00:30:35.731 [2024-12-05 12:14:09.652969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.731 [2024-12-05 12:14:09.653000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.731 qpair failed and we were unable to recover it. 00:30:35.731 [2024-12-05 12:14:09.653179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.731 [2024-12-05 12:14:09.653210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.731 qpair failed and we were unable to recover it. 00:30:35.731 [2024-12-05 12:14:09.653466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.731 [2024-12-05 12:14:09.653499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.731 qpair failed and we were unable to recover it. 00:30:35.731 [2024-12-05 12:14:09.653738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.731 [2024-12-05 12:14:09.653770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.731 qpair failed and we were unable to recover it. 00:30:35.731 [2024-12-05 12:14:09.653964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.731 [2024-12-05 12:14:09.653995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.731 qpair failed and we were unable to recover it. 
00:30:35.731 [2024-12-05 12:14:09.654195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.731 [2024-12-05 12:14:09.654226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.731 qpair failed and we were unable to recover it.
00:30:35.731 (last message repeated for 20 further connection attempts on tqpair=0xc4cbe0, 12:14:09.654466 through 12:14:09.658484)
00:30:35.732 [2024-12-05 12:14:09.658765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.732 [2024-12-05 12:14:09.658836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.732 qpair failed and we were unable to recover it.
00:30:35.734 (last message repeated for 93 further connection attempts on tqpair=0x7fc424000b90, 12:14:09.659045 through 12:14:09.681589)
00:30:35.734 [2024-12-05 12:14:09.681763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.734 [2024-12-05 12:14:09.681795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.734 qpair failed and we were unable to recover it. 00:30:35.734 [2024-12-05 12:14:09.682011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.734 [2024-12-05 12:14:09.682042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.734 qpair failed and we were unable to recover it. 00:30:35.734 [2024-12-05 12:14:09.682249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.734 [2024-12-05 12:14:09.682280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.734 qpair failed and we were unable to recover it. 00:30:35.734 [2024-12-05 12:14:09.682405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.734 [2024-12-05 12:14:09.682436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.734 qpair failed and we were unable to recover it. 00:30:35.734 [2024-12-05 12:14:09.682696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.734 [2024-12-05 12:14:09.682728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.734 qpair failed and we were unable to recover it. 
00:30:35.734 [2024-12-05 12:14:09.682924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.734 [2024-12-05 12:14:09.682955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.734 qpair failed and we were unable to recover it. 00:30:35.734 [2024-12-05 12:14:09.683135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.734 [2024-12-05 12:14:09.683166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.734 qpair failed and we were unable to recover it. 00:30:35.734 [2024-12-05 12:14:09.683427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.734 [2024-12-05 12:14:09.683460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.734 qpair failed and we were unable to recover it. 00:30:35.734 [2024-12-05 12:14:09.683653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.734 [2024-12-05 12:14:09.683685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.734 qpair failed and we were unable to recover it. 00:30:35.734 [2024-12-05 12:14:09.683966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.734 [2024-12-05 12:14:09.683998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.734 qpair failed and we were unable to recover it. 
00:30:35.734 [2024-12-05 12:14:09.684227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.734 [2024-12-05 12:14:09.684258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.734 qpair failed and we were unable to recover it. 00:30:35.735 [2024-12-05 12:14:09.684552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.735 [2024-12-05 12:14:09.684584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.735 qpair failed and we were unable to recover it. 00:30:35.735 [2024-12-05 12:14:09.684820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.735 [2024-12-05 12:14:09.684851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.735 qpair failed and we were unable to recover it. 00:30:35.735 [2024-12-05 12:14:09.685119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.735 [2024-12-05 12:14:09.685150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.735 qpair failed and we were unable to recover it. 00:30:35.735 [2024-12-05 12:14:09.685276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.735 [2024-12-05 12:14:09.685306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.735 qpair failed and we were unable to recover it. 
00:30:35.735 [2024-12-05 12:14:09.685523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.735 [2024-12-05 12:14:09.685556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.735 qpair failed and we were unable to recover it. 00:30:35.735 [2024-12-05 12:14:09.685740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.735 [2024-12-05 12:14:09.685771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.735 qpair failed and we were unable to recover it. 00:30:35.735 [2024-12-05 12:14:09.686080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.735 [2024-12-05 12:14:09.686112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.735 qpair failed and we were unable to recover it. 00:30:35.735 [2024-12-05 12:14:09.686362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.735 [2024-12-05 12:14:09.686403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.735 qpair failed and we were unable to recover it. 00:30:35.735 [2024-12-05 12:14:09.686605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.735 [2024-12-05 12:14:09.686636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.735 qpair failed and we were unable to recover it. 
00:30:35.735 [2024-12-05 12:14:09.686919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.735 [2024-12-05 12:14:09.686949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.735 qpair failed and we were unable to recover it. 00:30:35.735 [2024-12-05 12:14:09.687216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.735 [2024-12-05 12:14:09.687247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.735 qpair failed and we were unable to recover it. 00:30:35.735 [2024-12-05 12:14:09.687544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.735 [2024-12-05 12:14:09.687578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.735 qpair failed and we were unable to recover it. 00:30:35.735 [2024-12-05 12:14:09.687767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.735 [2024-12-05 12:14:09.687799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.735 qpair failed and we were unable to recover it. 00:30:35.735 [2024-12-05 12:14:09.687932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.735 [2024-12-05 12:14:09.687963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.735 qpair failed and we were unable to recover it. 
00:30:35.735 [2024-12-05 12:14:09.688168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.735 [2024-12-05 12:14:09.688199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.735 qpair failed and we were unable to recover it. 00:30:35.735 [2024-12-05 12:14:09.688383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.735 [2024-12-05 12:14:09.688422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.735 qpair failed and we were unable to recover it. 00:30:35.735 [2024-12-05 12:14:09.688685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.735 [2024-12-05 12:14:09.688718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.735 qpair failed and we were unable to recover it. 00:30:35.735 [2024-12-05 12:14:09.688916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.735 [2024-12-05 12:14:09.688946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.735 qpair failed and we were unable to recover it. 00:30:35.735 [2024-12-05 12:14:09.689124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.735 [2024-12-05 12:14:09.689155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.735 qpair failed and we were unable to recover it. 
00:30:35.735 [2024-12-05 12:14:09.689286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.735 [2024-12-05 12:14:09.689317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.735 qpair failed and we were unable to recover it. 00:30:35.735 [2024-12-05 12:14:09.689514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.735 [2024-12-05 12:14:09.689547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.735 qpair failed and we were unable to recover it. 00:30:35.735 [2024-12-05 12:14:09.689811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.735 [2024-12-05 12:14:09.689843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.735 qpair failed and we were unable to recover it. 00:30:35.735 [2024-12-05 12:14:09.690111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.735 [2024-12-05 12:14:09.690141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.735 qpair failed and we were unable to recover it. 00:30:35.735 [2024-12-05 12:14:09.690333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.735 [2024-12-05 12:14:09.690364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.735 qpair failed and we were unable to recover it. 
00:30:35.735 [2024-12-05 12:14:09.690641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.735 [2024-12-05 12:14:09.690672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.735 qpair failed and we were unable to recover it. 00:30:35.735 [2024-12-05 12:14:09.690944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.735 [2024-12-05 12:14:09.690975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.735 qpair failed and we were unable to recover it. 00:30:35.735 [2024-12-05 12:14:09.691098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.735 [2024-12-05 12:14:09.691129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.735 qpair failed and we were unable to recover it. 00:30:35.735 [2024-12-05 12:14:09.691318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.735 [2024-12-05 12:14:09.691351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.735 qpair failed and we were unable to recover it. 00:30:35.735 [2024-12-05 12:14:09.691568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.735 [2024-12-05 12:14:09.691601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.735 qpair failed and we were unable to recover it. 
00:30:35.735 [2024-12-05 12:14:09.691798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.735 [2024-12-05 12:14:09.691829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.735 qpair failed and we were unable to recover it. 00:30:35.735 [2024-12-05 12:14:09.692073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.735 [2024-12-05 12:14:09.692105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.735 qpair failed and we were unable to recover it. 00:30:35.735 [2024-12-05 12:14:09.692297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.735 [2024-12-05 12:14:09.692328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.735 qpair failed and we were unable to recover it. 00:30:35.736 [2024-12-05 12:14:09.692606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.736 [2024-12-05 12:14:09.692638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.736 qpair failed and we were unable to recover it. 00:30:35.736 [2024-12-05 12:14:09.692840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.736 [2024-12-05 12:14:09.692872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.736 qpair failed and we were unable to recover it. 
00:30:35.736 [2024-12-05 12:14:09.693118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.736 [2024-12-05 12:14:09.693150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.736 qpair failed and we were unable to recover it. 00:30:35.736 [2024-12-05 12:14:09.693359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.736 [2024-12-05 12:14:09.693404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.736 qpair failed and we were unable to recover it. 00:30:35.736 [2024-12-05 12:14:09.693664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.736 [2024-12-05 12:14:09.693695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.736 qpair failed and we were unable to recover it. 00:30:35.736 [2024-12-05 12:14:09.693815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.736 [2024-12-05 12:14:09.693847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.736 qpair failed and we were unable to recover it. 00:30:35.736 [2024-12-05 12:14:09.694041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.736 [2024-12-05 12:14:09.694073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.736 qpair failed and we were unable to recover it. 
00:30:35.736 [2024-12-05 12:14:09.694340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.736 [2024-12-05 12:14:09.694382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.736 qpair failed and we were unable to recover it. 00:30:35.736 [2024-12-05 12:14:09.694652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.736 [2024-12-05 12:14:09.694684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.736 qpair failed and we were unable to recover it. 00:30:35.736 [2024-12-05 12:14:09.694965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.736 [2024-12-05 12:14:09.694996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.736 qpair failed and we were unable to recover it. 00:30:35.736 [2024-12-05 12:14:09.695273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.736 [2024-12-05 12:14:09.695304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.736 qpair failed and we were unable to recover it. 00:30:35.736 [2024-12-05 12:14:09.695584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.736 [2024-12-05 12:14:09.695617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.736 qpair failed and we were unable to recover it. 
00:30:35.736 [2024-12-05 12:14:09.695825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.736 [2024-12-05 12:14:09.695857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.736 qpair failed and we were unable to recover it. 00:30:35.736 [2024-12-05 12:14:09.696101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.736 [2024-12-05 12:14:09.696131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.736 qpair failed and we were unable to recover it. 00:30:35.736 [2024-12-05 12:14:09.696398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.736 [2024-12-05 12:14:09.696431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.736 qpair failed and we were unable to recover it. 00:30:35.736 [2024-12-05 12:14:09.696681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.736 [2024-12-05 12:14:09.696713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.736 qpair failed and we were unable to recover it. 00:30:35.736 [2024-12-05 12:14:09.696972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.736 [2024-12-05 12:14:09.697003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.736 qpair failed and we were unable to recover it. 
00:30:35.736 [2024-12-05 12:14:09.697243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.736 [2024-12-05 12:14:09.697275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.736 qpair failed and we were unable to recover it. 00:30:35.736 [2024-12-05 12:14:09.697558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.736 [2024-12-05 12:14:09.697591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.736 qpair failed and we were unable to recover it. 00:30:35.736 [2024-12-05 12:14:09.697852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.736 [2024-12-05 12:14:09.697884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.736 qpair failed and we were unable to recover it. 00:30:35.736 [2024-12-05 12:14:09.698057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.736 [2024-12-05 12:14:09.698088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.736 qpair failed and we were unable to recover it. 00:30:35.736 [2024-12-05 12:14:09.698271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.736 [2024-12-05 12:14:09.698302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.736 qpair failed and we were unable to recover it. 
00:30:35.736 [2024-12-05 12:14:09.698423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.736 [2024-12-05 12:14:09.698456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.736 qpair failed and we were unable to recover it. 00:30:35.736 [2024-12-05 12:14:09.698710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.736 [2024-12-05 12:14:09.698753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.736 qpair failed and we were unable to recover it. 00:30:35.736 [2024-12-05 12:14:09.698950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.736 [2024-12-05 12:14:09.698980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.736 qpair failed and we were unable to recover it. 00:30:35.736 [2024-12-05 12:14:09.699238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.736 [2024-12-05 12:14:09.699269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.736 qpair failed and we were unable to recover it. 00:30:35.736 [2024-12-05 12:14:09.699442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.736 [2024-12-05 12:14:09.699475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.736 qpair failed and we were unable to recover it. 
00:30:35.736 [2024-12-05 12:14:09.699761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.736 [2024-12-05 12:14:09.699792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.736 qpair failed and we were unable to recover it. 00:30:35.736 [2024-12-05 12:14:09.700055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.736 [2024-12-05 12:14:09.700086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.736 qpair failed and we were unable to recover it. 00:30:35.736 [2024-12-05 12:14:09.700303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.736 [2024-12-05 12:14:09.700335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.736 qpair failed and we were unable to recover it. 00:30:35.736 [2024-12-05 12:14:09.700539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.736 [2024-12-05 12:14:09.700572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.736 qpair failed and we were unable to recover it. 00:30:35.736 [2024-12-05 12:14:09.700758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.736 [2024-12-05 12:14:09.700789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.736 qpair failed and we were unable to recover it. 
00:30:35.736 [2024-12-05 12:14:09.700962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.736 [2024-12-05 12:14:09.700994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.736 qpair failed and we were unable to recover it. 00:30:35.736 [2024-12-05 12:14:09.701264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.736 [2024-12-05 12:14:09.701295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.736 qpair failed and we were unable to recover it. 00:30:35.736 [2024-12-05 12:14:09.701535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.737 [2024-12-05 12:14:09.701567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.737 qpair failed and we were unable to recover it. 00:30:35.737 [2024-12-05 12:14:09.701808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.737 [2024-12-05 12:14:09.701840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.737 qpair failed and we were unable to recover it. 00:30:35.737 [2024-12-05 12:14:09.701970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.737 [2024-12-05 12:14:09.702001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.737 qpair failed and we were unable to recover it. 
00:30:35.740 [2024-12-05 12:14:09.730755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.740 [2024-12-05 12:14:09.730787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.740 qpair failed and we were unable to recover it. 00:30:35.740 [2024-12-05 12:14:09.731057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.740 [2024-12-05 12:14:09.731089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.740 qpair failed and we were unable to recover it. 00:30:35.740 [2024-12-05 12:14:09.731340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.740 [2024-12-05 12:14:09.731388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.740 qpair failed and we were unable to recover it. 00:30:35.740 [2024-12-05 12:14:09.731593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.740 [2024-12-05 12:14:09.731624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.740 qpair failed and we were unable to recover it. 00:30:35.740 [2024-12-05 12:14:09.731886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.740 [2024-12-05 12:14:09.731917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.740 qpair failed and we were unable to recover it. 
00:30:35.740 [2024-12-05 12:14:09.732157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.740 [2024-12-05 12:14:09.732190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.740 qpair failed and we were unable to recover it. 00:30:35.740 [2024-12-05 12:14:09.732385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.740 [2024-12-05 12:14:09.732419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.740 qpair failed and we were unable to recover it. 00:30:35.740 [2024-12-05 12:14:09.732684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.740 [2024-12-05 12:14:09.732718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.740 qpair failed and we were unable to recover it. 00:30:35.740 [2024-12-05 12:14:09.732855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.740 [2024-12-05 12:14:09.732887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.740 qpair failed and we were unable to recover it. 00:30:35.740 [2024-12-05 12:14:09.733125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.740 [2024-12-05 12:14:09.733156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.740 qpair failed and we were unable to recover it. 
00:30:35.740 [2024-12-05 12:14:09.733349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.740 [2024-12-05 12:14:09.733392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.740 qpair failed and we were unable to recover it. 00:30:35.740 [2024-12-05 12:14:09.733663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.740 [2024-12-05 12:14:09.733695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.740 qpair failed and we were unable to recover it. 00:30:35.740 [2024-12-05 12:14:09.733966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.740 [2024-12-05 12:14:09.733997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.740 qpair failed and we were unable to recover it. 00:30:35.740 [2024-12-05 12:14:09.734212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.740 [2024-12-05 12:14:09.734243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.740 qpair failed and we were unable to recover it. 00:30:35.740 [2024-12-05 12:14:09.734482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.740 [2024-12-05 12:14:09.734514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.740 qpair failed and we were unable to recover it. 
00:30:35.740 [2024-12-05 12:14:09.734687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.740 [2024-12-05 12:14:09.734719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.740 qpair failed and we were unable to recover it. 00:30:35.740 [2024-12-05 12:14:09.734972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.740 [2024-12-05 12:14:09.735003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.740 qpair failed and we were unable to recover it. 00:30:35.740 [2024-12-05 12:14:09.735130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.740 [2024-12-05 12:14:09.735163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.740 qpair failed and we were unable to recover it. 00:30:35.740 [2024-12-05 12:14:09.735339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.740 [2024-12-05 12:14:09.735380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.740 qpair failed and we were unable to recover it. 00:30:35.740 [2024-12-05 12:14:09.735574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.740 [2024-12-05 12:14:09.735606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.740 qpair failed and we were unable to recover it. 
00:30:35.740 [2024-12-05 12:14:09.735846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.740 [2024-12-05 12:14:09.735879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.740 qpair failed and we were unable to recover it. 00:30:35.740 [2024-12-05 12:14:09.736074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.740 [2024-12-05 12:14:09.736106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.740 qpair failed and we were unable to recover it. 00:30:35.740 [2024-12-05 12:14:09.736296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.740 [2024-12-05 12:14:09.736327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.740 qpair failed and we were unable to recover it. 00:30:35.740 [2024-12-05 12:14:09.736578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.740 [2024-12-05 12:14:09.736612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.740 qpair failed and we were unable to recover it. 00:30:35.740 [2024-12-05 12:14:09.736924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.740 [2024-12-05 12:14:09.736955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.740 qpair failed and we were unable to recover it. 
00:30:35.740 [2024-12-05 12:14:09.737203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.740 [2024-12-05 12:14:09.737234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.740 qpair failed and we were unable to recover it. 00:30:35.740 [2024-12-05 12:14:09.737487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.740 [2024-12-05 12:14:09.737519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.740 qpair failed and we were unable to recover it. 00:30:35.741 [2024-12-05 12:14:09.737799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.741 [2024-12-05 12:14:09.737831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.741 qpair failed and we were unable to recover it. 00:30:35.741 [2024-12-05 12:14:09.738111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.741 [2024-12-05 12:14:09.738144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.741 qpair failed and we were unable to recover it. 00:30:35.741 [2024-12-05 12:14:09.738466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.741 [2024-12-05 12:14:09.738500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.741 qpair failed and we were unable to recover it. 
00:30:35.741 [2024-12-05 12:14:09.738629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.741 [2024-12-05 12:14:09.738662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.741 qpair failed and we were unable to recover it. 00:30:35.741 [2024-12-05 12:14:09.738929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.741 [2024-12-05 12:14:09.738961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.741 qpair failed and we were unable to recover it. 00:30:35.741 [2024-12-05 12:14:09.739202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.741 [2024-12-05 12:14:09.739234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.741 qpair failed and we were unable to recover it. 00:30:35.741 [2024-12-05 12:14:09.739476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.741 [2024-12-05 12:14:09.739510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.741 qpair failed and we were unable to recover it. 00:30:35.741 [2024-12-05 12:14:09.739767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.741 [2024-12-05 12:14:09.739798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.741 qpair failed and we were unable to recover it. 
00:30:35.741 [2024-12-05 12:14:09.739981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.741 [2024-12-05 12:14:09.740012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.741 qpair failed and we were unable to recover it. 00:30:35.741 [2024-12-05 12:14:09.740141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.741 [2024-12-05 12:14:09.740177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.741 qpair failed and we were unable to recover it. 00:30:35.741 [2024-12-05 12:14:09.740421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.741 [2024-12-05 12:14:09.740454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.741 qpair failed and we were unable to recover it. 00:30:35.741 [2024-12-05 12:14:09.740745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.741 [2024-12-05 12:14:09.740778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.741 qpair failed and we were unable to recover it. 00:30:35.741 [2024-12-05 12:14:09.740976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.741 [2024-12-05 12:14:09.741008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.741 qpair failed and we were unable to recover it. 
00:30:35.741 [2024-12-05 12:14:09.741190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.741 [2024-12-05 12:14:09.741222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.741 qpair failed and we were unable to recover it. 00:30:35.741 [2024-12-05 12:14:09.741462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.741 [2024-12-05 12:14:09.741494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.741 qpair failed and we were unable to recover it. 00:30:35.741 [2024-12-05 12:14:09.741683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.741 [2024-12-05 12:14:09.741714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.741 qpair failed and we were unable to recover it. 00:30:35.741 [2024-12-05 12:14:09.741840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.741 [2024-12-05 12:14:09.741873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.741 qpair failed and we were unable to recover it. 00:30:35.741 [2024-12-05 12:14:09.742136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.741 [2024-12-05 12:14:09.742167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.741 qpair failed and we were unable to recover it. 
00:30:35.741 [2024-12-05 12:14:09.742303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.741 [2024-12-05 12:14:09.742333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.741 qpair failed and we were unable to recover it. 00:30:35.741 [2024-12-05 12:14:09.742540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.741 [2024-12-05 12:14:09.742572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.741 qpair failed and we were unable to recover it. 00:30:35.741 [2024-12-05 12:14:09.742814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.741 [2024-12-05 12:14:09.742845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.741 qpair failed and we were unable to recover it. 00:30:35.741 [2024-12-05 12:14:09.742949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.741 [2024-12-05 12:14:09.742980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.741 qpair failed and we were unable to recover it. 00:30:35.741 [2024-12-05 12:14:09.743196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.741 [2024-12-05 12:14:09.743231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.741 qpair failed and we were unable to recover it. 
00:30:35.741 [2024-12-05 12:14:09.743412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.741 [2024-12-05 12:14:09.743445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.741 qpair failed and we were unable to recover it. 00:30:35.741 [2024-12-05 12:14:09.743578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.741 [2024-12-05 12:14:09.743608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.741 qpair failed and we were unable to recover it. 00:30:35.741 [2024-12-05 12:14:09.743848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.741 [2024-12-05 12:14:09.743879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.741 qpair failed and we were unable to recover it. 00:30:35.741 [2024-12-05 12:14:09.744127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.741 [2024-12-05 12:14:09.744159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.741 qpair failed and we were unable to recover it. 00:30:35.741 [2024-12-05 12:14:09.744403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.741 [2024-12-05 12:14:09.744437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.741 qpair failed and we were unable to recover it. 
00:30:35.741 [2024-12-05 12:14:09.744613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.741 [2024-12-05 12:14:09.744646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.741 qpair failed and we were unable to recover it. 00:30:35.741 [2024-12-05 12:14:09.744933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.741 [2024-12-05 12:14:09.744967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.741 qpair failed and we were unable to recover it. 00:30:35.741 [2024-12-05 12:14:09.745102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.741 [2024-12-05 12:14:09.745134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.741 qpair failed and we were unable to recover it. 00:30:35.741 [2024-12-05 12:14:09.745336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.741 [2024-12-05 12:14:09.745376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.741 qpair failed and we were unable to recover it. 00:30:35.741 [2024-12-05 12:14:09.745667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.741 [2024-12-05 12:14:09.745699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.741 qpair failed and we were unable to recover it. 
00:30:35.741 [2024-12-05 12:14:09.745961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.742 [2024-12-05 12:14:09.745993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.742 qpair failed and we were unable to recover it. 00:30:35.742 [2024-12-05 12:14:09.746177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.742 [2024-12-05 12:14:09.746208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.742 qpair failed and we were unable to recover it. 00:30:35.742 [2024-12-05 12:14:09.746423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.742 [2024-12-05 12:14:09.746457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.742 qpair failed and we were unable to recover it. 00:30:35.742 [2024-12-05 12:14:09.746672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.742 [2024-12-05 12:14:09.746704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.742 qpair failed and we were unable to recover it. 00:30:35.742 [2024-12-05 12:14:09.746895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.742 [2024-12-05 12:14:09.746927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.742 qpair failed and we were unable to recover it. 
00:30:35.742 [2024-12-05 12:14:09.747189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.742 [2024-12-05 12:14:09.747221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.742 qpair failed and we were unable to recover it. 00:30:35.742 [2024-12-05 12:14:09.747423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.742 [2024-12-05 12:14:09.747456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.742 qpair failed and we were unable to recover it. 00:30:35.742 [2024-12-05 12:14:09.747729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.742 [2024-12-05 12:14:09.747761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.742 qpair failed and we were unable to recover it. 00:30:35.742 [2024-12-05 12:14:09.747938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.742 [2024-12-05 12:14:09.747970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.742 qpair failed and we were unable to recover it. 00:30:35.742 [2024-12-05 12:14:09.748239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.742 [2024-12-05 12:14:09.748272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.742 qpair failed and we were unable to recover it. 
00:30:35.742 [2024-12-05 12:14:09.748472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.742 [2024-12-05 12:14:09.748507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.742 qpair failed and we were unable to recover it. 00:30:35.742 [2024-12-05 12:14:09.748768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.742 [2024-12-05 12:14:09.748800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.742 qpair failed and we were unable to recover it. 00:30:35.742 [2024-12-05 12:14:09.748931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.742 [2024-12-05 12:14:09.748961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.742 qpair failed and we were unable to recover it. 00:30:35.742 [2024-12-05 12:14:09.749229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.742 [2024-12-05 12:14:09.749260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.742 qpair failed and we were unable to recover it. 00:30:35.742 [2024-12-05 12:14:09.749444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.742 [2024-12-05 12:14:09.749476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.742 qpair failed and we were unable to recover it. 
00:30:35.742 [2024-12-05 12:14:09.749748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.742 [2024-12-05 12:14:09.749780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.742 qpair failed and we were unable to recover it.
00:30:35.742 [... the three-line sequence above repeats with only the timestamp advancing (same errno = 111, tqpair=0x7fc424000b90, addr=10.0.0.2, port=4420) through 12:14:09.780 ...]
00:30:35.745 [2024-12-05 12:14:09.780468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.745 [2024-12-05 12:14:09.780501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:35.745 qpair failed and we were unable to recover it.
00:30:35.745 [2024-12-05 12:14:09.780681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.745 [2024-12-05 12:14:09.780711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.745 qpair failed and we were unable to recover it. 00:30:35.745 [2024-12-05 12:14:09.780845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.745 [2024-12-05 12:14:09.780876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.745 qpair failed and we were unable to recover it. 00:30:35.745 [2024-12-05 12:14:09.781075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.745 [2024-12-05 12:14:09.781107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.745 qpair failed and we were unable to recover it. 00:30:35.745 [2024-12-05 12:14:09.781381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.745 [2024-12-05 12:14:09.781413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.745 qpair failed and we were unable to recover it. 00:30:35.745 [2024-12-05 12:14:09.781624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.745 [2024-12-05 12:14:09.781655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.745 qpair failed and we were unable to recover it. 
00:30:35.745 [2024-12-05 12:14:09.781853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.745 [2024-12-05 12:14:09.781884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.745 qpair failed and we were unable to recover it. 00:30:35.745 [2024-12-05 12:14:09.782121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.745 [2024-12-05 12:14:09.782160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.745 qpair failed and we were unable to recover it. 00:30:35.745 [2024-12-05 12:14:09.782282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.745 [2024-12-05 12:14:09.782314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.745 qpair failed and we were unable to recover it. 00:30:35.745 [2024-12-05 12:14:09.782598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.745 [2024-12-05 12:14:09.782632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.745 qpair failed and we were unable to recover it. 00:30:35.745 [2024-12-05 12:14:09.782884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.746 [2024-12-05 12:14:09.782917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.746 qpair failed and we were unable to recover it. 
00:30:35.746 [2024-12-05 12:14:09.783168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.746 [2024-12-05 12:14:09.783200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.746 qpair failed and we were unable to recover it. 00:30:35.746 [2024-12-05 12:14:09.783333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.746 [2024-12-05 12:14:09.783363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.746 qpair failed and we were unable to recover it. 00:30:35.746 [2024-12-05 12:14:09.783644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.746 [2024-12-05 12:14:09.783675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.746 qpair failed and we were unable to recover it. 00:30:35.746 [2024-12-05 12:14:09.783818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.746 [2024-12-05 12:14:09.783850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.746 qpair failed and we were unable to recover it. 00:30:35.746 [2024-12-05 12:14:09.784033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.746 [2024-12-05 12:14:09.784064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.746 qpair failed and we were unable to recover it. 
00:30:35.746 [2024-12-05 12:14:09.784272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.746 [2024-12-05 12:14:09.784304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.746 qpair failed and we were unable to recover it. 00:30:35.746 [2024-12-05 12:14:09.784507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.746 [2024-12-05 12:14:09.784540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.746 qpair failed and we were unable to recover it. 00:30:35.746 [2024-12-05 12:14:09.784810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.746 [2024-12-05 12:14:09.784841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.746 qpair failed and we were unable to recover it. 00:30:35.746 [2024-12-05 12:14:09.785129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.746 [2024-12-05 12:14:09.785166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.746 qpair failed and we were unable to recover it. 00:30:35.746 [2024-12-05 12:14:09.785411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.746 [2024-12-05 12:14:09.785444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.746 qpair failed and we were unable to recover it. 
00:30:35.746 [2024-12-05 12:14:09.785746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.746 [2024-12-05 12:14:09.785778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.746 qpair failed and we were unable to recover it. 00:30:35.746 [2024-12-05 12:14:09.786006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.746 [2024-12-05 12:14:09.786039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.746 qpair failed and we were unable to recover it. 00:30:35.746 [2024-12-05 12:14:09.786180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.746 [2024-12-05 12:14:09.786210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.746 qpair failed and we were unable to recover it. 00:30:35.746 [2024-12-05 12:14:09.786451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.746 [2024-12-05 12:14:09.786484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.746 qpair failed and we were unable to recover it. 00:30:35.746 [2024-12-05 12:14:09.786678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.746 [2024-12-05 12:14:09.786710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.746 qpair failed and we were unable to recover it. 
00:30:35.746 [2024-12-05 12:14:09.786889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.746 [2024-12-05 12:14:09.786921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.746 qpair failed and we were unable to recover it. 00:30:35.746 [2024-12-05 12:14:09.787163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.746 [2024-12-05 12:14:09.787193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.746 qpair failed and we were unable to recover it. 00:30:35.746 [2024-12-05 12:14:09.787408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.746 [2024-12-05 12:14:09.787442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.746 qpair failed and we were unable to recover it. 00:30:35.746 [2024-12-05 12:14:09.787661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.746 [2024-12-05 12:14:09.787693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.746 qpair failed and we were unable to recover it. 00:30:35.746 [2024-12-05 12:14:09.787893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.746 [2024-12-05 12:14:09.787923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.746 qpair failed and we were unable to recover it. 
00:30:35.746 [2024-12-05 12:14:09.788137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.746 [2024-12-05 12:14:09.788170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.746 qpair failed and we were unable to recover it. 00:30:35.746 [2024-12-05 12:14:09.788349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.746 [2024-12-05 12:14:09.788402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.746 qpair failed and we were unable to recover it. 00:30:35.746 [2024-12-05 12:14:09.788649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.746 [2024-12-05 12:14:09.788681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:35.746 qpair failed and we were unable to recover it. 00:30:35.746 [2024-12-05 12:14:09.789025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.746 [2024-12-05 12:14:09.789102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.746 qpair failed and we were unable to recover it. 00:30:35.746 [2024-12-05 12:14:09.789418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.746 [2024-12-05 12:14:09.789456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.746 qpair failed and we were unable to recover it. 
00:30:35.746 [2024-12-05 12:14:09.789657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.746 [2024-12-05 12:14:09.789691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.746 qpair failed and we were unable to recover it. 00:30:35.746 [2024-12-05 12:14:09.789992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.746 [2024-12-05 12:14:09.790024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.746 qpair failed and we were unable to recover it. 00:30:35.746 [2024-12-05 12:14:09.790317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.746 [2024-12-05 12:14:09.790348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.746 qpair failed and we were unable to recover it. 00:30:35.746 [2024-12-05 12:14:09.790619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.746 [2024-12-05 12:14:09.790652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.746 qpair failed and we were unable to recover it. 00:30:35.746 [2024-12-05 12:14:09.790899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.746 [2024-12-05 12:14:09.790930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.746 qpair failed and we were unable to recover it. 
00:30:35.746 [2024-12-05 12:14:09.791223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.746 [2024-12-05 12:14:09.791253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.746 qpair failed and we were unable to recover it. 00:30:35.746 [2024-12-05 12:14:09.791549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.746 [2024-12-05 12:14:09.791581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.746 qpair failed and we were unable to recover it. 00:30:35.746 [2024-12-05 12:14:09.791853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.746 [2024-12-05 12:14:09.791884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.746 qpair failed and we were unable to recover it. 00:30:35.746 [2024-12-05 12:14:09.792152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.746 [2024-12-05 12:14:09.792184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.746 qpair failed and we were unable to recover it. 00:30:35.746 [2024-12-05 12:14:09.792428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.747 [2024-12-05 12:14:09.792461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.747 qpair failed and we were unable to recover it. 
00:30:35.747 [2024-12-05 12:14:09.792602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.747 [2024-12-05 12:14:09.792633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.747 qpair failed and we were unable to recover it. 00:30:35.747 [2024-12-05 12:14:09.792833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.747 [2024-12-05 12:14:09.792865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.747 qpair failed and we were unable to recover it. 00:30:35.747 [2024-12-05 12:14:09.793144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.747 [2024-12-05 12:14:09.793177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.747 qpair failed and we were unable to recover it. 00:30:35.747 [2024-12-05 12:14:09.793426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.747 [2024-12-05 12:14:09.793459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.747 qpair failed and we were unable to recover it. 00:30:35.747 [2024-12-05 12:14:09.793667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.747 [2024-12-05 12:14:09.793699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.747 qpair failed and we were unable to recover it. 
00:30:35.747 [2024-12-05 12:14:09.793877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.747 [2024-12-05 12:14:09.793911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.747 qpair failed and we were unable to recover it. 00:30:35.747 [2024-12-05 12:14:09.794159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.747 [2024-12-05 12:14:09.794191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.747 qpair failed and we were unable to recover it. 00:30:35.747 [2024-12-05 12:14:09.794486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.747 [2024-12-05 12:14:09.794518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.747 qpair failed and we were unable to recover it. 00:30:35.747 [2024-12-05 12:14:09.794807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.747 [2024-12-05 12:14:09.794839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.747 qpair failed and we were unable to recover it. 00:30:35.747 [2024-12-05 12:14:09.795081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.747 [2024-12-05 12:14:09.795112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.747 qpair failed and we were unable to recover it. 
00:30:35.747 [2024-12-05 12:14:09.795427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.747 [2024-12-05 12:14:09.795459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.747 qpair failed and we were unable to recover it. 00:30:35.747 [2024-12-05 12:14:09.795655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.747 [2024-12-05 12:14:09.795687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.747 qpair failed and we were unable to recover it. 00:30:35.747 [2024-12-05 12:14:09.795977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.747 [2024-12-05 12:14:09.796008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.747 qpair failed and we were unable to recover it. 00:30:35.747 [2024-12-05 12:14:09.796139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.747 [2024-12-05 12:14:09.796170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.747 qpair failed and we were unable to recover it. 00:30:35.747 [2024-12-05 12:14:09.796463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.747 [2024-12-05 12:14:09.796495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.747 qpair failed and we were unable to recover it. 
00:30:35.747 [2024-12-05 12:14:09.796788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.747 [2024-12-05 12:14:09.796826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.747 qpair failed and we were unable to recover it. 00:30:35.747 [2024-12-05 12:14:09.796969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.747 [2024-12-05 12:14:09.796998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.747 qpair failed and we were unable to recover it. 00:30:35.747 [2024-12-05 12:14:09.797214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.747 [2024-12-05 12:14:09.797246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.747 qpair failed and we were unable to recover it. 00:30:35.747 [2024-12-05 12:14:09.797438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.747 [2024-12-05 12:14:09.797470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.747 qpair failed and we were unable to recover it. 00:30:35.747 [2024-12-05 12:14:09.797730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.747 [2024-12-05 12:14:09.797762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.747 qpair failed and we were unable to recover it. 
00:30:35.747 [2024-12-05 12:14:09.798009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.747 [2024-12-05 12:14:09.798041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.747 qpair failed and we were unable to recover it. 00:30:35.747 [2024-12-05 12:14:09.798234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.747 [2024-12-05 12:14:09.798265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.747 qpair failed and we were unable to recover it. 00:30:35.747 [2024-12-05 12:14:09.798463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.747 [2024-12-05 12:14:09.798496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.747 qpair failed and we were unable to recover it. 00:30:35.747 [2024-12-05 12:14:09.798646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.747 [2024-12-05 12:14:09.798678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.747 qpair failed and we were unable to recover it. 00:30:35.747 [2024-12-05 12:14:09.798891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.747 [2024-12-05 12:14:09.798922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.747 qpair failed and we were unable to recover it. 
00:30:35.747 [2024-12-05 12:14:09.799099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.747 [2024-12-05 12:14:09.799131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.747 qpair failed and we were unable to recover it. 00:30:35.747 [2024-12-05 12:14:09.799424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.747 [2024-12-05 12:14:09.799455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.747 qpair failed and we were unable to recover it. 00:30:35.747 [2024-12-05 12:14:09.799728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.747 [2024-12-05 12:14:09.799760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.747 qpair failed and we were unable to recover it. 00:30:35.747 [2024-12-05 12:14:09.799979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.747 [2024-12-05 12:14:09.800010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.747 qpair failed and we were unable to recover it. 00:30:35.747 [2024-12-05 12:14:09.800257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.747 [2024-12-05 12:14:09.800288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.747 qpair failed and we were unable to recover it. 
00:30:35.747 [2024-12-05 12:14:09.800494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.747 [2024-12-05 12:14:09.800527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.747 qpair failed and we were unable to recover it. 00:30:35.747 [2024-12-05 12:14:09.800806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.747 [2024-12-05 12:14:09.800837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.747 qpair failed and we were unable to recover it. 00:30:35.747 [2024-12-05 12:14:09.801123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.747 [2024-12-05 12:14:09.801154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.747 qpair failed and we were unable to recover it. 00:30:35.747 [2024-12-05 12:14:09.801464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.747 [2024-12-05 12:14:09.801497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.747 qpair failed and we were unable to recover it. 00:30:35.747 [2024-12-05 12:14:09.801777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.747 [2024-12-05 12:14:09.801810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.747 qpair failed and we were unable to recover it. 
00:30:35.751 [2024-12-05 12:14:09.831780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.751 [2024-12-05 12:14:09.831811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.751 qpair failed and we were unable to recover it. 00:30:35.751 [2024-12-05 12:14:09.831995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.751 [2024-12-05 12:14:09.832027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.751 qpair failed and we were unable to recover it. 00:30:35.751 [2024-12-05 12:14:09.832285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.751 [2024-12-05 12:14:09.832316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.751 qpair failed and we were unable to recover it. 00:30:35.751 [2024-12-05 12:14:09.832478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.751 [2024-12-05 12:14:09.832511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.751 qpair failed and we were unable to recover it. 00:30:35.751 [2024-12-05 12:14:09.832707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.751 [2024-12-05 12:14:09.832740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.751 qpair failed and we were unable to recover it. 
00:30:35.751 [2024-12-05 12:14:09.833022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.751 [2024-12-05 12:14:09.833053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.751 qpair failed and we were unable to recover it. 00:30:35.751 [2024-12-05 12:14:09.833337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.751 [2024-12-05 12:14:09.833383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.751 qpair failed and we were unable to recover it. 00:30:35.751 [2024-12-05 12:14:09.833589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.751 [2024-12-05 12:14:09.833622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.751 qpair failed and we were unable to recover it. 00:30:35.751 [2024-12-05 12:14:09.833781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.751 [2024-12-05 12:14:09.833813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.751 qpair failed and we were unable to recover it. 00:30:35.751 [2024-12-05 12:14:09.834005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.751 [2024-12-05 12:14:09.834037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.751 qpair failed and we were unable to recover it. 
00:30:35.751 [2024-12-05 12:14:09.834241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.751 [2024-12-05 12:14:09.834273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.751 qpair failed and we were unable to recover it. 00:30:35.751 [2024-12-05 12:14:09.834543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.751 [2024-12-05 12:14:09.834577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.751 qpair failed and we were unable to recover it. 00:30:35.751 [2024-12-05 12:14:09.834880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.751 [2024-12-05 12:14:09.834912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.751 qpair failed and we were unable to recover it. 00:30:35.751 [2024-12-05 12:14:09.835176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.751 [2024-12-05 12:14:09.835208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.751 qpair failed and we were unable to recover it. 00:30:35.751 [2024-12-05 12:14:09.835495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.751 [2024-12-05 12:14:09.835528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.751 qpair failed and we were unable to recover it. 
00:30:35.751 [2024-12-05 12:14:09.835811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.751 [2024-12-05 12:14:09.835842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.751 qpair failed and we were unable to recover it. 00:30:35.751 [2024-12-05 12:14:09.836126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.751 [2024-12-05 12:14:09.836158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.751 qpair failed and we were unable to recover it. 00:30:35.751 [2024-12-05 12:14:09.836344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.751 [2024-12-05 12:14:09.836383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.751 qpair failed and we were unable to recover it. 00:30:35.751 [2024-12-05 12:14:09.836658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.751 [2024-12-05 12:14:09.836690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.751 qpair failed and we were unable to recover it. 00:30:35.751 [2024-12-05 12:14:09.836972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.751 [2024-12-05 12:14:09.837004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.751 qpair failed and we were unable to recover it. 
00:30:35.751 [2024-12-05 12:14:09.837235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.751 [2024-12-05 12:14:09.837267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.751 qpair failed and we were unable to recover it. 00:30:35.751 [2024-12-05 12:14:09.837524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.751 [2024-12-05 12:14:09.837558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.751 qpair failed and we were unable to recover it. 00:30:35.751 [2024-12-05 12:14:09.837810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.751 [2024-12-05 12:14:09.837843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.751 qpair failed and we were unable to recover it. 00:30:35.751 [2024-12-05 12:14:09.838037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.751 [2024-12-05 12:14:09.838069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.751 qpair failed and we were unable to recover it. 00:30:35.751 [2024-12-05 12:14:09.838262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.751 [2024-12-05 12:14:09.838294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.751 qpair failed and we were unable to recover it. 
00:30:35.751 [2024-12-05 12:14:09.838560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.751 [2024-12-05 12:14:09.838593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.751 qpair failed and we were unable to recover it. 00:30:35.751 [2024-12-05 12:14:09.838840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.751 [2024-12-05 12:14:09.838872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.751 qpair failed and we were unable to recover it. 00:30:35.751 [2024-12-05 12:14:09.839074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.751 [2024-12-05 12:14:09.839106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.751 qpair failed and we were unable to recover it. 00:30:35.751 [2024-12-05 12:14:09.839250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.751 [2024-12-05 12:14:09.839283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.751 qpair failed and we were unable to recover it. 00:30:35.751 [2024-12-05 12:14:09.839531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.751 [2024-12-05 12:14:09.839565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.751 qpair failed and we were unable to recover it. 
00:30:35.751 [2024-12-05 12:14:09.839870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.751 [2024-12-05 12:14:09.839902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.751 qpair failed and we were unable to recover it. 00:30:35.751 [2024-12-05 12:14:09.840165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.751 [2024-12-05 12:14:09.840198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.751 qpair failed and we were unable to recover it. 00:30:35.751 [2024-12-05 12:14:09.840475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.752 [2024-12-05 12:14:09.840509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.752 qpair failed and we were unable to recover it. 00:30:35.752 [2024-12-05 12:14:09.840794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.752 [2024-12-05 12:14:09.840826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.752 qpair failed and we were unable to recover it. 00:30:35.752 [2024-12-05 12:14:09.841057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.752 [2024-12-05 12:14:09.841089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.752 qpair failed and we were unable to recover it. 
00:30:35.752 [2024-12-05 12:14:09.841390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.752 [2024-12-05 12:14:09.841424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.752 qpair failed and we were unable to recover it. 00:30:35.752 [2024-12-05 12:14:09.841672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.752 [2024-12-05 12:14:09.841705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.752 qpair failed and we were unable to recover it. 00:30:35.752 [2024-12-05 12:14:09.842006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.752 [2024-12-05 12:14:09.842038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.752 qpair failed and we were unable to recover it. 00:30:35.752 [2024-12-05 12:14:09.842314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.752 [2024-12-05 12:14:09.842346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.752 qpair failed and we were unable to recover it. 00:30:35.752 [2024-12-05 12:14:09.842559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.752 [2024-12-05 12:14:09.842593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.752 qpair failed and we were unable to recover it. 
00:30:35.752 [2024-12-05 12:14:09.842821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.752 [2024-12-05 12:14:09.842852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.752 qpair failed and we were unable to recover it. 00:30:35.752 [2024-12-05 12:14:09.843128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.752 [2024-12-05 12:14:09.843159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.752 qpair failed and we were unable to recover it. 00:30:35.752 [2024-12-05 12:14:09.843440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.752 [2024-12-05 12:14:09.843473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.752 qpair failed and we were unable to recover it. 00:30:35.752 [2024-12-05 12:14:09.843742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.752 [2024-12-05 12:14:09.843774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.752 qpair failed and we were unable to recover it. 00:30:35.752 [2024-12-05 12:14:09.843997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.752 [2024-12-05 12:14:09.844030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.752 qpair failed and we were unable to recover it. 
00:30:35.752 [2024-12-05 12:14:09.844160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.752 [2024-12-05 12:14:09.844192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.752 qpair failed and we were unable to recover it. 00:30:35.752 [2024-12-05 12:14:09.844473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.752 [2024-12-05 12:14:09.844507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.752 qpair failed and we were unable to recover it. 00:30:35.752 [2024-12-05 12:14:09.844707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.752 [2024-12-05 12:14:09.844744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.752 qpair failed and we were unable to recover it. 00:30:35.752 [2024-12-05 12:14:09.844946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.752 [2024-12-05 12:14:09.844978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.752 qpair failed and we were unable to recover it. 00:30:35.752 [2024-12-05 12:14:09.845232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.752 [2024-12-05 12:14:09.845264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.752 qpair failed and we were unable to recover it. 
00:30:35.752 [2024-12-05 12:14:09.845547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.752 [2024-12-05 12:14:09.845579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.752 qpair failed and we were unable to recover it. 00:30:35.752 [2024-12-05 12:14:09.845795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.752 [2024-12-05 12:14:09.845827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.752 qpair failed and we were unable to recover it. 00:30:35.752 [2024-12-05 12:14:09.846101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.752 [2024-12-05 12:14:09.846134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.752 qpair failed and we were unable to recover it. 00:30:35.752 [2024-12-05 12:14:09.846411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.752 [2024-12-05 12:14:09.846444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.752 qpair failed and we were unable to recover it. 00:30:35.752 [2024-12-05 12:14:09.846708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.752 [2024-12-05 12:14:09.846740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.752 qpair failed and we were unable to recover it. 
00:30:35.752 [2024-12-05 12:14:09.847038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.752 [2024-12-05 12:14:09.847070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.752 qpair failed and we were unable to recover it. 00:30:35.752 [2024-12-05 12:14:09.847378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.752 [2024-12-05 12:14:09.847412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.752 qpair failed and we were unable to recover it. 00:30:35.752 [2024-12-05 12:14:09.847708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.752 [2024-12-05 12:14:09.847739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.752 qpair failed and we were unable to recover it. 00:30:35.752 [2024-12-05 12:14:09.847943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.752 [2024-12-05 12:14:09.847975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.752 qpair failed and we were unable to recover it. 00:30:35.752 [2024-12-05 12:14:09.848229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.752 [2024-12-05 12:14:09.848260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.752 qpair failed and we were unable to recover it. 
00:30:35.752 [2024-12-05 12:14:09.848409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.752 [2024-12-05 12:14:09.848443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.752 qpair failed and we were unable to recover it. 00:30:35.752 [2024-12-05 12:14:09.848752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.752 [2024-12-05 12:14:09.848785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.752 qpair failed and we were unable to recover it. 00:30:35.752 [2024-12-05 12:14:09.849020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.752 [2024-12-05 12:14:09.849053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.752 qpair failed and we were unable to recover it. 00:30:35.752 [2024-12-05 12:14:09.849327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.752 [2024-12-05 12:14:09.849358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.752 qpair failed and we were unable to recover it. 00:30:35.752 [2024-12-05 12:14:09.849652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.752 [2024-12-05 12:14:09.849685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.752 qpair failed and we were unable to recover it. 
00:30:35.752 [2024-12-05 12:14:09.849882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.752 [2024-12-05 12:14:09.849915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.752 qpair failed and we were unable to recover it. 00:30:35.752 [2024-12-05 12:14:09.850029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.752 [2024-12-05 12:14:09.850061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.752 qpair failed and we were unable to recover it. 00:30:35.752 [2024-12-05 12:14:09.850337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.752 [2024-12-05 12:14:09.850380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.752 qpair failed and we were unable to recover it. 00:30:35.752 [2024-12-05 12:14:09.850586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.752 [2024-12-05 12:14:09.850617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.752 qpair failed and we were unable to recover it. 00:30:35.752 [2024-12-05 12:14:09.850896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.752 [2024-12-05 12:14:09.850927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.752 qpair failed and we were unable to recover it. 
00:30:35.752 [2024-12-05 12:14:09.851205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.752 [2024-12-05 12:14:09.851237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.752 qpair failed and we were unable to recover it. 00:30:35.753 [2024-12-05 12:14:09.851526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.753 [2024-12-05 12:14:09.851559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.753 qpair failed and we were unable to recover it. 00:30:35.753 [2024-12-05 12:14:09.851834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.753 [2024-12-05 12:14:09.851866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.753 qpair failed and we were unable to recover it. 00:30:35.753 [2024-12-05 12:14:09.852085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.753 [2024-12-05 12:14:09.852117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.753 qpair failed and we were unable to recover it. 00:30:35.753 [2024-12-05 12:14:09.852348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.753 [2024-12-05 12:14:09.852401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.753 qpair failed and we were unable to recover it. 
00:30:35.753 [2024-12-05 12:14:09.852706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.753 [2024-12-05 12:14:09.852738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:35.753 qpair failed and we were unable to recover it.
00:30:35.756 (previous three-line error sequence repeated 114 more times between 12:14:09.852958 and 12:14:09.884173; every connect() to 10.0.0.2 port 4420 returned errno = 111 and the qpair could not be recovered)
00:30:35.756 [2024-12-05 12:14:09.884444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.756 [2024-12-05 12:14:09.884477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.756 qpair failed and we were unable to recover it. 00:30:35.756 [2024-12-05 12:14:09.884761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.756 [2024-12-05 12:14:09.884793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.756 qpair failed and we were unable to recover it. 00:30:35.756 [2024-12-05 12:14:09.885074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.756 [2024-12-05 12:14:09.885108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.756 qpair failed and we were unable to recover it. 00:30:35.756 [2024-12-05 12:14:09.885398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.756 [2024-12-05 12:14:09.885433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.756 qpair failed and we were unable to recover it. 00:30:35.756 [2024-12-05 12:14:09.885726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.756 [2024-12-05 12:14:09.885758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.756 qpair failed and we were unable to recover it. 
00:30:35.756 [2024-12-05 12:14:09.886015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.756 [2024-12-05 12:14:09.886047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.756 qpair failed and we were unable to recover it. 00:30:35.756 [2024-12-05 12:14:09.886244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.756 [2024-12-05 12:14:09.886276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.756 qpair failed and we were unable to recover it. 00:30:35.756 [2024-12-05 12:14:09.886575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.756 [2024-12-05 12:14:09.886608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.756 qpair failed and we were unable to recover it. 00:30:35.756 [2024-12-05 12:14:09.886891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.756 [2024-12-05 12:14:09.886923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.756 qpair failed and we were unable to recover it. 00:30:35.756 [2024-12-05 12:14:09.887198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.756 [2024-12-05 12:14:09.887230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.756 qpair failed and we were unable to recover it. 
00:30:35.756 [2024-12-05 12:14:09.887441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.756 [2024-12-05 12:14:09.887474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.756 qpair failed and we were unable to recover it. 00:30:35.756 [2024-12-05 12:14:09.887749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.756 [2024-12-05 12:14:09.887781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.756 qpair failed and we were unable to recover it. 00:30:35.756 [2024-12-05 12:14:09.887962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.756 [2024-12-05 12:14:09.887994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.756 qpair failed and we were unable to recover it. 00:30:35.756 [2024-12-05 12:14:09.888269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.756 [2024-12-05 12:14:09.888300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.756 qpair failed and we were unable to recover it. 00:30:35.756 [2024-12-05 12:14:09.888574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.756 [2024-12-05 12:14:09.888607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.756 qpair failed and we were unable to recover it. 
00:30:35.756 [2024-12-05 12:14:09.888747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.756 [2024-12-05 12:14:09.888779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.756 qpair failed and we were unable to recover it. 00:30:35.756 [2024-12-05 12:14:09.889028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.756 [2024-12-05 12:14:09.889060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.756 qpair failed and we were unable to recover it. 00:30:35.756 [2024-12-05 12:14:09.889335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.756 [2024-12-05 12:14:09.889376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.756 qpair failed and we were unable to recover it. 00:30:35.756 [2024-12-05 12:14:09.889657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.756 [2024-12-05 12:14:09.889689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.756 qpair failed and we were unable to recover it. 00:30:35.756 [2024-12-05 12:14:09.889916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.756 [2024-12-05 12:14:09.889947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.756 qpair failed and we were unable to recover it. 
00:30:35.756 [2024-12-05 12:14:09.890222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.756 [2024-12-05 12:14:09.890254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.756 qpair failed and we were unable to recover it. 00:30:35.756 [2024-12-05 12:14:09.890543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.756 [2024-12-05 12:14:09.890577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.756 qpair failed and we were unable to recover it. 00:30:35.756 [2024-12-05 12:14:09.890856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.756 [2024-12-05 12:14:09.890888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.756 qpair failed and we were unable to recover it. 00:30:35.756 [2024-12-05 12:14:09.891110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.756 [2024-12-05 12:14:09.891142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.757 qpair failed and we were unable to recover it. 00:30:35.757 [2024-12-05 12:14:09.891380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.757 [2024-12-05 12:14:09.891414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.757 qpair failed and we were unable to recover it. 
00:30:35.757 [2024-12-05 12:14:09.891682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.757 [2024-12-05 12:14:09.891713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.757 qpair failed and we were unable to recover it. 00:30:35.757 [2024-12-05 12:14:09.892003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.757 [2024-12-05 12:14:09.892035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.757 qpair failed and we were unable to recover it. 00:30:35.757 [2024-12-05 12:14:09.892315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.757 [2024-12-05 12:14:09.892347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.757 qpair failed and we were unable to recover it. 00:30:35.757 [2024-12-05 12:14:09.892503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.757 [2024-12-05 12:14:09.892535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.757 qpair failed and we were unable to recover it. 00:30:35.757 [2024-12-05 12:14:09.892730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.757 [2024-12-05 12:14:09.892762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.757 qpair failed and we were unable to recover it. 
00:30:35.757 [2024-12-05 12:14:09.892992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.757 [2024-12-05 12:14:09.893024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.757 qpair failed and we were unable to recover it. 00:30:35.757 [2024-12-05 12:14:09.893218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.757 [2024-12-05 12:14:09.893250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.757 qpair failed and we were unable to recover it. 00:30:35.757 [2024-12-05 12:14:09.893558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.757 [2024-12-05 12:14:09.893592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.757 qpair failed and we were unable to recover it. 00:30:35.757 [2024-12-05 12:14:09.893849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.757 [2024-12-05 12:14:09.893882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.757 qpair failed and we were unable to recover it. 00:30:35.757 [2024-12-05 12:14:09.894188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.757 [2024-12-05 12:14:09.894221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.757 qpair failed and we were unable to recover it. 
00:30:35.757 [2024-12-05 12:14:09.894501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.757 [2024-12-05 12:14:09.894535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.757 qpair failed and we were unable to recover it. 00:30:35.757 [2024-12-05 12:14:09.894817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.757 [2024-12-05 12:14:09.894849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.757 qpair failed and we were unable to recover it. 00:30:35.757 [2024-12-05 12:14:09.895057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.757 [2024-12-05 12:14:09.895089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.757 qpair failed and we were unable to recover it. 00:30:35.757 [2024-12-05 12:14:09.895363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.757 [2024-12-05 12:14:09.895405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.757 qpair failed and we were unable to recover it. 00:30:35.757 [2024-12-05 12:14:09.895706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.757 [2024-12-05 12:14:09.895738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.757 qpair failed and we were unable to recover it. 
00:30:35.757 [2024-12-05 12:14:09.895943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.757 [2024-12-05 12:14:09.895976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.757 qpair failed and we were unable to recover it. 00:30:35.757 [2024-12-05 12:14:09.896227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.757 [2024-12-05 12:14:09.896259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.757 qpair failed and we were unable to recover it. 00:30:35.757 [2024-12-05 12:14:09.896460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.757 [2024-12-05 12:14:09.896493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.757 qpair failed and we were unable to recover it. 00:30:35.757 [2024-12-05 12:14:09.896771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.757 [2024-12-05 12:14:09.896804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.757 qpair failed and we were unable to recover it. 00:30:35.757 [2024-12-05 12:14:09.897088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.757 [2024-12-05 12:14:09.897119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.757 qpair failed and we were unable to recover it. 
00:30:35.757 [2024-12-05 12:14:09.897403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.757 [2024-12-05 12:14:09.897436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.757 qpair failed and we were unable to recover it. 00:30:35.757 [2024-12-05 12:14:09.897692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.757 [2024-12-05 12:14:09.897724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.757 qpair failed and we were unable to recover it. 00:30:35.757 [2024-12-05 12:14:09.897980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.757 [2024-12-05 12:14:09.898012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:35.757 qpair failed and we were unable to recover it. 00:30:36.037 [2024-12-05 12:14:09.898197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.037 [2024-12-05 12:14:09.898230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.037 qpair failed and we were unable to recover it. 00:30:36.037 [2024-12-05 12:14:09.898429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.037 [2024-12-05 12:14:09.898462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.037 qpair failed and we were unable to recover it. 
00:30:36.037 [2024-12-05 12:14:09.898686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.037 [2024-12-05 12:14:09.898720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.037 qpair failed and we were unable to recover it. 00:30:36.037 [2024-12-05 12:14:09.898860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.037 [2024-12-05 12:14:09.898891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.037 qpair failed and we were unable to recover it. 00:30:36.037 [2024-12-05 12:14:09.899076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.037 [2024-12-05 12:14:09.899108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.037 qpair failed and we were unable to recover it. 00:30:36.037 [2024-12-05 12:14:09.899312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.037 [2024-12-05 12:14:09.899345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.037 qpair failed and we were unable to recover it. 00:30:36.038 [2024-12-05 12:14:09.899496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.038 [2024-12-05 12:14:09.899529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.038 qpair failed and we were unable to recover it. 
00:30:36.038 [2024-12-05 12:14:09.899817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.038 [2024-12-05 12:14:09.899849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.038 qpair failed and we were unable to recover it. 00:30:36.038 [2024-12-05 12:14:09.900074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.038 [2024-12-05 12:14:09.900106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.038 qpair failed and we were unable to recover it. 00:30:36.038 [2024-12-05 12:14:09.900305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.038 [2024-12-05 12:14:09.900338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.038 qpair failed and we were unable to recover it. 00:30:36.038 [2024-12-05 12:14:09.900642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.038 [2024-12-05 12:14:09.900675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.038 qpair failed and we were unable to recover it. 00:30:36.038 [2024-12-05 12:14:09.900951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.038 [2024-12-05 12:14:09.900984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.038 qpair failed and we were unable to recover it. 
00:30:36.038 [2024-12-05 12:14:09.901263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.038 [2024-12-05 12:14:09.901295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.038 qpair failed and we were unable to recover it. 00:30:36.038 [2024-12-05 12:14:09.901539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.038 [2024-12-05 12:14:09.901577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.038 qpair failed and we were unable to recover it. 00:30:36.038 [2024-12-05 12:14:09.901769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.038 [2024-12-05 12:14:09.901801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.038 qpair failed and we were unable to recover it. 00:30:36.038 [2024-12-05 12:14:09.901994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.038 [2024-12-05 12:14:09.902026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.038 qpair failed and we were unable to recover it. 00:30:36.038 [2024-12-05 12:14:09.902223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.038 [2024-12-05 12:14:09.902255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.038 qpair failed and we were unable to recover it. 
00:30:36.038 [2024-12-05 12:14:09.902538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.038 [2024-12-05 12:14:09.902571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.038 qpair failed and we were unable to recover it. 00:30:36.038 [2024-12-05 12:14:09.902752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.038 [2024-12-05 12:14:09.902784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.038 qpair failed and we were unable to recover it. 00:30:36.038 [2024-12-05 12:14:09.902913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.038 [2024-12-05 12:14:09.902945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.038 qpair failed and we were unable to recover it. 00:30:36.038 [2024-12-05 12:14:09.903226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.038 [2024-12-05 12:14:09.903258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.038 qpair failed and we were unable to recover it. 00:30:36.038 [2024-12-05 12:14:09.903520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.038 [2024-12-05 12:14:09.903552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.038 qpair failed and we were unable to recover it. 
00:30:36.038 [2024-12-05 12:14:09.903803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.038 [2024-12-05 12:14:09.903835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.038 qpair failed and we were unable to recover it. 00:30:36.038 [2024-12-05 12:14:09.904088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.038 [2024-12-05 12:14:09.904119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.038 qpair failed and we were unable to recover it. 00:30:36.038 [2024-12-05 12:14:09.904434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.038 [2024-12-05 12:14:09.904468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.038 qpair failed and we were unable to recover it. 00:30:36.038 [2024-12-05 12:14:09.904678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.038 [2024-12-05 12:14:09.904711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.038 qpair failed and we were unable to recover it. 00:30:36.038 [2024-12-05 12:14:09.904926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.038 [2024-12-05 12:14:09.904957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.038 qpair failed and we were unable to recover it. 
00:30:36.038 [2024-12-05 12:14:09.905192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.038 [2024-12-05 12:14:09.905225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.038 qpair failed and we were unable to recover it. 00:30:36.038 [2024-12-05 12:14:09.905334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.038 [2024-12-05 12:14:09.905390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.038 qpair failed and we were unable to recover it. 00:30:36.038 [2024-12-05 12:14:09.905672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.038 [2024-12-05 12:14:09.905705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.038 qpair failed and we were unable to recover it. 00:30:36.038 [2024-12-05 12:14:09.905967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.038 [2024-12-05 12:14:09.905999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.038 qpair failed and we were unable to recover it. 00:30:36.038 [2024-12-05 12:14:09.906188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.038 [2024-12-05 12:14:09.906220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.038 qpair failed and we were unable to recover it. 
00:30:36.038 [2024-12-05 12:14:09.906496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.038 [2024-12-05 12:14:09.906530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.038 qpair failed and we were unable to recover it. 00:30:36.038 [2024-12-05 12:14:09.906724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.038 [2024-12-05 12:14:09.906756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.039 qpair failed and we were unable to recover it. 00:30:36.039 [2024-12-05 12:14:09.906942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.039 [2024-12-05 12:14:09.906973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.039 qpair failed and we were unable to recover it. 00:30:36.039 [2024-12-05 12:14:09.907250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.039 [2024-12-05 12:14:09.907282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.039 qpair failed and we were unable to recover it. 00:30:36.039 [2024-12-05 12:14:09.907396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.039 [2024-12-05 12:14:09.907429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.039 qpair failed and we were unable to recover it. 
00:30:36.039 [2024-12-05 12:14:09.907712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.039 [2024-12-05 12:14:09.907745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.039 qpair failed and we were unable to recover it. 00:30:36.039 [2024-12-05 12:14:09.908021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.039 [2024-12-05 12:14:09.908053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.039 qpair failed and we were unable to recover it. 00:30:36.039 [2024-12-05 12:14:09.908312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.039 [2024-12-05 12:14:09.908344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.039 qpair failed and we were unable to recover it. 00:30:36.039 [2024-12-05 12:14:09.908655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.039 [2024-12-05 12:14:09.908688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.039 qpair failed and we were unable to recover it. 00:30:36.039 [2024-12-05 12:14:09.908827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.039 [2024-12-05 12:14:09.908858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.039 qpair failed and we were unable to recover it. 
00:30:36.039 [2024-12-05 12:14:09.909153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.039 [2024-12-05 12:14:09.909185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.039 qpair failed and we were unable to recover it. 00:30:36.039 [2024-12-05 12:14:09.909457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.039 [2024-12-05 12:14:09.909490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.039 qpair failed and we were unable to recover it. 00:30:36.039 [2024-12-05 12:14:09.909685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.039 [2024-12-05 12:14:09.909719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.039 qpair failed and we were unable to recover it. 00:30:36.039 [2024-12-05 12:14:09.909984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.039 [2024-12-05 12:14:09.910017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.039 qpair failed and we were unable to recover it. 00:30:36.039 [2024-12-05 12:14:09.910223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.039 [2024-12-05 12:14:09.910255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.039 qpair failed and we were unable to recover it. 
00:30:36.039 [2024-12-05 12:14:09.910450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.039 [2024-12-05 12:14:09.910483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.039 qpair failed and we were unable to recover it. 00:30:36.039 [2024-12-05 12:14:09.910705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.039 [2024-12-05 12:14:09.910737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.039 qpair failed and we were unable to recover it. 00:30:36.039 [2024-12-05 12:14:09.910988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.039 [2024-12-05 12:14:09.911020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.039 qpair failed and we were unable to recover it. 00:30:36.039 [2024-12-05 12:14:09.911295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.039 [2024-12-05 12:14:09.911326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.039 qpair failed and we were unable to recover it. 00:30:36.039 [2024-12-05 12:14:09.911616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.039 [2024-12-05 12:14:09.911649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.039 qpair failed and we were unable to recover it. 
00:30:36.039 [2024-12-05 12:14:09.911928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.039 [2024-12-05 12:14:09.911959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.039 qpair failed and we were unable to recover it. 00:30:36.039 [2024-12-05 12:14:09.912220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.039 [2024-12-05 12:14:09.912252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.039 qpair failed and we were unable to recover it. 00:30:36.039 [2024-12-05 12:14:09.912541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.039 [2024-12-05 12:14:09.912580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.039 qpair failed and we were unable to recover it. 00:30:36.039 [2024-12-05 12:14:09.912843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.039 [2024-12-05 12:14:09.912875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.039 qpair failed and we were unable to recover it. 00:30:36.039 [2024-12-05 12:14:09.913140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.039 [2024-12-05 12:14:09.913171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.039 qpair failed and we were unable to recover it. 
00:30:36.039 [2024-12-05 12:14:09.913304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.039 [2024-12-05 12:14:09.913336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.039 qpair failed and we were unable to recover it. 00:30:36.039 [2024-12-05 12:14:09.913558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.039 [2024-12-05 12:14:09.913592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.039 qpair failed and we were unable to recover it. 00:30:36.039 [2024-12-05 12:14:09.913904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.039 [2024-12-05 12:14:09.913936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.039 qpair failed and we were unable to recover it. 00:30:36.039 [2024-12-05 12:14:09.914210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.039 [2024-12-05 12:14:09.914242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.039 qpair failed and we were unable to recover it. 00:30:36.039 [2024-12-05 12:14:09.914535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.039 [2024-12-05 12:14:09.914568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.039 qpair failed and we were unable to recover it. 
00:30:36.039 [2024-12-05 12:14:09.914749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.039 [2024-12-05 12:14:09.914781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.040 qpair failed and we were unable to recover it. 00:30:36.040 [2024-12-05 12:14:09.914976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.040 [2024-12-05 12:14:09.915008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.040 qpair failed and we were unable to recover it. 00:30:36.040 [2024-12-05 12:14:09.915261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.040 [2024-12-05 12:14:09.915293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.040 qpair failed and we were unable to recover it. 00:30:36.040 [2024-12-05 12:14:09.915436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.040 [2024-12-05 12:14:09.915470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.040 qpair failed and we were unable to recover it. 00:30:36.040 [2024-12-05 12:14:09.915675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.040 [2024-12-05 12:14:09.915707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.040 qpair failed and we were unable to recover it. 
00:30:36.040 [2024-12-05 12:14:09.915957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.040 [2024-12-05 12:14:09.915989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.040 qpair failed and we were unable to recover it. 00:30:36.040 [2024-12-05 12:14:09.916187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.040 [2024-12-05 12:14:09.916220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.040 qpair failed and we were unable to recover it. 00:30:36.040 [2024-12-05 12:14:09.916500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.040 [2024-12-05 12:14:09.916533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.040 qpair failed and we were unable to recover it. 00:30:36.040 [2024-12-05 12:14:09.916733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.040 [2024-12-05 12:14:09.916764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.040 qpair failed and we were unable to recover it. 00:30:36.040 [2024-12-05 12:14:09.917057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.040 [2024-12-05 12:14:09.917090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.040 qpair failed and we were unable to recover it. 
00:30:36.040 [2024-12-05 12:14:09.917379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.040 [2024-12-05 12:14:09.917413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.040 qpair failed and we were unable to recover it. 00:30:36.040 [2024-12-05 12:14:09.917605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.040 [2024-12-05 12:14:09.917637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.040 qpair failed and we were unable to recover it. 00:30:36.040 [2024-12-05 12:14:09.917900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.040 [2024-12-05 12:14:09.917931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.040 qpair failed and we were unable to recover it. 00:30:36.040 [2024-12-05 12:14:09.918205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.040 [2024-12-05 12:14:09.918238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.040 qpair failed and we were unable to recover it. 00:30:36.040 [2024-12-05 12:14:09.918417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.040 [2024-12-05 12:14:09.918453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.040 qpair failed and we were unable to recover it. 
00:30:36.040 [2024-12-05 12:14:09.918638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.040 [2024-12-05 12:14:09.918669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.040 qpair failed and we were unable to recover it. 00:30:36.040 [2024-12-05 12:14:09.918889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.040 [2024-12-05 12:14:09.918922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.040 qpair failed and we were unable to recover it. 00:30:36.040 [2024-12-05 12:14:09.919172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.040 [2024-12-05 12:14:09.919204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.040 qpair failed and we were unable to recover it. 00:30:36.040 [2024-12-05 12:14:09.919474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.040 [2024-12-05 12:14:09.919508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.040 qpair failed and we were unable to recover it. 00:30:36.040 [2024-12-05 12:14:09.919784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.040 [2024-12-05 12:14:09.919850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.040 qpair failed and we were unable to recover it. 
00:30:36.040 [2024-12-05 12:14:09.920142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.040 [2024-12-05 12:14:09.920174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.040 qpair failed and we were unable to recover it. 00:30:36.040 [2024-12-05 12:14:09.920396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.040 [2024-12-05 12:14:09.920430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.040 qpair failed and we were unable to recover it. 00:30:36.040 [2024-12-05 12:14:09.920632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.040 [2024-12-05 12:14:09.920663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.040 qpair failed and we were unable to recover it. 00:30:36.040 [2024-12-05 12:14:09.920936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.040 [2024-12-05 12:14:09.920969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.040 qpair failed and we were unable to recover it. 00:30:36.040 [2024-12-05 12:14:09.921161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.040 [2024-12-05 12:14:09.921193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.040 qpair failed and we were unable to recover it. 
00:30:36.040 [2024-12-05 12:14:09.921401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.040 [2024-12-05 12:14:09.921434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.040 qpair failed and we were unable to recover it. 00:30:36.040 [2024-12-05 12:14:09.921715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.040 [2024-12-05 12:14:09.921747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.040 qpair failed and we were unable to recover it. 00:30:36.040 [2024-12-05 12:14:09.921933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.040 [2024-12-05 12:14:09.921965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.040 qpair failed and we were unable to recover it. 00:30:36.040 [2024-12-05 12:14:09.922105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.040 [2024-12-05 12:14:09.922137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.040 qpair failed and we were unable to recover it. 00:30:36.041 [2024-12-05 12:14:09.922348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.041 [2024-12-05 12:14:09.922392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.041 qpair failed and we were unable to recover it. 
00:30:36.041 [2024-12-05 12:14:09.922590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.041 [2024-12-05 12:14:09.922621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.041 qpair failed and we were unable to recover it. 00:30:36.041 [2024-12-05 12:14:09.922896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.041 [2024-12-05 12:14:09.922927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.041 qpair failed and we were unable to recover it. 00:30:36.041 [2024-12-05 12:14:09.923114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.041 [2024-12-05 12:14:09.923145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.041 qpair failed and we were unable to recover it. 00:30:36.041 [2024-12-05 12:14:09.923353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.041 [2024-12-05 12:14:09.923398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.041 qpair failed and we were unable to recover it. 00:30:36.041 [2024-12-05 12:14:09.923584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.041 [2024-12-05 12:14:09.923614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.041 qpair failed and we were unable to recover it. 
00:30:36.041 [2024-12-05 12:14:09.923891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.041 [2024-12-05 12:14:09.923923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.041 qpair failed and we were unable to recover it. 00:30:36.041 [2024-12-05 12:14:09.924102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.041 [2024-12-05 12:14:09.924134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.041 qpair failed and we were unable to recover it. 00:30:36.041 [2024-12-05 12:14:09.924416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.041 [2024-12-05 12:14:09.924449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.041 qpair failed and we were unable to recover it. 00:30:36.041 [2024-12-05 12:14:09.924737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.041 [2024-12-05 12:14:09.924769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.041 qpair failed and we were unable to recover it. 00:30:36.041 [2024-12-05 12:14:09.925063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.041 [2024-12-05 12:14:09.925096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.041 qpair failed and we were unable to recover it. 
00:30:36.041 [2024-12-05 12:14:09.925210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.041 [2024-12-05 12:14:09.925241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.041 qpair failed and we were unable to recover it. 00:30:36.041 [2024-12-05 12:14:09.925458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.041 [2024-12-05 12:14:09.925491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.041 qpair failed and we were unable to recover it. 00:30:36.041 [2024-12-05 12:14:09.925697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.041 [2024-12-05 12:14:09.925730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.041 qpair failed and we were unable to recover it. 00:30:36.041 [2024-12-05 12:14:09.925983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.041 [2024-12-05 12:14:09.926015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.041 qpair failed and we were unable to recover it. 00:30:36.041 [2024-12-05 12:14:09.926217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.041 [2024-12-05 12:14:09.926250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.041 qpair failed and we were unable to recover it. 
00:30:36.041 [2024-12-05 12:14:09.926525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.041 [2024-12-05 12:14:09.926558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.041 qpair failed and we were unable to recover it. 00:30:36.041 [2024-12-05 12:14:09.926808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.041 [2024-12-05 12:14:09.926839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.041 qpair failed and we were unable to recover it. 00:30:36.041 [2024-12-05 12:14:09.927148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.041 [2024-12-05 12:14:09.927180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.041 qpair failed and we were unable to recover it. 00:30:36.041 [2024-12-05 12:14:09.927446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.041 [2024-12-05 12:14:09.927478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.041 qpair failed and we were unable to recover it. 00:30:36.041 [2024-12-05 12:14:09.927778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.041 [2024-12-05 12:14:09.927810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.041 qpair failed and we were unable to recover it. 
00:30:36.041 [2024-12-05 12:14:09.928065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.041 [2024-12-05 12:14:09.928098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.041 qpair failed and we were unable to recover it. 00:30:36.041 [2024-12-05 12:14:09.928242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.041 [2024-12-05 12:14:09.928274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.041 qpair failed and we were unable to recover it. 00:30:36.041 [2024-12-05 12:14:09.928522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.041 [2024-12-05 12:14:09.928556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.041 qpair failed and we were unable to recover it. 00:30:36.041 [2024-12-05 12:14:09.928759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.041 [2024-12-05 12:14:09.928791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.041 qpair failed and we were unable to recover it. 00:30:36.041 [2024-12-05 12:14:09.928919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.041 [2024-12-05 12:14:09.928951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.041 qpair failed and we were unable to recover it. 
00:30:36.041 [2024-12-05 12:14:09.929229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.041 [2024-12-05 12:14:09.929262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.041 qpair failed and we were unable to recover it. 00:30:36.041 [2024-12-05 12:14:09.929549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.041 [2024-12-05 12:14:09.929582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.041 qpair failed and we were unable to recover it. 00:30:36.041 [2024-12-05 12:14:09.929862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.042 [2024-12-05 12:14:09.929893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.042 qpair failed and we were unable to recover it. 00:30:36.042 [2024-12-05 12:14:09.930145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.042 [2024-12-05 12:14:09.930179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.042 qpair failed and we were unable to recover it. 00:30:36.042 [2024-12-05 12:14:09.930388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.042 [2024-12-05 12:14:09.930422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.042 qpair failed and we were unable to recover it. 
00:30:36.042 [2024-12-05 12:14:09.930602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.042 [2024-12-05 12:14:09.930640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.042 qpair failed and we were unable to recover it. 00:30:36.042 [2024-12-05 12:14:09.930870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.042 [2024-12-05 12:14:09.930903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.042 qpair failed and we were unable to recover it. 00:30:36.042 [2024-12-05 12:14:09.931130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.042 [2024-12-05 12:14:09.931163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.042 qpair failed and we were unable to recover it. 00:30:36.042 [2024-12-05 12:14:09.931442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.042 [2024-12-05 12:14:09.931478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.042 qpair failed and we were unable to recover it. 00:30:36.042 [2024-12-05 12:14:09.931734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.042 [2024-12-05 12:14:09.931765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.042 qpair failed and we were unable to recover it. 
00:30:36.043 [2024-12-05 12:14:09.941554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.043 [2024-12-05 12:14:09.941588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.043 qpair failed and we were unable to recover it.
00:30:36.043 [2024-12-05 12:14:09.941949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.043 [2024-12-05 12:14:09.942025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.043 qpair failed and we were unable to recover it.
00:30:36.043 [2024-12-05 12:14:09.942255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.043 [2024-12-05 12:14:09.942291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.043 qpair failed and we were unable to recover it.
00:30:36.043 [2024-12-05 12:14:09.942438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.043 [2024-12-05 12:14:09.942474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.043 qpair failed and we were unable to recover it.
00:30:36.043 [2024-12-05 12:14:09.942750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.043 [2024-12-05 12:14:09.942782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.043 qpair failed and we were unable to recover it.
00:30:36.046 [2024-12-05 12:14:09.961028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.046 [2024-12-05 12:14:09.961060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.046 qpair failed and we were unable to recover it. 00:30:36.046 [2024-12-05 12:14:09.961349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.046 [2024-12-05 12:14:09.961393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.046 qpair failed and we were unable to recover it. 00:30:36.046 [2024-12-05 12:14:09.961557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.046 [2024-12-05 12:14:09.961591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.046 qpair failed and we were unable to recover it. 00:30:36.046 [2024-12-05 12:14:09.961865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.046 [2024-12-05 12:14:09.961899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.046 qpair failed and we were unable to recover it. 00:30:36.046 [2024-12-05 12:14:09.962050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.046 [2024-12-05 12:14:09.962083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.046 qpair failed and we were unable to recover it. 
00:30:36.046 [2024-12-05 12:14:09.962336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.046 [2024-12-05 12:14:09.962381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.046 qpair failed and we were unable to recover it. 00:30:36.046 [2024-12-05 12:14:09.962575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.046 [2024-12-05 12:14:09.962610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.046 qpair failed and we were unable to recover it. 00:30:36.046 [2024-12-05 12:14:09.962879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.046 [2024-12-05 12:14:09.962912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.046 qpair failed and we were unable to recover it. 00:30:36.046 [2024-12-05 12:14:09.963108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.046 [2024-12-05 12:14:09.963142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.046 qpair failed and we were unable to recover it. 00:30:36.046 [2024-12-05 12:14:09.963440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.046 [2024-12-05 12:14:09.963475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.046 qpair failed and we were unable to recover it. 
00:30:36.046 [2024-12-05 12:14:09.963734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.046 [2024-12-05 12:14:09.963766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.046 qpair failed and we were unable to recover it. 00:30:36.046 [2024-12-05 12:14:09.964026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.046 [2024-12-05 12:14:09.964059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.046 qpair failed and we were unable to recover it. 00:30:36.046 [2024-12-05 12:14:09.964241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.046 [2024-12-05 12:14:09.964274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.046 qpair failed and we were unable to recover it. 00:30:36.046 [2024-12-05 12:14:09.964471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.046 [2024-12-05 12:14:09.964505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.046 qpair failed and we were unable to recover it. 00:30:36.046 [2024-12-05 12:14:09.964761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.046 [2024-12-05 12:14:09.964794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.046 qpair failed and we were unable to recover it. 
00:30:36.046 [2024-12-05 12:14:09.965019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.046 [2024-12-05 12:14:09.965051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.046 qpair failed and we were unable to recover it. 00:30:36.046 [2024-12-05 12:14:09.965239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.046 [2024-12-05 12:14:09.965273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.046 qpair failed and we were unable to recover it. 00:30:36.046 [2024-12-05 12:14:09.965483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.046 [2024-12-05 12:14:09.965517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.046 qpair failed and we were unable to recover it. 00:30:36.046 [2024-12-05 12:14:09.965720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.046 [2024-12-05 12:14:09.965754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.046 qpair failed and we were unable to recover it. 00:30:36.046 [2024-12-05 12:14:09.966029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.046 [2024-12-05 12:14:09.966061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.046 qpair failed and we were unable to recover it. 
00:30:36.046 [2024-12-05 12:14:09.966244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.046 [2024-12-05 12:14:09.966284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.046 qpair failed and we were unable to recover it. 00:30:36.046 [2024-12-05 12:14:09.966554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.046 [2024-12-05 12:14:09.966589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.046 qpair failed and we were unable to recover it. 00:30:36.046 [2024-12-05 12:14:09.966735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.047 [2024-12-05 12:14:09.966769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.047 qpair failed and we were unable to recover it. 00:30:36.047 [2024-12-05 12:14:09.967022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.047 [2024-12-05 12:14:09.967057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.047 qpair failed and we were unable to recover it. 00:30:36.047 [2024-12-05 12:14:09.967262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.047 [2024-12-05 12:14:09.967297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.047 qpair failed and we were unable to recover it. 
00:30:36.047 [2024-12-05 12:14:09.967552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.047 [2024-12-05 12:14:09.967587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.047 qpair failed and we were unable to recover it. 00:30:36.047 [2024-12-05 12:14:09.967731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.047 [2024-12-05 12:14:09.967764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.047 qpair failed and we were unable to recover it. 00:30:36.047 [2024-12-05 12:14:09.967945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.047 [2024-12-05 12:14:09.967978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.047 qpair failed and we were unable to recover it. 00:30:36.047 [2024-12-05 12:14:09.968160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.047 [2024-12-05 12:14:09.968194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.047 qpair failed and we were unable to recover it. 00:30:36.047 [2024-12-05 12:14:09.968492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.047 [2024-12-05 12:14:09.968528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.047 qpair failed and we were unable to recover it. 
00:30:36.047 [2024-12-05 12:14:09.968732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.047 [2024-12-05 12:14:09.968764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.047 qpair failed and we were unable to recover it. 00:30:36.047 [2024-12-05 12:14:09.969046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.047 [2024-12-05 12:14:09.969080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.047 qpair failed and we were unable to recover it. 00:30:36.047 [2024-12-05 12:14:09.969286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.047 [2024-12-05 12:14:09.969320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.047 qpair failed and we were unable to recover it. 00:30:36.047 [2024-12-05 12:14:09.969636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.047 [2024-12-05 12:14:09.969670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.047 qpair failed and we were unable to recover it. 00:30:36.047 [2024-12-05 12:14:09.969919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.047 [2024-12-05 12:14:09.969967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.047 qpair failed and we were unable to recover it. 
00:30:36.047 [2024-12-05 12:14:09.970249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.047 [2024-12-05 12:14:09.970284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.047 qpair failed and we were unable to recover it. 00:30:36.047 [2024-12-05 12:14:09.970567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.047 [2024-12-05 12:14:09.970607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.047 qpair failed and we were unable to recover it. 00:30:36.047 [2024-12-05 12:14:09.970815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.047 [2024-12-05 12:14:09.970852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.047 qpair failed and we were unable to recover it. 00:30:36.047 [2024-12-05 12:14:09.971049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.047 [2024-12-05 12:14:09.971083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.047 qpair failed and we were unable to recover it. 00:30:36.047 [2024-12-05 12:14:09.971342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.047 [2024-12-05 12:14:09.971392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.047 qpair failed and we were unable to recover it. 
00:30:36.047 [2024-12-05 12:14:09.971681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.047 [2024-12-05 12:14:09.971717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.047 qpair failed and we were unable to recover it. 00:30:36.047 [2024-12-05 12:14:09.971982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.047 [2024-12-05 12:14:09.972015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.047 qpair failed and we were unable to recover it. 00:30:36.047 [2024-12-05 12:14:09.972210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.047 [2024-12-05 12:14:09.972245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.047 qpair failed and we were unable to recover it. 00:30:36.047 [2024-12-05 12:14:09.972525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.047 [2024-12-05 12:14:09.972561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.047 qpair failed and we were unable to recover it. 00:30:36.047 [2024-12-05 12:14:09.972836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.047 [2024-12-05 12:14:09.972872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.047 qpair failed and we were unable to recover it. 
00:30:36.047 [2024-12-05 12:14:09.973158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.047 [2024-12-05 12:14:09.973194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.047 qpair failed and we were unable to recover it. 00:30:36.047 [2024-12-05 12:14:09.973489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.047 [2024-12-05 12:14:09.973525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.047 qpair failed and we were unable to recover it. 00:30:36.047 [2024-12-05 12:14:09.973791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.047 [2024-12-05 12:14:09.973839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.047 qpair failed and we were unable to recover it. 00:30:36.047 [2024-12-05 12:14:09.974135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.047 [2024-12-05 12:14:09.974170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.047 qpair failed and we were unable to recover it. 00:30:36.047 [2024-12-05 12:14:09.974386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.047 [2024-12-05 12:14:09.974424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.047 qpair failed and we were unable to recover it. 
00:30:36.047 [2024-12-05 12:14:09.974686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.047 [2024-12-05 12:14:09.974729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.047 qpair failed and we were unable to recover it. 00:30:36.047 [2024-12-05 12:14:09.974964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.047 [2024-12-05 12:14:09.974997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.047 qpair failed and we were unable to recover it. 00:30:36.047 [2024-12-05 12:14:09.975261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.048 [2024-12-05 12:14:09.975297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.048 qpair failed and we were unable to recover it. 00:30:36.048 [2024-12-05 12:14:09.975501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.048 [2024-12-05 12:14:09.975544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.048 qpair failed and we were unable to recover it. 00:30:36.048 [2024-12-05 12:14:09.975745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.048 [2024-12-05 12:14:09.975777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.048 qpair failed and we were unable to recover it. 
00:30:36.048 [2024-12-05 12:14:09.975996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.048 [2024-12-05 12:14:09.976030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.048 qpair failed and we were unable to recover it. 00:30:36.048 [2024-12-05 12:14:09.976309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.048 [2024-12-05 12:14:09.976349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.048 qpair failed and we were unable to recover it. 00:30:36.048 [2024-12-05 12:14:09.976570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.048 [2024-12-05 12:14:09.976605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.048 qpair failed and we were unable to recover it. 00:30:36.048 [2024-12-05 12:14:09.976743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.048 [2024-12-05 12:14:09.976777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.048 qpair failed and we were unable to recover it. 00:30:36.048 [2024-12-05 12:14:09.977034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.048 [2024-12-05 12:14:09.977074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.048 qpair failed and we were unable to recover it. 
00:30:36.048 [2024-12-05 12:14:09.977331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.048 [2024-12-05 12:14:09.977375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.048 qpair failed and we were unable to recover it. 00:30:36.048 [2024-12-05 12:14:09.977593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.048 [2024-12-05 12:14:09.977631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.048 qpair failed and we were unable to recover it. 00:30:36.048 [2024-12-05 12:14:09.977761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.048 [2024-12-05 12:14:09.977795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.048 qpair failed and we were unable to recover it. 00:30:36.048 [2024-12-05 12:14:09.977937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.048 [2024-12-05 12:14:09.977972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.048 qpair failed and we were unable to recover it. 00:30:36.048 [2024-12-05 12:14:09.978245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.048 [2024-12-05 12:14:09.978281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.048 qpair failed and we were unable to recover it. 
00:30:36.048 [2024-12-05 12:14:09.978538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.048 [2024-12-05 12:14:09.978572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.048 qpair failed and we were unable to recover it. 00:30:36.048 [2024-12-05 12:14:09.978692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.048 [2024-12-05 12:14:09.978726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.048 qpair failed and we were unable to recover it. 00:30:36.048 [2024-12-05 12:14:09.979002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.048 [2024-12-05 12:14:09.979033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.048 qpair failed and we were unable to recover it. 00:30:36.048 [2024-12-05 12:14:09.979234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.048 [2024-12-05 12:14:09.979272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.048 qpair failed and we were unable to recover it. 00:30:36.048 [2024-12-05 12:14:09.979560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.048 [2024-12-05 12:14:09.979597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.048 qpair failed and we were unable to recover it. 
00:30:36.048 [2024-12-05 12:14:09.979798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.048 [2024-12-05 12:14:09.979834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.048 qpair failed and we were unable to recover it. 00:30:36.048 [2024-12-05 12:14:09.980031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.048 [2024-12-05 12:14:09.980072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.048 qpair failed and we were unable to recover it. 00:30:36.048 [2024-12-05 12:14:09.980204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.048 [2024-12-05 12:14:09.980237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.048 qpair failed and we were unable to recover it. 00:30:36.048 [2024-12-05 12:14:09.980419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.048 [2024-12-05 12:14:09.980453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.048 qpair failed and we were unable to recover it. 00:30:36.048 [2024-12-05 12:14:09.980670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.048 [2024-12-05 12:14:09.980708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.048 qpair failed and we were unable to recover it. 
00:30:36.048 [2024-12-05 12:14:09.980965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.048 [2024-12-05 12:14:09.980998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.048 qpair failed and we were unable to recover it.
00:30:36.048 [2024-12-05 12:14:09.981123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.048 [2024-12-05 12:14:09.981156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.048 qpair failed and we were unable to recover it.
00:30:36.048 [2024-12-05 12:14:09.981336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.048 [2024-12-05 12:14:09.981380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.048 qpair failed and we were unable to recover it.
00:30:36.048 [2024-12-05 12:14:09.981588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.048 [2024-12-05 12:14:09.981621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.048 qpair failed and we were unable to recover it.
00:30:36.048 [2024-12-05 12:14:09.981866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.048 [2024-12-05 12:14:09.981898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.048 qpair failed and we were unable to recover it.
00:30:36.048 [2024-12-05 12:14:09.982151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.048 [2024-12-05 12:14:09.982183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.048 qpair failed and we were unable to recover it.
00:30:36.049 [2024-12-05 12:14:09.982309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.049 [2024-12-05 12:14:09.982341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.049 qpair failed and we were unable to recover it.
00:30:36.049 [2024-12-05 12:14:09.982631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.049 [2024-12-05 12:14:09.982665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.049 qpair failed and we were unable to recover it.
00:30:36.049 [2024-12-05 12:14:09.982851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.049 [2024-12-05 12:14:09.982884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.049 qpair failed and we were unable to recover it.
00:30:36.049 [2024-12-05 12:14:09.983151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.049 [2024-12-05 12:14:09.983184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.049 qpair failed and we were unable to recover it.
00:30:36.049 [2024-12-05 12:14:09.983309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.049 [2024-12-05 12:14:09.983340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.049 qpair failed and we were unable to recover it.
00:30:36.049 [2024-12-05 12:14:09.983571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.049 [2024-12-05 12:14:09.983605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.049 qpair failed and we were unable to recover it.
00:30:36.049 [2024-12-05 12:14:09.983809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.049 [2024-12-05 12:14:09.983841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.049 qpair failed and we were unable to recover it.
00:30:36.049 [2024-12-05 12:14:09.984107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.049 [2024-12-05 12:14:09.984140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.049 qpair failed and we were unable to recover it.
00:30:36.049 [2024-12-05 12:14:09.984324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.049 [2024-12-05 12:14:09.984356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.049 qpair failed and we were unable to recover it.
00:30:36.049 [2024-12-05 12:14:09.984575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.049 [2024-12-05 12:14:09.984608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.049 qpair failed and we were unable to recover it.
00:30:36.049 [2024-12-05 12:14:09.984862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.049 [2024-12-05 12:14:09.984895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.049 qpair failed and we were unable to recover it.
00:30:36.049 [2024-12-05 12:14:09.985151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.049 [2024-12-05 12:14:09.985182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.049 qpair failed and we were unable to recover it.
00:30:36.049 [2024-12-05 12:14:09.985305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.049 [2024-12-05 12:14:09.985338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.049 qpair failed and we were unable to recover it.
00:30:36.049 [2024-12-05 12:14:09.985548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.049 [2024-12-05 12:14:09.985585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.049 qpair failed and we were unable to recover it.
00:30:36.049 [2024-12-05 12:14:09.985770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.049 [2024-12-05 12:14:09.985803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.049 qpair failed and we were unable to recover it.
00:30:36.049 [2024-12-05 12:14:09.986084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.049 [2024-12-05 12:14:09.986116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.049 qpair failed and we were unable to recover it.
00:30:36.049 [2024-12-05 12:14:09.986320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.049 [2024-12-05 12:14:09.986352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.049 qpair failed and we were unable to recover it.
00:30:36.049 [2024-12-05 12:14:09.986559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.049 [2024-12-05 12:14:09.986592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.049 qpair failed and we were unable to recover it.
00:30:36.049 [2024-12-05 12:14:09.986845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.049 [2024-12-05 12:14:09.986877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.049 qpair failed and we were unable to recover it.
00:30:36.049 [2024-12-05 12:14:09.987092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.049 [2024-12-05 12:14:09.987124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.049 qpair failed and we were unable to recover it.
00:30:36.049 [2024-12-05 12:14:09.987319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.049 [2024-12-05 12:14:09.987357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.049 qpair failed and we were unable to recover it.
00:30:36.049 [2024-12-05 12:14:09.987520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.049 [2024-12-05 12:14:09.987552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.049 qpair failed and we were unable to recover it.
00:30:36.049 [2024-12-05 12:14:09.987816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.049 [2024-12-05 12:14:09.987848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.049 qpair failed and we were unable to recover it.
00:30:36.049 [2024-12-05 12:14:09.988042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.049 [2024-12-05 12:14:09.988074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.050 qpair failed and we were unable to recover it.
00:30:36.050 [2024-12-05 12:14:09.988354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.050 [2024-12-05 12:14:09.988397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.050 qpair failed and we were unable to recover it.
00:30:36.050 [2024-12-05 12:14:09.988541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.050 [2024-12-05 12:14:09.988573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.050 qpair failed and we were unable to recover it.
00:30:36.050 [2024-12-05 12:14:09.988824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.050 [2024-12-05 12:14:09.988856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.050 qpair failed and we were unable to recover it.
00:30:36.050 [2024-12-05 12:14:09.989062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.050 [2024-12-05 12:14:09.989094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.050 qpair failed and we were unable to recover it.
00:30:36.050 [2024-12-05 12:14:09.989377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.050 [2024-12-05 12:14:09.989411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.050 qpair failed and we were unable to recover it.
00:30:36.050 [2024-12-05 12:14:09.989616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.050 [2024-12-05 12:14:09.989649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.050 qpair failed and we were unable to recover it.
00:30:36.050 [2024-12-05 12:14:09.989864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.050 [2024-12-05 12:14:09.989897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.050 qpair failed and we were unable to recover it.
00:30:36.050 [2024-12-05 12:14:09.990098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.050 [2024-12-05 12:14:09.990131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.050 qpair failed and we were unable to recover it.
00:30:36.050 [2024-12-05 12:14:09.990387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.050 [2024-12-05 12:14:09.990422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.050 qpair failed and we were unable to recover it.
00:30:36.050 [2024-12-05 12:14:09.990627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.050 [2024-12-05 12:14:09.990660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.050 qpair failed and we were unable to recover it.
00:30:36.050 [2024-12-05 12:14:09.990860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.050 [2024-12-05 12:14:09.990892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.050 qpair failed and we were unable to recover it.
00:30:36.050 [2024-12-05 12:14:09.991167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.050 [2024-12-05 12:14:09.991200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.050 qpair failed and we were unable to recover it.
00:30:36.050 [2024-12-05 12:14:09.991340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.050 [2024-12-05 12:14:09.991383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.050 qpair failed and we were unable to recover it.
00:30:36.050 [2024-12-05 12:14:09.991639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.050 [2024-12-05 12:14:09.991672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.050 qpair failed and we were unable to recover it.
00:30:36.050 [2024-12-05 12:14:09.991804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.050 [2024-12-05 12:14:09.991837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.050 qpair failed and we were unable to recover it.
00:30:36.050 [2024-12-05 12:14:09.992109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.050 [2024-12-05 12:14:09.992141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.050 qpair failed and we were unable to recover it.
00:30:36.050 [2024-12-05 12:14:09.992421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.050 [2024-12-05 12:14:09.992455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.050 qpair failed and we were unable to recover it.
00:30:36.050 [2024-12-05 12:14:09.992740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.050 [2024-12-05 12:14:09.992772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.050 qpair failed and we were unable to recover it.
00:30:36.050 [2024-12-05 12:14:09.993048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.050 [2024-12-05 12:14:09.993080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.050 qpair failed and we were unable to recover it.
00:30:36.050 [2024-12-05 12:14:09.993387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.050 [2024-12-05 12:14:09.993420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.050 qpair failed and we were unable to recover it.
00:30:36.050 [2024-12-05 12:14:09.993580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.050 [2024-12-05 12:14:09.993615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.050 qpair failed and we were unable to recover it.
00:30:36.050 [2024-12-05 12:14:09.993879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.050 [2024-12-05 12:14:09.993911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.050 qpair failed and we were unable to recover it.
00:30:36.050 [2024-12-05 12:14:09.994176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.050 [2024-12-05 12:14:09.994208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.050 qpair failed and we were unable to recover it.
00:30:36.050 [2024-12-05 12:14:09.994421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.050 [2024-12-05 12:14:09.994455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.050 qpair failed and we were unable to recover it.
00:30:36.050 [2024-12-05 12:14:09.994730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.050 [2024-12-05 12:14:09.994762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.050 qpair failed and we were unable to recover it.
00:30:36.050 [2024-12-05 12:14:09.995068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.050 [2024-12-05 12:14:09.995103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.050 qpair failed and we were unable to recover it.
00:30:36.050 [2024-12-05 12:14:09.995357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.050 [2024-12-05 12:14:09.995408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.050 qpair failed and we were unable to recover it.
00:30:36.050 [2024-12-05 12:14:09.995619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.051 [2024-12-05 12:14:09.995651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.051 qpair failed and we were unable to recover it.
00:30:36.051 [2024-12-05 12:14:09.995778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.051 [2024-12-05 12:14:09.995810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.051 qpair failed and we were unable to recover it.
00:30:36.051 [2024-12-05 12:14:09.996085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.051 [2024-12-05 12:14:09.996120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.051 qpair failed and we were unable to recover it.
00:30:36.051 [2024-12-05 12:14:09.996395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.051 [2024-12-05 12:14:09.996431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.051 qpair failed and we were unable to recover it.
00:30:36.051 [2024-12-05 12:14:09.996704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.051 [2024-12-05 12:14:09.996736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.051 qpair failed and we were unable to recover it.
00:30:36.051 [2024-12-05 12:14:09.996934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.051 [2024-12-05 12:14:09.996966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.051 qpair failed and we were unable to recover it.
00:30:36.051 [2024-12-05 12:14:09.997196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.051 [2024-12-05 12:14:09.997229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.051 qpair failed and we were unable to recover it.
00:30:36.051 [2024-12-05 12:14:09.997425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.051 [2024-12-05 12:14:09.997456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.051 qpair failed and we were unable to recover it.
00:30:36.051 [2024-12-05 12:14:09.997663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.051 [2024-12-05 12:14:09.997696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.051 qpair failed and we were unable to recover it.
00:30:36.051 [2024-12-05 12:14:09.997893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.051 [2024-12-05 12:14:09.997931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.051 qpair failed and we were unable to recover it.
00:30:36.051 [2024-12-05 12:14:09.998211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.051 [2024-12-05 12:14:09.998242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.051 qpair failed and we were unable to recover it.
00:30:36.051 [2024-12-05 12:14:09.998388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.051 [2024-12-05 12:14:09.998422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.051 qpair failed and we were unable to recover it.
00:30:36.051 [2024-12-05 12:14:09.998559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.051 [2024-12-05 12:14:09.998591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.051 qpair failed and we were unable to recover it.
00:30:36.051 [2024-12-05 12:14:09.998787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.051 [2024-12-05 12:14:09.998818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.051 qpair failed and we were unable to recover it.
00:30:36.051 [2024-12-05 12:14:09.999014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.051 [2024-12-05 12:14:09.999046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.051 qpair failed and we were unable to recover it.
00:30:36.051 [2024-12-05 12:14:09.999249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.051 [2024-12-05 12:14:09.999281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.051 qpair failed and we were unable to recover it.
00:30:36.051 [2024-12-05 12:14:09.999487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.051 [2024-12-05 12:14:09.999519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.051 qpair failed and we were unable to recover it.
00:30:36.051 [2024-12-05 12:14:09.999820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.051 [2024-12-05 12:14:09.999852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.051 qpair failed and we were unable to recover it.
00:30:36.051 [2024-12-05 12:14:10.000049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.051 [2024-12-05 12:14:10.000087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.051 qpair failed and we were unable to recover it.
00:30:36.051 [2024-12-05 12:14:10.000396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.051 [2024-12-05 12:14:10.000435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.051 qpair failed and we were unable to recover it.
00:30:36.051 [2024-12-05 12:14:10.000663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.051 [2024-12-05 12:14:10.000703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.051 qpair failed and we were unable to recover it.
00:30:36.051 [2024-12-05 12:14:10.001635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.051 [2024-12-05 12:14:10.001700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.051 qpair failed and we were unable to recover it.
00:30:36.051 [2024-12-05 12:14:10.001913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.051 [2024-12-05 12:14:10.001947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.051 qpair failed and we were unable to recover it.
00:30:36.051 [2024-12-05 12:14:10.002163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.051 [2024-12-05 12:14:10.002206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.051 qpair failed and we were unable to recover it.
00:30:36.051 [2024-12-05 12:14:10.002438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.051 [2024-12-05 12:14:10.002485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.051 qpair failed and we were unable to recover it.
00:30:36.051 [2024-12-05 12:14:10.002767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.051 [2024-12-05 12:14:10.002808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.051 qpair failed and we were unable to recover it.
00:30:36.051 [2024-12-05 12:14:10.003107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.051 [2024-12-05 12:14:10.003144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.051 qpair failed and we were unable to recover it.
00:30:36.051 [2024-12-05 12:14:10.003355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.051 [2024-12-05 12:14:10.003410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.051 qpair failed and we were unable to recover it.
00:30:36.051 [2024-12-05 12:14:10.003683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.051 [2024-12-05 12:14:10.003724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.051 qpair failed and we were unable to recover it.
00:30:36.052 [2024-12-05 12:14:10.003891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.052 [2024-12-05 12:14:10.003936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.052 qpair failed and we were unable to recover it.
00:30:36.052 [2024-12-05 12:14:10.004113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.052 [2024-12-05 12:14:10.004154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.052 qpair failed and we were unable to recover it.
00:30:36.052 [2024-12-05 12:14:10.004393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.052 [2024-12-05 12:14:10.004438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.052 qpair failed and we were unable to recover it.
00:30:36.052 [2024-12-05 12:14:10.004683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.052 [2024-12-05 12:14:10.004722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.052 qpair failed and we were unable to recover it.
00:30:36.052 [2024-12-05 12:14:10.004993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.052 [2024-12-05 12:14:10.005034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.052 qpair failed and we were unable to recover it.
00:30:36.052 [2024-12-05 12:14:10.005251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.052 [2024-12-05 12:14:10.005288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.052 qpair failed and we were unable to recover it.
00:30:36.052 [2024-12-05 12:14:10.005517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.052 [2024-12-05 12:14:10.005559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.052 qpair failed and we were unable to recover it.
00:30:36.052 [2024-12-05 12:14:10.005739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.052 [2024-12-05 12:14:10.005781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.052 qpair failed and we were unable to recover it.
00:30:36.052 [2024-12-05 12:14:10.006047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.052 [2024-12-05 12:14:10.006091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.052 qpair failed and we were unable to recover it.
00:30:36.052 [2024-12-05 12:14:10.006313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.052 [2024-12-05 12:14:10.006354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.052 qpair failed and we were unable to recover it.
00:30:36.052 [2024-12-05 12:14:10.006554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.052 [2024-12-05 12:14:10.006591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.052 qpair failed and we were unable to recover it.
00:30:36.052 [2024-12-05 12:14:10.006795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.052 [2024-12-05 12:14:10.006828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.052 qpair failed and we were unable to recover it.
00:30:36.052 [2024-12-05 12:14:10.007073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.052 [2024-12-05 12:14:10.007110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.052 qpair failed and we were unable to recover it.
00:30:36.052 [2024-12-05 12:14:10.007326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.052 [2024-12-05 12:14:10.007360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.052 qpair failed and we were unable to recover it.
00:30:36.052 [2024-12-05 12:14:10.007528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.052 [2024-12-05 12:14:10.007561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.052 qpair failed and we were unable to recover it.
00:30:36.052 [2024-12-05 12:14:10.007726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.052 [2024-12-05 12:14:10.007759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.052 qpair failed and we were unable to recover it.
00:30:36.052 [2024-12-05 12:14:10.007888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.052 [2024-12-05 12:14:10.007920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.052 qpair failed and we were unable to recover it.
00:30:36.052 [2024-12-05 12:14:10.008047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.052 [2024-12-05 12:14:10.008081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.052 qpair failed and we were unable to recover it.
00:30:36.052 [2024-12-05 12:14:10.008274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.052 [2024-12-05 12:14:10.008306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.052 qpair failed and we were unable to recover it.
00:30:36.052 [2024-12-05 12:14:10.008559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.052 [2024-12-05 12:14:10.008595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.052 qpair failed and we were unable to recover it.
00:30:36.052 [2024-12-05 12:14:10.008829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.052 [2024-12-05 12:14:10.008874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.052 qpair failed and we were unable to recover it.
00:30:36.052 [2024-12-05 12:14:10.009073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.052 [2024-12-05 12:14:10.009109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.052 qpair failed and we were unable to recover it.
00:30:36.052 [2024-12-05 12:14:10.009315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.052 [2024-12-05 12:14:10.009347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.052 qpair failed and we were unable to recover it.
00:30:36.052 [2024-12-05 12:14:10.009564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.052 [2024-12-05 12:14:10.009598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.052 qpair failed and we were unable to recover it.
00:30:36.052 [2024-12-05 12:14:10.009741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.052 [2024-12-05 12:14:10.009776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.052 qpair failed and we were unable to recover it.
00:30:36.052 [2024-12-05 12:14:10.009930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.052 [2024-12-05 12:14:10.009963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.052 qpair failed and we were unable to recover it.
00:30:36.052 [2024-12-05 12:14:10.010165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.052 [2024-12-05 12:14:10.010198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.052 qpair failed and we were unable to recover it.
00:30:36.052 [2024-12-05 12:14:10.010398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.052 [2024-12-05 12:14:10.010434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.052 qpair failed and we were unable to recover it.
00:30:36.052 [2024-12-05 12:14:10.010693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.052 [2024-12-05 12:14:10.010725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.052 qpair failed and we were unable to recover it.
00:30:36.052 [2024-12-05 12:14:10.010882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.053 [2024-12-05 12:14:10.010915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.053 qpair failed and we were unable to recover it. 00:30:36.053 [2024-12-05 12:14:10.011228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.053 [2024-12-05 12:14:10.011262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.053 qpair failed and we were unable to recover it. 00:30:36.053 [2024-12-05 12:14:10.011553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.053 [2024-12-05 12:14:10.011588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.053 qpair failed and we were unable to recover it. 00:30:36.053 [2024-12-05 12:14:10.011863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.053 [2024-12-05 12:14:10.011898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.053 qpair failed and we were unable to recover it. 00:30:36.053 [2024-12-05 12:14:10.012228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.053 [2024-12-05 12:14:10.012261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.053 qpair failed and we were unable to recover it. 
00:30:36.053 [2024-12-05 12:14:10.012491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.053 [2024-12-05 12:14:10.012528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.053 qpair failed and we were unable to recover it. 00:30:36.053 [2024-12-05 12:14:10.012749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.053 [2024-12-05 12:14:10.012783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.053 qpair failed and we were unable to recover it. 00:30:36.053 [2024-12-05 12:14:10.012988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.053 [2024-12-05 12:14:10.013021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.053 qpair failed and we were unable to recover it. 00:30:36.053 [2024-12-05 12:14:10.013216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.053 [2024-12-05 12:14:10.013249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.053 qpair failed and we were unable to recover it. 00:30:36.053 [2024-12-05 12:14:10.013472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.053 [2024-12-05 12:14:10.013507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.053 qpair failed and we were unable to recover it. 
00:30:36.053 [2024-12-05 12:14:10.013709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.053 [2024-12-05 12:14:10.013745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.053 qpair failed and we were unable to recover it. 00:30:36.053 [2024-12-05 12:14:10.013894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.053 [2024-12-05 12:14:10.013926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.053 qpair failed and we were unable to recover it. 00:30:36.053 [2024-12-05 12:14:10.014124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.053 [2024-12-05 12:14:10.014160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.053 qpair failed and we were unable to recover it. 00:30:36.053 [2024-12-05 12:14:10.014344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.053 [2024-12-05 12:14:10.014385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.053 qpair failed and we were unable to recover it. 00:30:36.053 [2024-12-05 12:14:10.014666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.053 [2024-12-05 12:14:10.014699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.053 qpair failed and we were unable to recover it. 
00:30:36.053 [2024-12-05 12:14:10.014850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.053 [2024-12-05 12:14:10.014883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.053 qpair failed and we were unable to recover it. 00:30:36.053 [2024-12-05 12:14:10.015207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.053 [2024-12-05 12:14:10.015240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.053 qpair failed and we were unable to recover it. 00:30:36.053 [2024-12-05 12:14:10.015522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.053 [2024-12-05 12:14:10.015557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.053 qpair failed and we were unable to recover it. 00:30:36.053 [2024-12-05 12:14:10.015787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.053 [2024-12-05 12:14:10.015820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.053 qpair failed and we were unable to recover it. 00:30:36.053 [2024-12-05 12:14:10.016045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.053 [2024-12-05 12:14:10.016079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.053 qpair failed and we were unable to recover it. 
00:30:36.053 [2024-12-05 12:14:10.016277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.053 [2024-12-05 12:14:10.016310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.053 qpair failed and we were unable to recover it. 00:30:36.053 [2024-12-05 12:14:10.016536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.053 [2024-12-05 12:14:10.016570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.053 qpair failed and we were unable to recover it. 00:30:36.053 [2024-12-05 12:14:10.016772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.053 [2024-12-05 12:14:10.016806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.053 qpair failed and we were unable to recover it. 00:30:36.053 [2024-12-05 12:14:10.017091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.053 [2024-12-05 12:14:10.017126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.053 qpair failed and we were unable to recover it. 00:30:36.053 [2024-12-05 12:14:10.017403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.053 [2024-12-05 12:14:10.017437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.053 qpair failed and we were unable to recover it. 
00:30:36.053 [2024-12-05 12:14:10.017606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.053 [2024-12-05 12:14:10.017640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.053 qpair failed and we were unable to recover it. 00:30:36.053 [2024-12-05 12:14:10.017774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.053 [2024-12-05 12:14:10.017808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.053 qpair failed and we were unable to recover it. 00:30:36.053 [2024-12-05 12:14:10.018078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.053 [2024-12-05 12:14:10.018113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.053 qpair failed and we were unable to recover it. 00:30:36.053 [2024-12-05 12:14:10.018396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.053 [2024-12-05 12:14:10.018431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.053 qpair failed and we were unable to recover it. 00:30:36.053 [2024-12-05 12:14:10.018587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.053 [2024-12-05 12:14:10.018621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.053 qpair failed and we were unable to recover it. 
00:30:36.053 [2024-12-05 12:14:10.018884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.054 [2024-12-05 12:14:10.018925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.054 qpair failed and we were unable to recover it. 00:30:36.054 [2024-12-05 12:14:10.019145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.054 [2024-12-05 12:14:10.019197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.054 qpair failed and we were unable to recover it. 00:30:36.054 [2024-12-05 12:14:10.019520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.054 [2024-12-05 12:14:10.019573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.054 qpair failed and we were unable to recover it. 00:30:36.054 [2024-12-05 12:14:10.019768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.054 [2024-12-05 12:14:10.019816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.054 qpair failed and we were unable to recover it. 00:30:36.054 [2024-12-05 12:14:10.020078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.054 [2024-12-05 12:14:10.020120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.054 qpair failed and we were unable to recover it. 
00:30:36.054 [2024-12-05 12:14:10.020326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.054 [2024-12-05 12:14:10.020384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.054 qpair failed and we were unable to recover it. 00:30:36.054 [2024-12-05 12:14:10.020620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.054 [2024-12-05 12:14:10.020664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.054 qpair failed and we were unable to recover it. 00:30:36.054 [2024-12-05 12:14:10.020874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.054 [2024-12-05 12:14:10.020907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.054 qpair failed and we were unable to recover it. 00:30:36.054 [2024-12-05 12:14:10.021161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.054 [2024-12-05 12:14:10.021195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.054 qpair failed and we were unable to recover it. 00:30:36.054 [2024-12-05 12:14:10.021405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.054 [2024-12-05 12:14:10.021442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.054 qpair failed and we were unable to recover it. 
00:30:36.054 [2024-12-05 12:14:10.021597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.054 [2024-12-05 12:14:10.021630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.054 qpair failed and we were unable to recover it. 00:30:36.054 [2024-12-05 12:14:10.021885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.054 [2024-12-05 12:14:10.021918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.054 qpair failed and we were unable to recover it. 00:30:36.054 [2024-12-05 12:14:10.022109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.054 [2024-12-05 12:14:10.022143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.054 qpair failed and we were unable to recover it. 00:30:36.054 [2024-12-05 12:14:10.022284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.054 [2024-12-05 12:14:10.022317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.054 qpair failed and we were unable to recover it. 00:30:36.054 [2024-12-05 12:14:10.022585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.054 [2024-12-05 12:14:10.022620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.054 qpair failed and we were unable to recover it. 
00:30:36.054 [2024-12-05 12:14:10.022761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.054 [2024-12-05 12:14:10.022794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.054 qpair failed and we were unable to recover it. 00:30:36.054 [2024-12-05 12:14:10.023007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.054 [2024-12-05 12:14:10.023039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.054 qpair failed and we were unable to recover it. 00:30:36.054 [2024-12-05 12:14:10.023302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.054 [2024-12-05 12:14:10.023335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.054 qpair failed and we were unable to recover it. 00:30:36.054 [2024-12-05 12:14:10.023505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.054 [2024-12-05 12:14:10.023540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.054 qpair failed and we were unable to recover it. 00:30:36.054 [2024-12-05 12:14:10.023741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.054 [2024-12-05 12:14:10.023774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.054 qpair failed and we were unable to recover it. 
00:30:36.054 [2024-12-05 12:14:10.023973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.054 [2024-12-05 12:14:10.024005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.054 qpair failed and we were unable to recover it. 00:30:36.054 [2024-12-05 12:14:10.024258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.054 [2024-12-05 12:14:10.024291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.054 qpair failed and we were unable to recover it. 00:30:36.054 [2024-12-05 12:14:10.024436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.054 [2024-12-05 12:14:10.024472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.054 qpair failed and we were unable to recover it. 00:30:36.054 [2024-12-05 12:14:10.024603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.054 [2024-12-05 12:14:10.024636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.054 qpair failed and we were unable to recover it. 00:30:36.054 [2024-12-05 12:14:10.024770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.054 [2024-12-05 12:14:10.024803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.054 qpair failed and we were unable to recover it. 
00:30:36.054 [2024-12-05 12:14:10.025095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.054 [2024-12-05 12:14:10.025129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.054 qpair failed and we were unable to recover it. 00:30:36.054 [2024-12-05 12:14:10.025393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.054 [2024-12-05 12:14:10.025428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.054 qpair failed and we were unable to recover it. 00:30:36.054 [2024-12-05 12:14:10.025691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.054 [2024-12-05 12:14:10.025724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.054 qpair failed and we were unable to recover it. 00:30:36.054 [2024-12-05 12:14:10.025881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.054 [2024-12-05 12:14:10.025914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.054 qpair failed and we were unable to recover it. 00:30:36.054 [2024-12-05 12:14:10.026126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.054 [2024-12-05 12:14:10.026160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.054 qpair failed and we were unable to recover it. 
00:30:36.054 [2024-12-05 12:14:10.026438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.055 [2024-12-05 12:14:10.026474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.055 qpair failed and we were unable to recover it. 00:30:36.055 [2024-12-05 12:14:10.026662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.055 [2024-12-05 12:14:10.026694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.055 qpair failed and we were unable to recover it. 00:30:36.055 [2024-12-05 12:14:10.026904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.055 [2024-12-05 12:14:10.026945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.055 qpair failed and we were unable to recover it. 00:30:36.055 [2024-12-05 12:14:10.027246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.055 [2024-12-05 12:14:10.027281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.055 qpair failed and we were unable to recover it. 00:30:36.055 [2024-12-05 12:14:10.027506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.055 [2024-12-05 12:14:10.027544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.055 qpair failed and we were unable to recover it. 
00:30:36.055 [2024-12-05 12:14:10.027718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.055 [2024-12-05 12:14:10.027757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.055 qpair failed and we were unable to recover it. 00:30:36.055 [2024-12-05 12:14:10.028018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.055 [2024-12-05 12:14:10.028060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.055 qpair failed and we were unable to recover it. 00:30:36.055 [2024-12-05 12:14:10.028349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.055 [2024-12-05 12:14:10.028415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.055 qpair failed and we were unable to recover it. 00:30:36.055 [2024-12-05 12:14:10.028633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.055 [2024-12-05 12:14:10.028668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.055 qpair failed and we were unable to recover it. 00:30:36.055 [2024-12-05 12:14:10.028860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.055 [2024-12-05 12:14:10.028897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.055 qpair failed and we were unable to recover it. 
00:30:36.055 [2024-12-05 12:14:10.029102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.055 [2024-12-05 12:14:10.029141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.055 qpair failed and we were unable to recover it. 00:30:36.055 [2024-12-05 12:14:10.029401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.055 [2024-12-05 12:14:10.029453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.055 qpair failed and we were unable to recover it. 00:30:36.055 [2024-12-05 12:14:10.029595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.055 [2024-12-05 12:14:10.029635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.055 qpair failed and we were unable to recover it. 00:30:36.055 [2024-12-05 12:14:10.029793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.055 [2024-12-05 12:14:10.029828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.055 qpair failed and we were unable to recover it. 00:30:36.055 [2024-12-05 12:14:10.029979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.055 [2024-12-05 12:14:10.030013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.055 qpair failed and we were unable to recover it. 
00:30:36.055 [2024-12-05 12:14:10.030251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.055 [2024-12-05 12:14:10.030293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.055 qpair failed and we were unable to recover it.
[the same three-entry error sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED) -> nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error -> "qpair failed and we were unable to recover it.") repeats continuously from 12:14:10.030251 through 12:14:10.055424, first for tqpair=0x7fc42c000b90 and, from 12:14:10.050630 onward, for tqpair=0xc4cbe0, all targeting addr=10.0.0.2, port=4420]
00:30:36.059 [2024-12-05 12:14:10.055546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.059 [2024-12-05 12:14:10.055579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.059 qpair failed and we were unable to recover it. 00:30:36.059 [2024-12-05 12:14:10.055773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.059 [2024-12-05 12:14:10.055806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.059 qpair failed and we were unable to recover it. 00:30:36.059 [2024-12-05 12:14:10.056000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.059 [2024-12-05 12:14:10.056032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.059 qpair failed and we were unable to recover it. 00:30:36.059 [2024-12-05 12:14:10.056283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.059 [2024-12-05 12:14:10.056314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.059 qpair failed and we were unable to recover it. 00:30:36.059 [2024-12-05 12:14:10.056508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.059 [2024-12-05 12:14:10.056542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.059 qpair failed and we were unable to recover it. 
00:30:36.059 [2024-12-05 12:14:10.056747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.059 [2024-12-05 12:14:10.056778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.059 qpair failed and we were unable to recover it. 00:30:36.059 [2024-12-05 12:14:10.056900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.059 [2024-12-05 12:14:10.056931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.059 qpair failed and we were unable to recover it. 00:30:36.059 [2024-12-05 12:14:10.057082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.060 [2024-12-05 12:14:10.057114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.060 qpair failed and we were unable to recover it. 00:30:36.060 [2024-12-05 12:14:10.057293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.060 [2024-12-05 12:14:10.057325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.060 qpair failed and we were unable to recover it. 00:30:36.060 [2024-12-05 12:14:10.057555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.060 [2024-12-05 12:14:10.057590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.060 qpair failed and we were unable to recover it. 
00:30:36.060 [2024-12-05 12:14:10.057784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.060 [2024-12-05 12:14:10.057816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.060 qpair failed and we were unable to recover it. 00:30:36.060 [2024-12-05 12:14:10.058018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.060 [2024-12-05 12:14:10.058050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.060 qpair failed and we were unable to recover it. 00:30:36.060 [2024-12-05 12:14:10.058232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.060 [2024-12-05 12:14:10.058264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.060 qpair failed and we were unable to recover it. 00:30:36.060 [2024-12-05 12:14:10.058387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.060 [2024-12-05 12:14:10.058421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.060 qpair failed and we were unable to recover it. 00:30:36.060 [2024-12-05 12:14:10.058603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.060 [2024-12-05 12:14:10.058635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.060 qpair failed and we were unable to recover it. 
00:30:36.060 [2024-12-05 12:14:10.058892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.060 [2024-12-05 12:14:10.058930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.060 qpair failed and we were unable to recover it. 00:30:36.060 [2024-12-05 12:14:10.059207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.060 [2024-12-05 12:14:10.059238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.060 qpair failed and we were unable to recover it. 00:30:36.060 [2024-12-05 12:14:10.059346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.060 [2024-12-05 12:14:10.059388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.060 qpair failed and we were unable to recover it. 00:30:36.060 [2024-12-05 12:14:10.059510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.060 [2024-12-05 12:14:10.059542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.060 qpair failed and we were unable to recover it. 00:30:36.060 [2024-12-05 12:14:10.059682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.060 [2024-12-05 12:14:10.059713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.060 qpair failed and we were unable to recover it. 
00:30:36.060 [2024-12-05 12:14:10.059901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.060 [2024-12-05 12:14:10.059933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.060 qpair failed and we were unable to recover it. 00:30:36.060 [2024-12-05 12:14:10.060051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.060 [2024-12-05 12:14:10.060083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.060 qpair failed and we were unable to recover it. 00:30:36.060 [2024-12-05 12:14:10.060304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.060 [2024-12-05 12:14:10.060335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.060 qpair failed and we were unable to recover it. 00:30:36.060 [2024-12-05 12:14:10.060394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc5ab20 (9): Bad file descriptor 00:30:36.060 [2024-12-05 12:14:10.060648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.060 [2024-12-05 12:14:10.060727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.060 qpair failed and we were unable to recover it. 00:30:36.060 [2024-12-05 12:14:10.061061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.060 [2024-12-05 12:14:10.061100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.060 qpair failed and we were unable to recover it. 
00:30:36.060 [2024-12-05 12:14:10.061304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.060 [2024-12-05 12:14:10.061335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.060 qpair failed and we were unable to recover it. 00:30:36.060 [2024-12-05 12:14:10.061667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.060 [2024-12-05 12:14:10.061702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.060 qpair failed and we were unable to recover it. 00:30:36.060 [2024-12-05 12:14:10.061921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.060 [2024-12-05 12:14:10.061953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.060 qpair failed and we were unable to recover it. 00:30:36.060 [2024-12-05 12:14:10.062154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.060 [2024-12-05 12:14:10.062185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.060 qpair failed and we were unable to recover it. 00:30:36.060 [2024-12-05 12:14:10.062309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.060 [2024-12-05 12:14:10.062341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.060 qpair failed and we were unable to recover it. 
00:30:36.060 [2024-12-05 12:14:10.062601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.060 [2024-12-05 12:14:10.062634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.060 qpair failed and we were unable to recover it. 00:30:36.060 [2024-12-05 12:14:10.062809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.060 [2024-12-05 12:14:10.062840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.060 qpair failed and we were unable to recover it. 00:30:36.060 [2024-12-05 12:14:10.062961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.060 [2024-12-05 12:14:10.062994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.060 qpair failed and we were unable to recover it. 00:30:36.060 [2024-12-05 12:14:10.063202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.060 [2024-12-05 12:14:10.063234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.060 qpair failed and we were unable to recover it. 00:30:36.060 [2024-12-05 12:14:10.063365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.060 [2024-12-05 12:14:10.063409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.061 qpair failed and we were unable to recover it. 
00:30:36.061 [2024-12-05 12:14:10.063609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.061 [2024-12-05 12:14:10.063641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.061 qpair failed and we were unable to recover it. 00:30:36.061 [2024-12-05 12:14:10.063842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.061 [2024-12-05 12:14:10.063874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.061 qpair failed and we were unable to recover it. 00:30:36.061 [2024-12-05 12:14:10.064147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.061 [2024-12-05 12:14:10.064178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.061 qpair failed and we were unable to recover it. 00:30:36.061 [2024-12-05 12:14:10.064352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.061 [2024-12-05 12:14:10.064393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.061 qpair failed and we were unable to recover it. 00:30:36.061 [2024-12-05 12:14:10.064512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.061 [2024-12-05 12:14:10.064544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.061 qpair failed and we were unable to recover it. 
00:30:36.061 [2024-12-05 12:14:10.064669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.061 [2024-12-05 12:14:10.064702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.061 qpair failed and we were unable to recover it. 00:30:36.061 [2024-12-05 12:14:10.064949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.061 [2024-12-05 12:14:10.064982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.061 qpair failed and we were unable to recover it. 00:30:36.061 [2024-12-05 12:14:10.065131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.061 [2024-12-05 12:14:10.065172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.061 qpair failed and we were unable to recover it. 00:30:36.061 [2024-12-05 12:14:10.065356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.061 [2024-12-05 12:14:10.065405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.061 qpair failed and we were unable to recover it. 00:30:36.061 [2024-12-05 12:14:10.065601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.061 [2024-12-05 12:14:10.065633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.061 qpair failed and we were unable to recover it. 
00:30:36.061 [2024-12-05 12:14:10.065811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.061 [2024-12-05 12:14:10.065842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.061 qpair failed and we were unable to recover it. 00:30:36.061 [2024-12-05 12:14:10.065976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.061 [2024-12-05 12:14:10.066008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.061 qpair failed and we were unable to recover it. 00:30:36.061 [2024-12-05 12:14:10.066148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.061 [2024-12-05 12:14:10.066180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.061 qpair failed and we were unable to recover it. 00:30:36.061 [2024-12-05 12:14:10.066361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.061 [2024-12-05 12:14:10.066406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.061 qpair failed and we were unable to recover it. 00:30:36.061 [2024-12-05 12:14:10.066671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.061 [2024-12-05 12:14:10.066703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.061 qpair failed and we were unable to recover it. 
00:30:36.061 [2024-12-05 12:14:10.066890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.061 [2024-12-05 12:14:10.066922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.061 qpair failed and we were unable to recover it. 00:30:36.061 [2024-12-05 12:14:10.067107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.061 [2024-12-05 12:14:10.067138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.061 qpair failed and we were unable to recover it. 00:30:36.061 [2024-12-05 12:14:10.067331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.061 [2024-12-05 12:14:10.067362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.061 qpair failed and we were unable to recover it. 00:30:36.061 [2024-12-05 12:14:10.067507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.061 [2024-12-05 12:14:10.067539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.061 qpair failed and we were unable to recover it. 00:30:36.061 [2024-12-05 12:14:10.067768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.061 [2024-12-05 12:14:10.067799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.061 qpair failed and we were unable to recover it. 
00:30:36.061 [2024-12-05 12:14:10.067931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.061 [2024-12-05 12:14:10.067974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.061 qpair failed and we were unable to recover it. 00:30:36.061 [2024-12-05 12:14:10.068086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.061 [2024-12-05 12:14:10.068118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.061 qpair failed and we were unable to recover it. 00:30:36.061 [2024-12-05 12:14:10.068239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.061 [2024-12-05 12:14:10.068271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.061 qpair failed and we were unable to recover it. 00:30:36.061 [2024-12-05 12:14:10.068451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.061 [2024-12-05 12:14:10.068483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.061 qpair failed and we were unable to recover it. 00:30:36.061 [2024-12-05 12:14:10.068588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.061 [2024-12-05 12:14:10.068619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.061 qpair failed and we were unable to recover it. 
00:30:36.061 [2024-12-05 12:14:10.068806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.061 [2024-12-05 12:14:10.068837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.061 qpair failed and we were unable to recover it. 00:30:36.061 [2024-12-05 12:14:10.069083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.061 [2024-12-05 12:14:10.069114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.061 qpair failed and we were unable to recover it. 00:30:36.061 [2024-12-05 12:14:10.069238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.061 [2024-12-05 12:14:10.069269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.062 qpair failed and we were unable to recover it. 00:30:36.062 [2024-12-05 12:14:10.069465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.062 [2024-12-05 12:14:10.069498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.062 qpair failed and we were unable to recover it. 00:30:36.062 [2024-12-05 12:14:10.069622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.062 [2024-12-05 12:14:10.069654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.062 qpair failed and we were unable to recover it. 
00:30:36.062 [2024-12-05 12:14:10.069784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.062 [2024-12-05 12:14:10.069815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.062 qpair failed and we were unable to recover it. 00:30:36.062 [2024-12-05 12:14:10.070076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.062 [2024-12-05 12:14:10.070107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.062 qpair failed and we were unable to recover it. 00:30:36.062 [2024-12-05 12:14:10.070403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.062 [2024-12-05 12:14:10.070437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.062 qpair failed and we were unable to recover it. 00:30:36.062 [2024-12-05 12:14:10.070622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.062 [2024-12-05 12:14:10.070653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.062 qpair failed and we were unable to recover it. 00:30:36.062 [2024-12-05 12:14:10.070869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.062 [2024-12-05 12:14:10.070901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.062 qpair failed and we were unable to recover it. 
00:30:36.062 [2024-12-05 12:14:10.071214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.062 [2024-12-05 12:14:10.071245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.062 qpair failed and we were unable to recover it. 00:30:36.062 [2024-12-05 12:14:10.071481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.062 [2024-12-05 12:14:10.071514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.062 qpair failed and we were unable to recover it. 00:30:36.062 [2024-12-05 12:14:10.071813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.062 [2024-12-05 12:14:10.071845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.062 qpair failed and we were unable to recover it. 00:30:36.062 [2024-12-05 12:14:10.072147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.062 [2024-12-05 12:14:10.072178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.062 qpair failed and we were unable to recover it. 00:30:36.062 [2024-12-05 12:14:10.072412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.062 [2024-12-05 12:14:10.072445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.062 qpair failed and we were unable to recover it. 
00:30:36.062 [2024-12-05 12:14:10.072632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.062 [2024-12-05 12:14:10.072664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.062 qpair failed and we were unable to recover it. 
[... identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." record repeated for tqpair=0x7fc420000b90, addr=10.0.0.2, port=4420, timestamps 12:14:10.072853 through 12:14:10.101269 ...]
00:30:36.066 [2024-12-05 12:14:10.101559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.066 [2024-12-05 12:14:10.101591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.066 qpair failed and we were unable to recover it. 00:30:36.066 [2024-12-05 12:14:10.101868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.066 [2024-12-05 12:14:10.101899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.066 qpair failed and we were unable to recover it. 00:30:36.066 [2024-12-05 12:14:10.102092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.066 [2024-12-05 12:14:10.102124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.066 qpair failed and we were unable to recover it. 00:30:36.066 [2024-12-05 12:14:10.102396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.066 [2024-12-05 12:14:10.102430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.066 qpair failed and we were unable to recover it. 00:30:36.066 [2024-12-05 12:14:10.102720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.066 [2024-12-05 12:14:10.102752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.066 qpair failed and we were unable to recover it. 
00:30:36.066 [2024-12-05 12:14:10.103053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.066 [2024-12-05 12:14:10.103085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.066 qpair failed and we were unable to recover it. 00:30:36.066 [2024-12-05 12:14:10.103257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.066 [2024-12-05 12:14:10.103288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.066 qpair failed and we were unable to recover it. 00:30:36.066 [2024-12-05 12:14:10.103491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.066 [2024-12-05 12:14:10.103523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.066 qpair failed and we were unable to recover it. 00:30:36.066 [2024-12-05 12:14:10.103740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.066 [2024-12-05 12:14:10.103772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.066 qpair failed and we were unable to recover it. 00:30:36.066 [2024-12-05 12:14:10.103907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.066 [2024-12-05 12:14:10.103938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.066 qpair failed and we were unable to recover it. 
00:30:36.066 [2024-12-05 12:14:10.104162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.066 [2024-12-05 12:14:10.104193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.066 qpair failed and we were unable to recover it. 00:30:36.066 [2024-12-05 12:14:10.104421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.066 [2024-12-05 12:14:10.104453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.066 qpair failed and we were unable to recover it. 00:30:36.066 [2024-12-05 12:14:10.104793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.066 [2024-12-05 12:14:10.104837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.066 qpair failed and we were unable to recover it. 00:30:36.066 [2024-12-05 12:14:10.105074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.066 [2024-12-05 12:14:10.105105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.066 qpair failed and we were unable to recover it. 00:30:36.066 [2024-12-05 12:14:10.105406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.066 [2024-12-05 12:14:10.105438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.066 qpair failed and we were unable to recover it. 
00:30:36.066 [2024-12-05 12:14:10.105703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.066 [2024-12-05 12:14:10.105735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.066 qpair failed and we were unable to recover it. 00:30:36.066 [2024-12-05 12:14:10.106043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.066 [2024-12-05 12:14:10.106074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.066 qpair failed and we were unable to recover it. 00:30:36.066 [2024-12-05 12:14:10.106274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.066 [2024-12-05 12:14:10.106306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.066 qpair failed and we were unable to recover it. 00:30:36.066 [2024-12-05 12:14:10.106571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.066 [2024-12-05 12:14:10.106604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.066 qpair failed and we were unable to recover it. 00:30:36.066 [2024-12-05 12:14:10.106713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.067 [2024-12-05 12:14:10.106745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.067 qpair failed and we were unable to recover it. 
00:30:36.067 [2024-12-05 12:14:10.106997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.067 [2024-12-05 12:14:10.107028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.067 qpair failed and we were unable to recover it. 00:30:36.067 [2024-12-05 12:14:10.107295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.067 [2024-12-05 12:14:10.107327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.067 qpair failed and we were unable to recover it. 00:30:36.067 [2024-12-05 12:14:10.107602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.067 [2024-12-05 12:14:10.107635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.067 qpair failed and we were unable to recover it. 00:30:36.067 [2024-12-05 12:14:10.107785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.067 [2024-12-05 12:14:10.107817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.067 qpair failed and we were unable to recover it. 00:30:36.067 [2024-12-05 12:14:10.107950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.067 [2024-12-05 12:14:10.107982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.067 qpair failed and we were unable to recover it. 
00:30:36.067 [2024-12-05 12:14:10.108276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.067 [2024-12-05 12:14:10.108308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.067 qpair failed and we were unable to recover it. 00:30:36.067 [2024-12-05 12:14:10.108460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.067 [2024-12-05 12:14:10.108492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.067 qpair failed and we were unable to recover it. 00:30:36.067 [2024-12-05 12:14:10.108746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.067 [2024-12-05 12:14:10.108779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.067 qpair failed and we were unable to recover it. 00:30:36.067 [2024-12-05 12:14:10.109088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.067 [2024-12-05 12:14:10.109120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.067 qpair failed and we were unable to recover it. 00:30:36.067 [2024-12-05 12:14:10.109387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.067 [2024-12-05 12:14:10.109422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.067 qpair failed and we were unable to recover it. 
00:30:36.067 [2024-12-05 12:14:10.109673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.067 [2024-12-05 12:14:10.109705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.067 qpair failed and we were unable to recover it. 00:30:36.067 [2024-12-05 12:14:10.109944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.067 [2024-12-05 12:14:10.109976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.067 qpair failed and we were unable to recover it. 00:30:36.067 [2024-12-05 12:14:10.110249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.067 [2024-12-05 12:14:10.110281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.067 qpair failed and we were unable to recover it. 00:30:36.067 [2024-12-05 12:14:10.110481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.067 [2024-12-05 12:14:10.110514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.067 qpair failed and we were unable to recover it. 00:30:36.067 [2024-12-05 12:14:10.110713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.067 [2024-12-05 12:14:10.110745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.067 qpair failed and we were unable to recover it. 
00:30:36.067 [2024-12-05 12:14:10.111012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.067 [2024-12-05 12:14:10.111043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.067 qpair failed and we were unable to recover it. 00:30:36.067 [2024-12-05 12:14:10.111256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.067 [2024-12-05 12:14:10.111288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.067 qpair failed and we were unable to recover it. 00:30:36.067 [2024-12-05 12:14:10.111543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.067 [2024-12-05 12:14:10.111575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.067 qpair failed and we were unable to recover it. 00:30:36.067 [2024-12-05 12:14:10.111717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.067 [2024-12-05 12:14:10.111750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.067 qpair failed and we were unable to recover it. 00:30:36.067 [2024-12-05 12:14:10.111901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.067 [2024-12-05 12:14:10.111934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.067 qpair failed and we were unable to recover it. 
00:30:36.067 [2024-12-05 12:14:10.112158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.067 [2024-12-05 12:14:10.112190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.067 qpair failed and we were unable to recover it. 00:30:36.067 [2024-12-05 12:14:10.112408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.067 [2024-12-05 12:14:10.112442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.067 qpair failed and we were unable to recover it. 00:30:36.067 [2024-12-05 12:14:10.112690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.067 [2024-12-05 12:14:10.112723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.067 qpair failed and we were unable to recover it. 00:30:36.067 [2024-12-05 12:14:10.112869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.067 [2024-12-05 12:14:10.112900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.067 qpair failed and we were unable to recover it. 00:30:36.067 [2024-12-05 12:14:10.113100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.067 [2024-12-05 12:14:10.113132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.067 qpair failed and we were unable to recover it. 
00:30:36.067 [2024-12-05 12:14:10.113343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.068 [2024-12-05 12:14:10.113384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.068 qpair failed and we were unable to recover it. 00:30:36.068 [2024-12-05 12:14:10.113586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.068 [2024-12-05 12:14:10.113618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.068 qpair failed and we were unable to recover it. 00:30:36.068 [2024-12-05 12:14:10.113746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.068 [2024-12-05 12:14:10.113778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.068 qpair failed and we were unable to recover it. 00:30:36.068 [2024-12-05 12:14:10.113980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.068 [2024-12-05 12:14:10.114012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.068 qpair failed and we were unable to recover it. 00:30:36.068 [2024-12-05 12:14:10.114153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.068 [2024-12-05 12:14:10.114184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.068 qpair failed and we were unable to recover it. 
00:30:36.068 [2024-12-05 12:14:10.114408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.068 [2024-12-05 12:14:10.114441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.068 qpair failed and we were unable to recover it. 00:30:36.068 [2024-12-05 12:14:10.114653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.068 [2024-12-05 12:14:10.114684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.068 qpair failed and we were unable to recover it. 00:30:36.068 [2024-12-05 12:14:10.114883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.068 [2024-12-05 12:14:10.114921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.068 qpair failed and we were unable to recover it. 00:30:36.068 [2024-12-05 12:14:10.115153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.068 [2024-12-05 12:14:10.115185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.068 qpair failed and we were unable to recover it. 00:30:36.068 [2024-12-05 12:14:10.115413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.068 [2024-12-05 12:14:10.115446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.068 qpair failed and we were unable to recover it. 
00:30:36.068 [2024-12-05 12:14:10.115696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.068 [2024-12-05 12:14:10.115728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.068 qpair failed and we were unable to recover it. 00:30:36.068 [2024-12-05 12:14:10.115973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.068 [2024-12-05 12:14:10.116005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.068 qpair failed and we were unable to recover it. 00:30:36.068 [2024-12-05 12:14:10.116278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.068 [2024-12-05 12:14:10.116309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.068 qpair failed and we were unable to recover it. 00:30:36.068 [2024-12-05 12:14:10.116461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.068 [2024-12-05 12:14:10.116495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.068 qpair failed and we were unable to recover it. 00:30:36.068 [2024-12-05 12:14:10.116693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.068 [2024-12-05 12:14:10.116726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.068 qpair failed and we were unable to recover it. 
00:30:36.068 [2024-12-05 12:14:10.116916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.068 [2024-12-05 12:14:10.116947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.068 qpair failed and we were unable to recover it. 00:30:36.068 [2024-12-05 12:14:10.117161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.068 [2024-12-05 12:14:10.117193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.068 qpair failed and we were unable to recover it. 00:30:36.068 [2024-12-05 12:14:10.117455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.068 [2024-12-05 12:14:10.117488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.068 qpair failed and we were unable to recover it. 00:30:36.068 [2024-12-05 12:14:10.117643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.068 [2024-12-05 12:14:10.117674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.068 qpair failed and we were unable to recover it. 00:30:36.068 [2024-12-05 12:14:10.117921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.068 [2024-12-05 12:14:10.117953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.068 qpair failed and we were unable to recover it. 
00:30:36.068 [2024-12-05 12:14:10.118161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.068 [2024-12-05 12:14:10.118192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.068 qpair failed and we were unable to recover it. 00:30:36.068 [2024-12-05 12:14:10.118401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.068 [2024-12-05 12:14:10.118435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.068 qpair failed and we were unable to recover it. 00:30:36.068 [2024-12-05 12:14:10.118674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.068 [2024-12-05 12:14:10.118706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.068 qpair failed and we were unable to recover it. 00:30:36.068 [2024-12-05 12:14:10.118904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.068 [2024-12-05 12:14:10.118936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.068 qpair failed and we were unable to recover it. 00:30:36.068 [2024-12-05 12:14:10.119206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.068 [2024-12-05 12:14:10.119237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.068 qpair failed and we were unable to recover it. 
00:30:36.068 [2024-12-05 12:14:10.119471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.068 [2024-12-05 12:14:10.119504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.068 qpair failed and we were unable to recover it. 00:30:36.068 [2024-12-05 12:14:10.119691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.068 [2024-12-05 12:14:10.119723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.068 qpair failed and we were unable to recover it. 00:30:36.068 [2024-12-05 12:14:10.120023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.068 [2024-12-05 12:14:10.120055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.068 qpair failed and we were unable to recover it. 00:30:36.068 [2024-12-05 12:14:10.120265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.068 [2024-12-05 12:14:10.120298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.068 qpair failed and we were unable to recover it. 00:30:36.068 [2024-12-05 12:14:10.120555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.068 [2024-12-05 12:14:10.120588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.068 qpair failed and we were unable to recover it. 
00:30:36.068 [2024-12-05 12:14:10.120787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.068 [2024-12-05 12:14:10.120820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.068 qpair failed and we were unable to recover it. 00:30:36.069 [2024-12-05 12:14:10.121102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.069 [2024-12-05 12:14:10.121133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.069 qpair failed and we were unable to recover it. 00:30:36.069 [2024-12-05 12:14:10.121411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.069 [2024-12-05 12:14:10.121443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.069 qpair failed and we were unable to recover it. 00:30:36.069 [2024-12-05 12:14:10.121649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.069 [2024-12-05 12:14:10.121681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.069 qpair failed and we were unable to recover it. 00:30:36.069 [2024-12-05 12:14:10.121913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.069 [2024-12-05 12:14:10.121945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.069 qpair failed and we were unable to recover it. 
00:30:36.069 [2024-12-05 12:14:10.122198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.069 [2024-12-05 12:14:10.122230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.069 qpair failed and we were unable to recover it. 00:30:36.069 [2024-12-05 12:14:10.122425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.069 [2024-12-05 12:14:10.122458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.069 qpair failed and we were unable to recover it. 00:30:36.069 [2024-12-05 12:14:10.122711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.069 [2024-12-05 12:14:10.122743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.069 qpair failed and we were unable to recover it. 00:30:36.069 [2024-12-05 12:14:10.122935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.069 [2024-12-05 12:14:10.122967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.069 qpair failed and we were unable to recover it. 00:30:36.069 [2024-12-05 12:14:10.123208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.069 [2024-12-05 12:14:10.123240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.069 qpair failed and we were unable to recover it. 
00:30:36.069 [2024-12-05 12:14:10.123433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.069 [2024-12-05 12:14:10.123465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.069 qpair failed and we were unable to recover it. 00:30:36.069 [2024-12-05 12:14:10.123674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.069 [2024-12-05 12:14:10.123707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.069 qpair failed and we were unable to recover it. 00:30:36.069 [2024-12-05 12:14:10.123886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.069 [2024-12-05 12:14:10.123917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.069 qpair failed and we were unable to recover it. 00:30:36.069 [2024-12-05 12:14:10.124119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.069 [2024-12-05 12:14:10.124151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.069 qpair failed and we were unable to recover it. 00:30:36.069 [2024-12-05 12:14:10.124360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.069 [2024-12-05 12:14:10.124402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.069 qpair failed and we were unable to recover it. 
00:30:36.069 [2024-12-05 12:14:10.124599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.069 [2024-12-05 12:14:10.124631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.069 qpair failed and we were unable to recover it. 00:30:36.069 [2024-12-05 12:14:10.124785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.069 [2024-12-05 12:14:10.124817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.069 qpair failed and we were unable to recover it. 00:30:36.069 [2024-12-05 12:14:10.124939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.069 [2024-12-05 12:14:10.124977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.069 qpair failed and we were unable to recover it. 00:30:36.069 [2024-12-05 12:14:10.125115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.069 [2024-12-05 12:14:10.125147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.069 qpair failed and we were unable to recover it. 00:30:36.069 [2024-12-05 12:14:10.125333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.069 [2024-12-05 12:14:10.125365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.069 qpair failed and we were unable to recover it. 
00:30:36.069 [2024-12-05 12:14:10.125619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.069 [2024-12-05 12:14:10.125650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.069 qpair failed and we were unable to recover it. 00:30:36.069 [2024-12-05 12:14:10.125836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.069 [2024-12-05 12:14:10.125868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.069 qpair failed and we were unable to recover it. 00:30:36.069 [2024-12-05 12:14:10.126044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.069 [2024-12-05 12:14:10.126077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.069 qpair failed and we were unable to recover it. 00:30:36.069 [2024-12-05 12:14:10.126345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.069 [2024-12-05 12:14:10.126387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.069 qpair failed and we were unable to recover it. 00:30:36.069 [2024-12-05 12:14:10.126605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.069 [2024-12-05 12:14:10.126638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.069 qpair failed and we were unable to recover it. 
00:30:36.069 [2024-12-05 12:14:10.126866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.069 [2024-12-05 12:14:10.126898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.069 qpair failed and we were unable to recover it. 00:30:36.069 [2024-12-05 12:14:10.127089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.069 [2024-12-05 12:14:10.127121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.069 qpair failed and we were unable to recover it. 00:30:36.069 [2024-12-05 12:14:10.127346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.069 [2024-12-05 12:14:10.127387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.069 qpair failed and we were unable to recover it. 00:30:36.069 [2024-12-05 12:14:10.127598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.069 [2024-12-05 12:14:10.127630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.069 qpair failed and we were unable to recover it. 00:30:36.069 [2024-12-05 12:14:10.127772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.069 [2024-12-05 12:14:10.127804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.069 qpair failed and we were unable to recover it. 
00:30:36.069 [2024-12-05 12:14:10.128045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.069 [2024-12-05 12:14:10.128076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.069 qpair failed and we were unable to recover it. 00:30:36.069 [2024-12-05 12:14:10.128331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.069 [2024-12-05 12:14:10.128363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.069 qpair failed and we were unable to recover it. 00:30:36.069 [2024-12-05 12:14:10.128577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.069 [2024-12-05 12:14:10.128608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.069 qpair failed and we were unable to recover it. 00:30:36.069 [2024-12-05 12:14:10.128744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-12-05 12:14:10.128777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-12-05 12:14:10.128880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-12-05 12:14:10.128912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 
00:30:36.070 [2024-12-05 12:14:10.129031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-12-05 12:14:10.129062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-12-05 12:14:10.129199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-12-05 12:14:10.129231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-12-05 12:14:10.129424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-12-05 12:14:10.129458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-12-05 12:14:10.129662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-12-05 12:14:10.129694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-12-05 12:14:10.129956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-12-05 12:14:10.129989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 
00:30:36.070 [2024-12-05 12:14:10.130170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-12-05 12:14:10.130202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-12-05 12:14:10.130349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-12-05 12:14:10.130390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-12-05 12:14:10.130584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-12-05 12:14:10.130617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-12-05 12:14:10.130806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-12-05 12:14:10.130839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-12-05 12:14:10.131127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-12-05 12:14:10.131158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 
00:30:36.070 [2024-12-05 12:14:10.131408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-12-05 12:14:10.131441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-12-05 12:14:10.131572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-12-05 12:14:10.131604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-12-05 12:14:10.131831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-12-05 12:14:10.131863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-12-05 12:14:10.132095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-12-05 12:14:10.132127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-12-05 12:14:10.132401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-12-05 12:14:10.132434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 
00:30:36.070 [2024-12-05 12:14:10.132574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-12-05 12:14:10.132606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-12-05 12:14:10.132781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-12-05 12:14:10.132812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-12-05 12:14:10.133040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-12-05 12:14:10.133071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-12-05 12:14:10.133213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-12-05 12:14:10.133245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-12-05 12:14:10.133430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-12-05 12:14:10.133463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 
00:30:36.070 [2024-12-05 12:14:10.133708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-12-05 12:14:10.133740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-12-05 12:14:10.134002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-12-05 12:14:10.134034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-12-05 12:14:10.134250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-12-05 12:14:10.134287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-12-05 12:14:10.134544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-12-05 12:14:10.134577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-12-05 12:14:10.134707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-12-05 12:14:10.134738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 
00:30:36.070 [2024-12-05 12:14:10.134942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-12-05 12:14:10.134973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-12-05 12:14:10.135153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-12-05 12:14:10.135185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-12-05 12:14:10.135429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-12-05 12:14:10.135462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-12-05 12:14:10.135710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-12-05 12:14:10.135742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 00:30:36.070 [2024-12-05 12:14:10.135896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.070 [2024-12-05 12:14:10.135927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.070 qpair failed and we were unable to recover it. 
00:30:36.071 [2024-12-05 12:14:10.136135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-12-05 12:14:10.136167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-12-05 12:14:10.136365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-12-05 12:14:10.136404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-12-05 12:14:10.136651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-12-05 12:14:10.136682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-12-05 12:14:10.136908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-12-05 12:14:10.136940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-12-05 12:14:10.137133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-12-05 12:14:10.137164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 
00:30:36.071 [2024-12-05 12:14:10.137422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-12-05 12:14:10.137457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-12-05 12:14:10.137646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-12-05 12:14:10.137679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-12-05 12:14:10.137822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-12-05 12:14:10.137853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-12-05 12:14:10.138083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-12-05 12:14:10.138114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-12-05 12:14:10.138308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-12-05 12:14:10.138340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 
00:30:36.071 [2024-12-05 12:14:10.138697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-12-05 12:14:10.138771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-12-05 12:14:10.138995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-12-05 12:14:10.139031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-12-05 12:14:10.139307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-12-05 12:14:10.139339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-12-05 12:14:10.139554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-12-05 12:14:10.139590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-12-05 12:14:10.139791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-12-05 12:14:10.139822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 
00:30:36.071 [2024-12-05 12:14:10.140088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-12-05 12:14:10.140120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-12-05 12:14:10.140248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-12-05 12:14:10.140279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-12-05 12:14:10.140510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-12-05 12:14:10.140542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-12-05 12:14:10.140740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-12-05 12:14:10.140771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-12-05 12:14:10.140922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-12-05 12:14:10.140953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 
00:30:36.071 [2024-12-05 12:14:10.141155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-12-05 12:14:10.141187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-12-05 12:14:10.141457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-12-05 12:14:10.141491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-12-05 12:14:10.141641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-12-05 12:14:10.141673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-12-05 12:14:10.141871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-12-05 12:14:10.141902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-12-05 12:14:10.142106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-12-05 12:14:10.142139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 
00:30:36.071 [2024-12-05 12:14:10.142326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-12-05 12:14:10.142359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-12-05 12:14:10.142518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-12-05 12:14:10.142550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-12-05 12:14:10.142749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-12-05 12:14:10.142781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-12-05 12:14:10.142926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-12-05 12:14:10.142957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-12-05 12:14:10.143206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-12-05 12:14:10.143238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 
00:30:36.071 [2024-12-05 12:14:10.143494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-12-05 12:14:10.143527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-12-05 12:14:10.143727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.071 [2024-12-05 12:14:10.143759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.071 qpair failed and we were unable to recover it. 00:30:36.071 [2024-12-05 12:14:10.143952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.072 [2024-12-05 12:14:10.143988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.072 qpair failed and we were unable to recover it. 00:30:36.072 [2024-12-05 12:14:10.144257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.072 [2024-12-05 12:14:10.144289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.072 qpair failed and we were unable to recover it. 00:30:36.072 [2024-12-05 12:14:10.144473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.072 [2024-12-05 12:14:10.144506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.072 qpair failed and we were unable to recover it. 
00:30:36.074 [2024-12-05 12:14:10.159820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.074 [2024-12-05 12:14:10.159853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.074 qpair failed and we were unable to recover it.
00:30:36.074 [2024-12-05 12:14:10.160057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.074 [2024-12-05 12:14:10.160090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.074 qpair failed and we were unable to recover it.
00:30:36.074 [2024-12-05 12:14:10.160276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.074 [2024-12-05 12:14:10.160307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.074 qpair failed and we were unable to recover it.
00:30:36.074 [2024-12-05 12:14:10.160526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.074 [2024-12-05 12:14:10.160559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.074 qpair failed and we were unable to recover it.
00:30:36.074 [2024-12-05 12:14:10.160804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.074 [2024-12-05 12:14:10.160880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.074 qpair failed and we were unable to recover it.
00:30:36.075 [2024-12-05 12:14:10.173524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.075 [2024-12-05 12:14:10.173557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.075 qpair failed and we were unable to recover it. 00:30:36.075 [2024-12-05 12:14:10.173751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.075 [2024-12-05 12:14:10.173784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.075 qpair failed and we were unable to recover it. 00:30:36.075 [2024-12-05 12:14:10.173964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.075 [2024-12-05 12:14:10.173996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.075 qpair failed and we were unable to recover it. 00:30:36.075 [2024-12-05 12:14:10.174224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.075 [2024-12-05 12:14:10.174256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.075 qpair failed and we were unable to recover it. 00:30:36.075 [2024-12-05 12:14:10.174404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.075 [2024-12-05 12:14:10.174436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.075 qpair failed and we were unable to recover it. 
00:30:36.075 [2024-12-05 12:14:10.174636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.075 [2024-12-05 12:14:10.174670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.075 qpair failed and we were unable to recover it. 00:30:36.075 [2024-12-05 12:14:10.174809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-12-05 12:14:10.174841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-12-05 12:14:10.175077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-12-05 12:14:10.175110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-12-05 12:14:10.175292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-12-05 12:14:10.175331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-12-05 12:14:10.175606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-12-05 12:14:10.175639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 
00:30:36.076 [2024-12-05 12:14:10.175798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-12-05 12:14:10.175829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-12-05 12:14:10.176086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-12-05 12:14:10.176117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-12-05 12:14:10.176392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-12-05 12:14:10.176426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-12-05 12:14:10.176697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-12-05 12:14:10.176728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-12-05 12:14:10.176913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-12-05 12:14:10.176945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 
00:30:36.076 [2024-12-05 12:14:10.177079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-12-05 12:14:10.177110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-12-05 12:14:10.177317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-12-05 12:14:10.177348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-12-05 12:14:10.177525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-12-05 12:14:10.177557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-12-05 12:14:10.177745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-12-05 12:14:10.177779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-12-05 12:14:10.178068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-12-05 12:14:10.178102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 
00:30:36.076 [2024-12-05 12:14:10.178387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-12-05 12:14:10.178421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-12-05 12:14:10.178624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-12-05 12:14:10.178656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-12-05 12:14:10.178899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-12-05 12:14:10.178932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-12-05 12:14:10.179069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-12-05 12:14:10.179103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-12-05 12:14:10.179394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-12-05 12:14:10.179426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 
00:30:36.076 [2024-12-05 12:14:10.179641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-12-05 12:14:10.179674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-12-05 12:14:10.179976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-12-05 12:14:10.180008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-12-05 12:14:10.180214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-12-05 12:14:10.180246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-12-05 12:14:10.180365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-12-05 12:14:10.180409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-12-05 12:14:10.180573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-12-05 12:14:10.180606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 
00:30:36.076 [2024-12-05 12:14:10.180806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-12-05 12:14:10.180838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-12-05 12:14:10.181049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-12-05 12:14:10.181080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-12-05 12:14:10.181360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-12-05 12:14:10.181405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-12-05 12:14:10.181550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-12-05 12:14:10.181582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-12-05 12:14:10.181788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-12-05 12:14:10.181820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 
00:30:36.076 [2024-12-05 12:14:10.182034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-12-05 12:14:10.182066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-12-05 12:14:10.182327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-12-05 12:14:10.182359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-12-05 12:14:10.182503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.076 [2024-12-05 12:14:10.182535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.076 qpair failed and we were unable to recover it. 00:30:36.076 [2024-12-05 12:14:10.182715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-12-05 12:14:10.182748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-12-05 12:14:10.183041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-12-05 12:14:10.183073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 
00:30:36.077 [2024-12-05 12:14:10.183292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-12-05 12:14:10.183324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-12-05 12:14:10.183568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-12-05 12:14:10.183600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-12-05 12:14:10.183719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-12-05 12:14:10.183751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-12-05 12:14:10.183954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-12-05 12:14:10.183989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-12-05 12:14:10.184250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-12-05 12:14:10.184280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 
00:30:36.077 [2024-12-05 12:14:10.184426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-12-05 12:14:10.184460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-12-05 12:14:10.184716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-12-05 12:14:10.184751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-12-05 12:14:10.184897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-12-05 12:14:10.184930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-12-05 12:14:10.185123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-12-05 12:14:10.185158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-12-05 12:14:10.185391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-12-05 12:14:10.185430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 
00:30:36.077 [2024-12-05 12:14:10.185636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-12-05 12:14:10.185668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-12-05 12:14:10.185894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-12-05 12:14:10.185926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-12-05 12:14:10.186084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-12-05 12:14:10.186117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-12-05 12:14:10.186244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-12-05 12:14:10.186277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-12-05 12:14:10.186452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-12-05 12:14:10.186486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 
00:30:36.077 [2024-12-05 12:14:10.186613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-12-05 12:14:10.186645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-12-05 12:14:10.186943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-12-05 12:14:10.186978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-12-05 12:14:10.187284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-12-05 12:14:10.187315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-12-05 12:14:10.187605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-12-05 12:14:10.187638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-12-05 12:14:10.187774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-12-05 12:14:10.187808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 
00:30:36.077 [2024-12-05 12:14:10.188037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-12-05 12:14:10.188069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-12-05 12:14:10.188253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-12-05 12:14:10.188286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-12-05 12:14:10.188469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-12-05 12:14:10.188506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-12-05 12:14:10.188710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-12-05 12:14:10.188744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-12-05 12:14:10.188941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-12-05 12:14:10.188972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 
00:30:36.077 [2024-12-05 12:14:10.189196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-12-05 12:14:10.189232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-12-05 12:14:10.189442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-12-05 12:14:10.189474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-12-05 12:14:10.189598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-12-05 12:14:10.189633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-12-05 12:14:10.189831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.077 [2024-12-05 12:14:10.189865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.077 qpair failed and we were unable to recover it. 00:30:36.077 [2024-12-05 12:14:10.190079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-12-05 12:14:10.190112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 
00:30:36.078 [2024-12-05 12:14:10.190391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-12-05 12:14:10.190426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 00:30:36.078 [2024-12-05 12:14:10.190715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-12-05 12:14:10.190746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 00:30:36.078 [2024-12-05 12:14:10.190948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-12-05 12:14:10.190981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 00:30:36.078 [2024-12-05 12:14:10.191172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-12-05 12:14:10.191206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 00:30:36.078 [2024-12-05 12:14:10.191443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.078 [2024-12-05 12:14:10.191476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.078 qpair failed and we were unable to recover it. 
00:30:36.078 [2024-12-05 12:14:10.191629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.078 [2024-12-05 12:14:10.191661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.078 qpair failed and we were unable to recover it.
00:30:36.078 [2024-12-05 12:14:10.195464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.078 [2024-12-05 12:14:10.195543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.078 qpair failed and we were unable to recover it.
00:30:36.080 [2024-12-05 12:14:10.209457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.080 [2024-12-05 12:14:10.209532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.080 qpair failed and we were unable to recover it.
00:30:36.361 [2024-12-05 12:14:10.220382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.361 [2024-12-05 12:14:10.220416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.361 qpair failed and we were unable to recover it. 00:30:36.361 [2024-12-05 12:14:10.220575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.361 [2024-12-05 12:14:10.220608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.361 qpair failed and we were unable to recover it. 00:30:36.361 [2024-12-05 12:14:10.220772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.361 [2024-12-05 12:14:10.220804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.361 qpair failed and we were unable to recover it. 00:30:36.361 [2024-12-05 12:14:10.221022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.361 [2024-12-05 12:14:10.221054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.361 qpair failed and we were unable to recover it. 00:30:36.361 [2024-12-05 12:14:10.221390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.361 [2024-12-05 12:14:10.221426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.361 qpair failed and we were unable to recover it. 
00:30:36.361 [2024-12-05 12:14:10.221581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.361 [2024-12-05 12:14:10.221616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.361 qpair failed and we were unable to recover it. 00:30:36.361 [2024-12-05 12:14:10.221819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.361 [2024-12-05 12:14:10.221852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.361 qpair failed and we were unable to recover it. 00:30:36.361 [2024-12-05 12:14:10.222069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.361 [2024-12-05 12:14:10.222102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.361 qpair failed and we were unable to recover it. 00:30:36.361 [2024-12-05 12:14:10.222317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.361 [2024-12-05 12:14:10.222350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.361 qpair failed and we were unable to recover it. 00:30:36.361 [2024-12-05 12:14:10.222622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.361 [2024-12-05 12:14:10.222655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.361 qpair failed and we were unable to recover it. 
00:30:36.361 [2024-12-05 12:14:10.222787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.361 [2024-12-05 12:14:10.222821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.361 qpair failed and we were unable to recover it. 00:30:36.361 [2024-12-05 12:14:10.222950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.361 [2024-12-05 12:14:10.222983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.361 qpair failed and we were unable to recover it. 00:30:36.361 [2024-12-05 12:14:10.223278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.361 [2024-12-05 12:14:10.223311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.361 qpair failed and we were unable to recover it. 00:30:36.361 [2024-12-05 12:14:10.223538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.361 [2024-12-05 12:14:10.223572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.361 qpair failed and we were unable to recover it. 00:30:36.361 [2024-12-05 12:14:10.223727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.361 [2024-12-05 12:14:10.223761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.361 qpair failed and we were unable to recover it. 
00:30:36.361 [2024-12-05 12:14:10.223946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.361 [2024-12-05 12:14:10.223980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.361 qpair failed and we were unable to recover it. 00:30:36.361 [2024-12-05 12:14:10.224117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.361 [2024-12-05 12:14:10.224150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.361 qpair failed and we were unable to recover it. 00:30:36.361 [2024-12-05 12:14:10.224447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.361 [2024-12-05 12:14:10.224486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.361 qpair failed and we were unable to recover it. 00:30:36.361 [2024-12-05 12:14:10.224634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.361 [2024-12-05 12:14:10.224666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.361 qpair failed and we were unable to recover it. 00:30:36.361 [2024-12-05 12:14:10.224859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.361 [2024-12-05 12:14:10.224894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.361 qpair failed and we were unable to recover it. 
00:30:36.361 [2024-12-05 12:14:10.225091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.361 [2024-12-05 12:14:10.225131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.361 qpair failed and we were unable to recover it. 00:30:36.361 [2024-12-05 12:14:10.225340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.361 [2024-12-05 12:14:10.225388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.361 qpair failed and we were unable to recover it. 00:30:36.361 [2024-12-05 12:14:10.225534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.361 [2024-12-05 12:14:10.225567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.361 qpair failed and we were unable to recover it. 00:30:36.361 [2024-12-05 12:14:10.225710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.361 [2024-12-05 12:14:10.225743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.361 qpair failed and we were unable to recover it. 00:30:36.361 [2024-12-05 12:14:10.225880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.361 [2024-12-05 12:14:10.225913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.361 qpair failed and we were unable to recover it. 
00:30:36.361 [2024-12-05 12:14:10.226057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.361 [2024-12-05 12:14:10.226090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.361 qpair failed and we were unable to recover it. 00:30:36.361 [2024-12-05 12:14:10.226227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.361 [2024-12-05 12:14:10.226262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.361 qpair failed and we were unable to recover it. 00:30:36.361 [2024-12-05 12:14:10.226464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.361 [2024-12-05 12:14:10.226498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.361 qpair failed and we were unable to recover it. 00:30:36.361 [2024-12-05 12:14:10.226680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.361 [2024-12-05 12:14:10.226713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.361 qpair failed and we were unable to recover it. 00:30:36.361 [2024-12-05 12:14:10.226867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.361 [2024-12-05 12:14:10.226903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.361 qpair failed and we were unable to recover it. 
00:30:36.361 [2024-12-05 12:14:10.227038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.361 [2024-12-05 12:14:10.227073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.361 qpair failed and we were unable to recover it. 00:30:36.361 [2024-12-05 12:14:10.227204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.361 [2024-12-05 12:14:10.227236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.361 qpair failed and we were unable to recover it. 00:30:36.361 [2024-12-05 12:14:10.227377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.362 [2024-12-05 12:14:10.227415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.362 qpair failed and we were unable to recover it. 00:30:36.362 [2024-12-05 12:14:10.227604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.362 [2024-12-05 12:14:10.227639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.362 qpair failed and we were unable to recover it. 00:30:36.362 [2024-12-05 12:14:10.227766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.362 [2024-12-05 12:14:10.227800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.362 qpair failed and we were unable to recover it. 
00:30:36.362 [2024-12-05 12:14:10.227924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.362 [2024-12-05 12:14:10.227958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.362 qpair failed and we were unable to recover it. 00:30:36.362 [2024-12-05 12:14:10.228073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.362 [2024-12-05 12:14:10.228106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.362 qpair failed and we were unable to recover it. 00:30:36.362 [2024-12-05 12:14:10.228271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.362 [2024-12-05 12:14:10.228304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.362 qpair failed and we were unable to recover it. 00:30:36.362 [2024-12-05 12:14:10.228451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.362 [2024-12-05 12:14:10.228490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.362 qpair failed and we were unable to recover it. 00:30:36.362 [2024-12-05 12:14:10.228618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.362 [2024-12-05 12:14:10.228652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.362 qpair failed and we were unable to recover it. 
00:30:36.362 [2024-12-05 12:14:10.228806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.362 [2024-12-05 12:14:10.228840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.362 qpair failed and we were unable to recover it. 00:30:36.362 [2024-12-05 12:14:10.229037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.362 [2024-12-05 12:14:10.229070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.362 qpair failed and we were unable to recover it. 00:30:36.362 [2024-12-05 12:14:10.229204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.362 [2024-12-05 12:14:10.229237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.362 qpair failed and we were unable to recover it. 00:30:36.362 [2024-12-05 12:14:10.229421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.362 [2024-12-05 12:14:10.229457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.362 qpair failed and we were unable to recover it. 00:30:36.362 [2024-12-05 12:14:10.229653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.362 [2024-12-05 12:14:10.229688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.362 qpair failed and we were unable to recover it. 
00:30:36.362 [2024-12-05 12:14:10.229893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.362 [2024-12-05 12:14:10.229926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.362 qpair failed and we were unable to recover it. 00:30:36.362 [2024-12-05 12:14:10.230041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.362 [2024-12-05 12:14:10.230075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.362 qpair failed and we were unable to recover it. 00:30:36.362 [2024-12-05 12:14:10.230279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.362 [2024-12-05 12:14:10.230314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.362 qpair failed and we were unable to recover it. 00:30:36.362 [2024-12-05 12:14:10.230513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.362 [2024-12-05 12:14:10.230547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.362 qpair failed and we were unable to recover it. 00:30:36.362 [2024-12-05 12:14:10.230668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.362 [2024-12-05 12:14:10.230702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.362 qpair failed and we were unable to recover it. 
00:30:36.362 [2024-12-05 12:14:10.230816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.362 [2024-12-05 12:14:10.230848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.362 qpair failed and we were unable to recover it. 00:30:36.362 [2024-12-05 12:14:10.231031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.362 [2024-12-05 12:14:10.231064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.362 qpair failed and we were unable to recover it. 00:30:36.362 [2024-12-05 12:14:10.231197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.362 [2024-12-05 12:14:10.231230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.362 qpair failed and we were unable to recover it. 00:30:36.362 [2024-12-05 12:14:10.231496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.362 [2024-12-05 12:14:10.231532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.362 qpair failed and we were unable to recover it. 00:30:36.362 [2024-12-05 12:14:10.231739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.362 [2024-12-05 12:14:10.231773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.362 qpair failed and we were unable to recover it. 
00:30:36.362 [2024-12-05 12:14:10.231886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.362 [2024-12-05 12:14:10.231919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.362 qpair failed and we were unable to recover it. 00:30:36.362 [2024-12-05 12:14:10.232071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.362 [2024-12-05 12:14:10.232104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.362 qpair failed and we were unable to recover it. 00:30:36.362 [2024-12-05 12:14:10.232387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.362 [2024-12-05 12:14:10.232422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.362 qpair failed and we were unable to recover it. 00:30:36.362 [2024-12-05 12:14:10.232639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.362 [2024-12-05 12:14:10.232672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.362 qpair failed and we were unable to recover it. 00:30:36.362 [2024-12-05 12:14:10.232801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.362 [2024-12-05 12:14:10.232834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.362 qpair failed and we were unable to recover it. 
00:30:36.362 [2024-12-05 12:14:10.232954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.362 [2024-12-05 12:14:10.232993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.362 qpair failed and we were unable to recover it. 00:30:36.362 [2024-12-05 12:14:10.233189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.362 [2024-12-05 12:14:10.233223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.362 qpair failed and we were unable to recover it. 00:30:36.362 [2024-12-05 12:14:10.233383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.362 [2024-12-05 12:14:10.233423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.362 qpair failed and we were unable to recover it. 00:30:36.362 [2024-12-05 12:14:10.233561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.362 [2024-12-05 12:14:10.233594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.362 qpair failed and we were unable to recover it. 00:30:36.362 [2024-12-05 12:14:10.233800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.362 [2024-12-05 12:14:10.233832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.362 qpair failed and we were unable to recover it. 
00:30:36.362 [2024-12-05 12:14:10.233955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.362 [2024-12-05 12:14:10.233989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.362 qpair failed and we were unable to recover it. 00:30:36.362 [2024-12-05 12:14:10.234188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.362 [2024-12-05 12:14:10.234220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.362 qpair failed and we were unable to recover it. 00:30:36.362 [2024-12-05 12:14:10.234386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.362 [2024-12-05 12:14:10.234421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.362 qpair failed and we were unable to recover it. 00:30:36.362 [2024-12-05 12:14:10.234601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.362 [2024-12-05 12:14:10.234635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.362 qpair failed and we were unable to recover it. 00:30:36.362 [2024-12-05 12:14:10.234770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.362 [2024-12-05 12:14:10.234803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.362 qpair failed and we were unable to recover it. 
00:30:36.362 [2024-12-05 12:14:10.235068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.363 [2024-12-05 12:14:10.235101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.363 qpair failed and we were unable to recover it. 00:30:36.363 [2024-12-05 12:14:10.235227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.363 [2024-12-05 12:14:10.235260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.363 qpair failed and we were unable to recover it. 00:30:36.363 [2024-12-05 12:14:10.235404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.363 [2024-12-05 12:14:10.235439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.363 qpair failed and we were unable to recover it. 00:30:36.363 [2024-12-05 12:14:10.235559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.363 [2024-12-05 12:14:10.235592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.363 qpair failed and we were unable to recover it. 00:30:36.363 [2024-12-05 12:14:10.235717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.363 [2024-12-05 12:14:10.235752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.363 qpair failed and we were unable to recover it. 
00:30:36.363 [2024-12-05 12:14:10.236030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.363 [2024-12-05 12:14:10.236064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.363 qpair failed and we were unable to recover it. 00:30:36.363 [2024-12-05 12:14:10.236186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.363 [2024-12-05 12:14:10.236220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.363 qpair failed and we were unable to recover it. 00:30:36.363 [2024-12-05 12:14:10.236474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.363 [2024-12-05 12:14:10.236510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.363 qpair failed and we were unable to recover it. 00:30:36.363 [2024-12-05 12:14:10.236652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.363 [2024-12-05 12:14:10.236685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.363 qpair failed and we were unable to recover it. 00:30:36.363 [2024-12-05 12:14:10.236865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.363 [2024-12-05 12:14:10.236898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.363 qpair failed and we were unable to recover it. 
00:30:36.366 [2024-12-05 12:14:10.260703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-12-05 12:14:10.260734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-12-05 12:14:10.260846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-12-05 12:14:10.260878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-12-05 12:14:10.261159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-12-05 12:14:10.261194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-12-05 12:14:10.261323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-12-05 12:14:10.261355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-12-05 12:14:10.261481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-12-05 12:14:10.261517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 
00:30:36.366 [2024-12-05 12:14:10.261735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-12-05 12:14:10.261768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-12-05 12:14:10.261903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-12-05 12:14:10.261936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-12-05 12:14:10.262116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-12-05 12:14:10.262148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-12-05 12:14:10.262279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-12-05 12:14:10.262312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-12-05 12:14:10.262515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-12-05 12:14:10.262550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 
00:30:36.366 [2024-12-05 12:14:10.262801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-12-05 12:14:10.262835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-12-05 12:14:10.262954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-12-05 12:14:10.262986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-12-05 12:14:10.263162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-12-05 12:14:10.263194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-12-05 12:14:10.263389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-12-05 12:14:10.263424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-12-05 12:14:10.263569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-12-05 12:14:10.263600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 
00:30:36.366 [2024-12-05 12:14:10.263729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-12-05 12:14:10.263761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-12-05 12:14:10.263941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-12-05 12:14:10.263972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-12-05 12:14:10.264115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-12-05 12:14:10.264148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-12-05 12:14:10.264330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-12-05 12:14:10.264363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-12-05 12:14:10.264505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-12-05 12:14:10.264537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 
00:30:36.366 [2024-12-05 12:14:10.264716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-12-05 12:14:10.264748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-12-05 12:14:10.264963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-12-05 12:14:10.264997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-12-05 12:14:10.265141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-12-05 12:14:10.265174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-12-05 12:14:10.265277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-12-05 12:14:10.265310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-12-05 12:14:10.265476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-12-05 12:14:10.265510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 
00:30:36.366 [2024-12-05 12:14:10.265638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.366 [2024-12-05 12:14:10.265671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.366 qpair failed and we were unable to recover it. 00:30:36.366 [2024-12-05 12:14:10.265866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-12-05 12:14:10.265901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-12-05 12:14:10.266020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-12-05 12:14:10.266053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-12-05 12:14:10.266184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-12-05 12:14:10.266216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-12-05 12:14:10.266326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-12-05 12:14:10.266356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 
00:30:36.367 [2024-12-05 12:14:10.266551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-12-05 12:14:10.266589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-12-05 12:14:10.266771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-12-05 12:14:10.266802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-12-05 12:14:10.266933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-12-05 12:14:10.266965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-12-05 12:14:10.267102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-12-05 12:14:10.267135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-12-05 12:14:10.267267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-12-05 12:14:10.267299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 
00:30:36.367 [2024-12-05 12:14:10.267467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-12-05 12:14:10.267501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-12-05 12:14:10.267617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-12-05 12:14:10.267649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-12-05 12:14:10.267833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-12-05 12:14:10.267867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-12-05 12:14:10.268008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-12-05 12:14:10.268041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-12-05 12:14:10.268225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-12-05 12:14:10.268257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 
00:30:36.367 [2024-12-05 12:14:10.268381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-12-05 12:14:10.268414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-12-05 12:14:10.268609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-12-05 12:14:10.268643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-12-05 12:14:10.268749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-12-05 12:14:10.268784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-12-05 12:14:10.269049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-12-05 12:14:10.269082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-12-05 12:14:10.269212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-12-05 12:14:10.269244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 
00:30:36.367 [2024-12-05 12:14:10.269408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-12-05 12:14:10.269442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-12-05 12:14:10.269595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-12-05 12:14:10.269627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-12-05 12:14:10.269744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-12-05 12:14:10.269776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-12-05 12:14:10.269900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-12-05 12:14:10.269931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-12-05 12:14:10.270127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-12-05 12:14:10.270159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 
00:30:36.367 [2024-12-05 12:14:10.270280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-12-05 12:14:10.270312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-12-05 12:14:10.270516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-12-05 12:14:10.270549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-12-05 12:14:10.270686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-12-05 12:14:10.270718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-12-05 12:14:10.270835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-12-05 12:14:10.270867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-12-05 12:14:10.270991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-12-05 12:14:10.271023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 
00:30:36.367 [2024-12-05 12:14:10.271246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-12-05 12:14:10.271277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-12-05 12:14:10.271391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-12-05 12:14:10.271424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-12-05 12:14:10.271621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-12-05 12:14:10.271654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-12-05 12:14:10.271764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-12-05 12:14:10.271796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-12-05 12:14:10.271977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-12-05 12:14:10.272008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 
00:30:36.367 [2024-12-05 12:14:10.272197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-12-05 12:14:10.272229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-12-05 12:14:10.272340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-12-05 12:14:10.272381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-12-05 12:14:10.272489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-12-05 12:14:10.272520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-12-05 12:14:10.272698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-12-05 12:14:10.272730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.367 qpair failed and we were unable to recover it. 00:30:36.367 [2024-12-05 12:14:10.272906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.367 [2024-12-05 12:14:10.272938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 
00:30:36.368 [2024-12-05 12:14:10.273135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-12-05 12:14:10.273167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 00:30:36.368 [2024-12-05 12:14:10.273387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-12-05 12:14:10.273420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 00:30:36.368 [2024-12-05 12:14:10.273567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-12-05 12:14:10.273600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 00:30:36.368 [2024-12-05 12:14:10.273727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-12-05 12:14:10.273759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 00:30:36.368 [2024-12-05 12:14:10.273878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-12-05 12:14:10.273909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 
00:30:36.368 [2024-12-05 12:14:10.274080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-12-05 12:14:10.274117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 00:30:36.368 [2024-12-05 12:14:10.274310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-12-05 12:14:10.274342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 00:30:36.368 [2024-12-05 12:14:10.274536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-12-05 12:14:10.274567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 00:30:36.368 [2024-12-05 12:14:10.274779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-12-05 12:14:10.274811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 00:30:36.368 [2024-12-05 12:14:10.274958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.368 [2024-12-05 12:14:10.274990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.368 qpair failed and we were unable to recover it. 
00:30:36.368 [2024-12-05 12:14:10.275141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.368 [2024-12-05 12:14:10.275172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.368 qpair failed and we were unable to recover it.
[identical failure repeats: the same connect() error (errno = 111) and unrecoverable qpair error for tqpair=0x7fc420000b90 at addr=10.0.0.2, port=4420 recur continuously through 2024-12-05 12:14:10.301162]
00:30:36.371 [2024-12-05 12:14:10.301474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-12-05 12:14:10.301513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-12-05 12:14:10.301712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-12-05 12:14:10.301744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-12-05 12:14:10.301944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-12-05 12:14:10.301976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-12-05 12:14:10.302247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-12-05 12:14:10.302278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-12-05 12:14:10.302479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-12-05 12:14:10.302512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 
00:30:36.371 [2024-12-05 12:14:10.302766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-12-05 12:14:10.302798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-12-05 12:14:10.302993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-12-05 12:14:10.303025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-12-05 12:14:10.303209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-12-05 12:14:10.303241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-12-05 12:14:10.303444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-12-05 12:14:10.303476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-12-05 12:14:10.303718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-12-05 12:14:10.303751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 
00:30:36.371 [2024-12-05 12:14:10.304076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-12-05 12:14:10.304108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-12-05 12:14:10.304330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-12-05 12:14:10.304363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-12-05 12:14:10.304583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-12-05 12:14:10.304616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-12-05 12:14:10.304770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-12-05 12:14:10.304801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-12-05 12:14:10.305084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-12-05 12:14:10.305116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 
00:30:36.371 [2024-12-05 12:14:10.305309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-12-05 12:14:10.305342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-12-05 12:14:10.305532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-12-05 12:14:10.305565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-12-05 12:14:10.305822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-12-05 12:14:10.305855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-12-05 12:14:10.306000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-12-05 12:14:10.306033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-12-05 12:14:10.306302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-12-05 12:14:10.306334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 
00:30:36.371 [2024-12-05 12:14:10.306488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-12-05 12:14:10.306521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-12-05 12:14:10.306718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-12-05 12:14:10.306752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.371 [2024-12-05 12:14:10.306896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.371 [2024-12-05 12:14:10.306927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.371 qpair failed and we were unable to recover it. 00:30:36.372 [2024-12-05 12:14:10.307131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-12-05 12:14:10.307163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-12-05 12:14:10.307346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-12-05 12:14:10.307406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 
00:30:36.372 [2024-12-05 12:14:10.307626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-12-05 12:14:10.307657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-12-05 12:14:10.307957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-12-05 12:14:10.307990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-12-05 12:14:10.308257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-12-05 12:14:10.308289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-12-05 12:14:10.308484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-12-05 12:14:10.308517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-12-05 12:14:10.308712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-12-05 12:14:10.308744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 
00:30:36.372 [2024-12-05 12:14:10.309034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-12-05 12:14:10.309065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-12-05 12:14:10.309380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-12-05 12:14:10.309414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-12-05 12:14:10.309617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-12-05 12:14:10.309648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-12-05 12:14:10.309787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-12-05 12:14:10.309820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-12-05 12:14:10.310118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-12-05 12:14:10.310151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 
00:30:36.372 [2024-12-05 12:14:10.310380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-12-05 12:14:10.310414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-12-05 12:14:10.310690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-12-05 12:14:10.310721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-12-05 12:14:10.310975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-12-05 12:14:10.311008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-12-05 12:14:10.311259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-12-05 12:14:10.311290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-12-05 12:14:10.311518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-12-05 12:14:10.311551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 
00:30:36.372 [2024-12-05 12:14:10.311750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-12-05 12:14:10.311789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-12-05 12:14:10.311919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-12-05 12:14:10.311951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-12-05 12:14:10.312224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-12-05 12:14:10.312257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-12-05 12:14:10.312447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-12-05 12:14:10.312481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-12-05 12:14:10.312624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-12-05 12:14:10.312656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 
00:30:36.372 [2024-12-05 12:14:10.312836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-12-05 12:14:10.312867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-12-05 12:14:10.313082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-12-05 12:14:10.313115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-12-05 12:14:10.313306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-12-05 12:14:10.313338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-12-05 12:14:10.313514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-12-05 12:14:10.313547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-12-05 12:14:10.313700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-12-05 12:14:10.313732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 
00:30:36.372 [2024-12-05 12:14:10.314026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-12-05 12:14:10.314058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-12-05 12:14:10.314249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-12-05 12:14:10.314281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-12-05 12:14:10.314550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-12-05 12:14:10.314583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-12-05 12:14:10.314771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-12-05 12:14:10.314803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-12-05 12:14:10.315004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-12-05 12:14:10.315036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 
00:30:36.372 [2024-12-05 12:14:10.315234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-12-05 12:14:10.315266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-12-05 12:14:10.315465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-12-05 12:14:10.315498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-12-05 12:14:10.315640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-12-05 12:14:10.315672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-12-05 12:14:10.315855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-12-05 12:14:10.315887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-12-05 12:14:10.316118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-12-05 12:14:10.316151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 
00:30:36.372 [2024-12-05 12:14:10.316354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.372 [2024-12-05 12:14:10.316396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.372 qpair failed and we were unable to recover it. 00:30:36.372 [2024-12-05 12:14:10.316590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.373 [2024-12-05 12:14:10.316622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.373 qpair failed and we were unable to recover it. 00:30:36.373 [2024-12-05 12:14:10.316755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.373 [2024-12-05 12:14:10.316788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.373 qpair failed and we were unable to recover it. 00:30:36.373 [2024-12-05 12:14:10.316984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.373 [2024-12-05 12:14:10.317017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.373 qpair failed and we were unable to recover it. 00:30:36.373 [2024-12-05 12:14:10.317136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.373 [2024-12-05 12:14:10.317168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.373 qpair failed and we were unable to recover it. 
00:30:36.373 [2024-12-05 12:14:10.317365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.373 [2024-12-05 12:14:10.317407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.373 qpair failed and we were unable to recover it. 00:30:36.373 [2024-12-05 12:14:10.317644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.373 [2024-12-05 12:14:10.317676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.373 qpair failed and we were unable to recover it. 00:30:36.373 [2024-12-05 12:14:10.317875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.373 [2024-12-05 12:14:10.317908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.373 qpair failed and we were unable to recover it. 00:30:36.373 [2024-12-05 12:14:10.318021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.373 [2024-12-05 12:14:10.318054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.373 qpair failed and we were unable to recover it. 00:30:36.373 [2024-12-05 12:14:10.318336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.373 [2024-12-05 12:14:10.318377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.373 qpair failed and we were unable to recover it. 
00:30:36.373 [2024-12-05 12:14:10.318603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.373 [2024-12-05 12:14:10.318635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.373 qpair failed and we were unable to recover it. 00:30:36.373 [2024-12-05 12:14:10.318839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.373 [2024-12-05 12:14:10.318870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.373 qpair failed and we were unable to recover it. 00:30:36.373 [2024-12-05 12:14:10.319078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.373 [2024-12-05 12:14:10.319109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.373 qpair failed and we were unable to recover it. 00:30:36.373 [2024-12-05 12:14:10.319303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.373 [2024-12-05 12:14:10.319334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.373 qpair failed and we were unable to recover it. 00:30:36.373 [2024-12-05 12:14:10.319591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.373 [2024-12-05 12:14:10.319625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.373 qpair failed and we were unable to recover it. 
00:30:36.373 [2024-12-05 12:14:10.319757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.373 [2024-12-05 12:14:10.319790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.373 qpair failed and we were unable to recover it. 
00:30:36.376 [2024-12-05 12:14:10.347845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.376 [2024-12-05 12:14:10.347877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.376 qpair failed and we were unable to recover it. 00:30:36.376 [2024-12-05 12:14:10.348173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.376 [2024-12-05 12:14:10.348205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.376 qpair failed and we were unable to recover it. 00:30:36.376 [2024-12-05 12:14:10.348397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.376 [2024-12-05 12:14:10.348431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.376 qpair failed and we were unable to recover it. 00:30:36.376 [2024-12-05 12:14:10.348571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.376 [2024-12-05 12:14:10.348604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.376 qpair failed and we were unable to recover it. 00:30:36.376 [2024-12-05 12:14:10.348833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.376 [2024-12-05 12:14:10.348865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.376 qpair failed and we were unable to recover it. 
00:30:36.376 [2024-12-05 12:14:10.349191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.376 [2024-12-05 12:14:10.349222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.376 qpair failed and we were unable to recover it. 00:30:36.376 [2024-12-05 12:14:10.349361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.376 [2024-12-05 12:14:10.349409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.376 qpair failed and we were unable to recover it. 00:30:36.376 [2024-12-05 12:14:10.349683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.376 [2024-12-05 12:14:10.349715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.376 qpair failed and we were unable to recover it. 00:30:36.376 [2024-12-05 12:14:10.349948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.376 [2024-12-05 12:14:10.349981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.376 qpair failed and we were unable to recover it. 00:30:36.376 [2024-12-05 12:14:10.350283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.376 [2024-12-05 12:14:10.350314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.376 qpair failed and we were unable to recover it. 
00:30:36.376 [2024-12-05 12:14:10.350522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.376 [2024-12-05 12:14:10.350555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.376 qpair failed and we were unable to recover it. 00:30:36.376 [2024-12-05 12:14:10.350837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.376 [2024-12-05 12:14:10.350869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.376 qpair failed and we were unable to recover it. 00:30:36.376 [2024-12-05 12:14:10.351125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.376 [2024-12-05 12:14:10.351157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.376 qpair failed and we were unable to recover it. 00:30:36.376 [2024-12-05 12:14:10.351338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.376 [2024-12-05 12:14:10.351385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.376 qpair failed and we were unable to recover it. 00:30:36.376 [2024-12-05 12:14:10.351609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.377 [2024-12-05 12:14:10.351641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.377 qpair failed and we were unable to recover it. 
00:30:36.377 [2024-12-05 12:14:10.351894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.377 [2024-12-05 12:14:10.351926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.377 qpair failed and we were unable to recover it. 00:30:36.377 [2024-12-05 12:14:10.352194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.377 [2024-12-05 12:14:10.352226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.377 qpair failed and we were unable to recover it. 00:30:36.377 [2024-12-05 12:14:10.352431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.377 [2024-12-05 12:14:10.352464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.377 qpair failed and we were unable to recover it. 00:30:36.377 [2024-12-05 12:14:10.352621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.377 [2024-12-05 12:14:10.352654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.377 qpair failed and we were unable to recover it. 00:30:36.377 [2024-12-05 12:14:10.352867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.377 [2024-12-05 12:14:10.352900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.377 qpair failed and we were unable to recover it. 
00:30:36.377 [2024-12-05 12:14:10.353181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.377 [2024-12-05 12:14:10.353213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.377 qpair failed and we were unable to recover it. 00:30:36.377 [2024-12-05 12:14:10.353395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.377 [2024-12-05 12:14:10.353428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.377 qpair failed and we were unable to recover it. 00:30:36.377 [2024-12-05 12:14:10.353576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.377 [2024-12-05 12:14:10.353608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.377 qpair failed and we were unable to recover it. 00:30:36.377 [2024-12-05 12:14:10.353738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.377 [2024-12-05 12:14:10.353771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.377 qpair failed and we were unable to recover it. 00:30:36.377 [2024-12-05 12:14:10.353977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.377 [2024-12-05 12:14:10.354010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.377 qpair failed and we were unable to recover it. 
00:30:36.377 [2024-12-05 12:14:10.354214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.377 [2024-12-05 12:14:10.354245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.377 qpair failed and we were unable to recover it. 00:30:36.377 [2024-12-05 12:14:10.354452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.377 [2024-12-05 12:14:10.354485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.377 qpair failed and we were unable to recover it. 00:30:36.377 [2024-12-05 12:14:10.354710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.377 [2024-12-05 12:14:10.354741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.377 qpair failed and we were unable to recover it. 00:30:36.377 [2024-12-05 12:14:10.354859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.377 [2024-12-05 12:14:10.354891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.377 qpair failed and we were unable to recover it. 00:30:36.377 [2024-12-05 12:14:10.355207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.377 [2024-12-05 12:14:10.355239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.377 qpair failed and we were unable to recover it. 
00:30:36.377 [2024-12-05 12:14:10.355516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.377 [2024-12-05 12:14:10.355550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.377 qpair failed and we were unable to recover it. 00:30:36.377 [2024-12-05 12:14:10.355779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.377 [2024-12-05 12:14:10.355811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.377 qpair failed and we were unable to recover it. 00:30:36.377 [2024-12-05 12:14:10.356013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.377 [2024-12-05 12:14:10.356045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.377 qpair failed and we were unable to recover it. 00:30:36.377 [2024-12-05 12:14:10.356299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.377 [2024-12-05 12:14:10.356331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.377 qpair failed and we were unable to recover it. 00:30:36.377 [2024-12-05 12:14:10.356542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.377 [2024-12-05 12:14:10.356576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.377 qpair failed and we were unable to recover it. 
00:30:36.377 [2024-12-05 12:14:10.356774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.377 [2024-12-05 12:14:10.356805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.377 qpair failed and we were unable to recover it. 00:30:36.377 [2024-12-05 12:14:10.356933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.377 [2024-12-05 12:14:10.356964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.377 qpair failed and we were unable to recover it. 00:30:36.377 [2024-12-05 12:14:10.357174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.377 [2024-12-05 12:14:10.357211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.377 qpair failed and we were unable to recover it. 00:30:36.377 [2024-12-05 12:14:10.357505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.377 [2024-12-05 12:14:10.357540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.377 qpair failed and we were unable to recover it. 00:30:36.377 [2024-12-05 12:14:10.357684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.377 [2024-12-05 12:14:10.357716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.377 qpair failed and we were unable to recover it. 
00:30:36.377 [2024-12-05 12:14:10.357966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.377 [2024-12-05 12:14:10.357999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.377 qpair failed and we were unable to recover it. 00:30:36.377 [2024-12-05 12:14:10.358258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.377 [2024-12-05 12:14:10.358291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.377 qpair failed and we were unable to recover it. 00:30:36.377 [2024-12-05 12:14:10.358518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.377 [2024-12-05 12:14:10.358552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.377 qpair failed and we were unable to recover it. 00:30:36.377 [2024-12-05 12:14:10.358752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.377 [2024-12-05 12:14:10.358784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.377 qpair failed and we were unable to recover it. 00:30:36.377 [2024-12-05 12:14:10.358932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.377 [2024-12-05 12:14:10.358963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.377 qpair failed and we were unable to recover it. 
00:30:36.377 [2024-12-05 12:14:10.359168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.377 [2024-12-05 12:14:10.359200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.377 qpair failed and we were unable to recover it. 00:30:36.377 [2024-12-05 12:14:10.359462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.377 [2024-12-05 12:14:10.359496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.377 qpair failed and we were unable to recover it. 00:30:36.377 [2024-12-05 12:14:10.359626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.377 [2024-12-05 12:14:10.359658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.377 qpair failed and we were unable to recover it. 00:30:36.377 [2024-12-05 12:14:10.359801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.377 [2024-12-05 12:14:10.359834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.377 qpair failed and we were unable to recover it. 00:30:36.377 [2024-12-05 12:14:10.359966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.377 [2024-12-05 12:14:10.359997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.377 qpair failed and we were unable to recover it. 
00:30:36.377 [2024-12-05 12:14:10.360206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.377 [2024-12-05 12:14:10.360238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.377 qpair failed and we were unable to recover it. 00:30:36.377 [2024-12-05 12:14:10.360392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.377 [2024-12-05 12:14:10.360448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.377 qpair failed and we were unable to recover it. 00:30:36.377 [2024-12-05 12:14:10.360710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.377 [2024-12-05 12:14:10.360742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.378 qpair failed and we were unable to recover it. 00:30:36.378 [2024-12-05 12:14:10.360950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.378 [2024-12-05 12:14:10.360983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.378 qpair failed and we were unable to recover it. 00:30:36.378 [2024-12-05 12:14:10.361261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.378 [2024-12-05 12:14:10.361293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.378 qpair failed and we were unable to recover it. 
00:30:36.378 [2024-12-05 12:14:10.361479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.378 [2024-12-05 12:14:10.361512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.378 qpair failed and we were unable to recover it. 00:30:36.378 [2024-12-05 12:14:10.361656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.378 [2024-12-05 12:14:10.361689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.378 qpair failed and we were unable to recover it. 00:30:36.378 [2024-12-05 12:14:10.361823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.378 [2024-12-05 12:14:10.361856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.378 qpair failed and we were unable to recover it. 00:30:36.378 [2024-12-05 12:14:10.362034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.378 [2024-12-05 12:14:10.362065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.378 qpair failed and we were unable to recover it. 00:30:36.378 [2024-12-05 12:14:10.362287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.378 [2024-12-05 12:14:10.362318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.378 qpair failed and we were unable to recover it. 
00:30:36.378 [2024-12-05 12:14:10.362509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.378 [2024-12-05 12:14:10.362543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.378 qpair failed and we were unable to recover it. 00:30:36.378 [2024-12-05 12:14:10.362686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.378 [2024-12-05 12:14:10.362718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.378 qpair failed and we were unable to recover it. 00:30:36.378 [2024-12-05 12:14:10.362977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.378 [2024-12-05 12:14:10.363009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.378 qpair failed and we were unable to recover it. 00:30:36.378 [2024-12-05 12:14:10.363212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.378 [2024-12-05 12:14:10.363243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.378 qpair failed and we were unable to recover it. 00:30:36.378 [2024-12-05 12:14:10.363450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.378 [2024-12-05 12:14:10.363483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.378 qpair failed and we were unable to recover it. 
00:30:36.378 [2024-12-05 12:14:10.363635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.378 [2024-12-05 12:14:10.363667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.378 qpair failed and we were unable to recover it. 00:30:36.378 [2024-12-05 12:14:10.363875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.378 [2024-12-05 12:14:10.363906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.378 qpair failed and we were unable to recover it. 00:30:36.378 [2024-12-05 12:14:10.364131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.378 [2024-12-05 12:14:10.364164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.378 qpair failed and we were unable to recover it. 00:30:36.378 [2024-12-05 12:14:10.364380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.378 [2024-12-05 12:14:10.364413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.378 qpair failed and we were unable to recover it. 00:30:36.378 [2024-12-05 12:14:10.364565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.378 [2024-12-05 12:14:10.364596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.378 qpair failed and we were unable to recover it. 
00:30:36.378 [2024-12-05 12:14:10.364821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.378 [2024-12-05 12:14:10.364853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.378 qpair failed and we were unable to recover it. 00:30:36.378 [2024-12-05 12:14:10.365163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.378 [2024-12-05 12:14:10.365195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.378 qpair failed and we were unable to recover it. 00:30:36.378 [2024-12-05 12:14:10.365491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.378 [2024-12-05 12:14:10.365525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.378 qpair failed and we were unable to recover it. 00:30:36.378 [2024-12-05 12:14:10.365682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.378 [2024-12-05 12:14:10.365715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.378 qpair failed and we were unable to recover it. 00:30:36.378 [2024-12-05 12:14:10.365945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.378 [2024-12-05 12:14:10.365976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.378 qpair failed and we were unable to recover it. 
00:30:36.378 [2024-12-05 12:14:10.366255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.378 [2024-12-05 12:14:10.366288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.378 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triple repeats for tqpair=0x7fc420000b90 from 12:14:10.366476 through 12:14:10.394148 ...]
00:30:36.381 [2024-12-05 12:14:10.394148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.381 [2024-12-05 12:14:10.394224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.381 qpair failed and we were unable to recover it.
[... the same triple repeats for tqpair=0xc4cbe0 from 12:14:10.394540 through 12:14:10.395484 ...]
00:30:36.381 [2024-12-05 12:14:10.395617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.381 [2024-12-05 12:14:10.395650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.381 qpair failed and we were unable to recover it. 00:30:36.381 [2024-12-05 12:14:10.395859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.381 [2024-12-05 12:14:10.395890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.381 qpair failed and we were unable to recover it. 00:30:36.381 [2024-12-05 12:14:10.396116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.381 [2024-12-05 12:14:10.396148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.381 qpair failed and we were unable to recover it. 00:30:36.381 [2024-12-05 12:14:10.396400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.381 [2024-12-05 12:14:10.396433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.381 qpair failed and we were unable to recover it. 00:30:36.381 [2024-12-05 12:14:10.396625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.381 [2024-12-05 12:14:10.396658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.381 qpair failed and we were unable to recover it. 
00:30:36.381 [2024-12-05 12:14:10.396853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.381 [2024-12-05 12:14:10.396885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.381 qpair failed and we were unable to recover it. 00:30:36.381 [2024-12-05 12:14:10.397213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.381 [2024-12-05 12:14:10.397246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.381 qpair failed and we were unable to recover it. 00:30:36.381 [2024-12-05 12:14:10.397404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.381 [2024-12-05 12:14:10.397438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.381 qpair failed and we were unable to recover it. 00:30:36.381 [2024-12-05 12:14:10.397579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.381 [2024-12-05 12:14:10.397611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.381 qpair failed and we were unable to recover it. 00:30:36.381 [2024-12-05 12:14:10.397816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.381 [2024-12-05 12:14:10.397848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.381 qpair failed and we were unable to recover it. 
00:30:36.381 [2024-12-05 12:14:10.398054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.381 [2024-12-05 12:14:10.398086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.381 qpair failed and we were unable to recover it. 00:30:36.381 [2024-12-05 12:14:10.398338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.381 [2024-12-05 12:14:10.398379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.381 qpair failed and we were unable to recover it. 00:30:36.382 [2024-12-05 12:14:10.398584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.382 [2024-12-05 12:14:10.398615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.382 qpair failed and we were unable to recover it. 00:30:36.382 [2024-12-05 12:14:10.398867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.382 [2024-12-05 12:14:10.398904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.382 qpair failed and we were unable to recover it. 00:30:36.382 [2024-12-05 12:14:10.399214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.382 [2024-12-05 12:14:10.399246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.382 qpair failed and we were unable to recover it. 
00:30:36.382 [2024-12-05 12:14:10.399498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.382 [2024-12-05 12:14:10.399531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.382 qpair failed and we were unable to recover it. 00:30:36.382 [2024-12-05 12:14:10.399807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.382 [2024-12-05 12:14:10.399839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.382 qpair failed and we were unable to recover it. 00:30:36.382 [2024-12-05 12:14:10.399977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.382 [2024-12-05 12:14:10.400008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.382 qpair failed and we were unable to recover it. 00:30:36.382 [2024-12-05 12:14:10.400202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.382 [2024-12-05 12:14:10.400234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.382 qpair failed and we were unable to recover it. 00:30:36.382 [2024-12-05 12:14:10.400431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.382 [2024-12-05 12:14:10.400464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.382 qpair failed and we were unable to recover it. 
00:30:36.382 [2024-12-05 12:14:10.400666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.382 [2024-12-05 12:14:10.400698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.382 qpair failed and we were unable to recover it. 00:30:36.382 [2024-12-05 12:14:10.400829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.382 [2024-12-05 12:14:10.400867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.382 qpair failed and we were unable to recover it. 00:30:36.382 [2024-12-05 12:14:10.401161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.382 [2024-12-05 12:14:10.401193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.382 qpair failed and we were unable to recover it. 00:30:36.382 [2024-12-05 12:14:10.401447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.382 [2024-12-05 12:14:10.401480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.382 qpair failed and we were unable to recover it. 00:30:36.382 [2024-12-05 12:14:10.401699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.382 [2024-12-05 12:14:10.401730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.382 qpair failed and we were unable to recover it. 
00:30:36.382 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 222941 Killed "${NVMF_APP[@]}" "$@" 00:30:36.382 [2024-12-05 12:14:10.402022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.382 [2024-12-05 12:14:10.402058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.382 qpair failed and we were unable to recover it. 00:30:36.382 [2024-12-05 12:14:10.402379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.382 [2024-12-05 12:14:10.402413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.382 qpair failed and we were unable to recover it. 00:30:36.382 [2024-12-05 12:14:10.402564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.382 [2024-12-05 12:14:10.402599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.382 qpair failed and we were unable to recover it. 00:30:36.382 12:14:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:30:36.382 [2024-12-05 12:14:10.402720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.382 [2024-12-05 12:14:10.402752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.382 qpair failed and we were unable to recover it. 00:30:36.382 [2024-12-05 12:14:10.402968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.382 [2024-12-05 12:14:10.403001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.382 qpair failed and we were unable to recover it. 
00:30:36.382 12:14:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:36.382 [2024-12-05 12:14:10.403284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.382 [2024-12-05 12:14:10.403319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.382 qpair failed and we were unable to recover it. 00:30:36.382 12:14:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:30:36.382 [2024-12-05 12:14:10.403541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.382 [2024-12-05 12:14:10.403575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.382 qpair failed and we were unable to recover it. 00:30:36.382 [2024-12-05 12:14:10.403726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.382 12:14:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:36.382 [2024-12-05 12:14:10.403760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.382 qpair failed and we were unable to recover it. 00:30:36.382 [2024-12-05 12:14:10.403901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.382 [2024-12-05 12:14:10.403933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.382 qpair failed and we were unable to recover it. 
00:30:36.382 12:14:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:36.382 [2024-12-05 12:14:10.404129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.382 [2024-12-05 12:14:10.404165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.382 qpair failed and we were unable to recover it. 00:30:36.382 [2024-12-05 12:14:10.404317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.382 [2024-12-05 12:14:10.404348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.382 qpair failed and we were unable to recover it. 00:30:36.382 [2024-12-05 12:14:10.404511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.382 [2024-12-05 12:14:10.404545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.382 qpair failed and we were unable to recover it. 00:30:36.382 [2024-12-05 12:14:10.404753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.382 [2024-12-05 12:14:10.404785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.382 qpair failed and we were unable to recover it. 00:30:36.382 [2024-12-05 12:14:10.405053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.382 [2024-12-05 12:14:10.405084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.382 qpair failed and we were unable to recover it. 
00:30:36.382 [2024-12-05 12:14:10.405290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.382 [2024-12-05 12:14:10.405326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.382 qpair failed and we were unable to recover it. 00:30:36.382 [2024-12-05 12:14:10.405519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.382 [2024-12-05 12:14:10.405551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.382 qpair failed and we were unable to recover it. 00:30:36.382 [2024-12-05 12:14:10.405700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.382 [2024-12-05 12:14:10.405730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.382 qpair failed and we were unable to recover it. 00:30:36.382 [2024-12-05 12:14:10.405871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.382 [2024-12-05 12:14:10.405902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.382 qpair failed and we were unable to recover it. 00:30:36.382 [2024-12-05 12:14:10.406201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.382 [2024-12-05 12:14:10.406233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.382 qpair failed and we were unable to recover it. 
00:30:36.382 [2024-12-05 12:14:10.406498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.382 [2024-12-05 12:14:10.406531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.382 qpair failed and we were unable to recover it. 00:30:36.382 [2024-12-05 12:14:10.406670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.382 [2024-12-05 12:14:10.406701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.382 qpair failed and we were unable to recover it. 00:30:36.382 [2024-12-05 12:14:10.406853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.382 [2024-12-05 12:14:10.406885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.382 qpair failed and we were unable to recover it. 00:30:36.382 [2024-12-05 12:14:10.407118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.382 [2024-12-05 12:14:10.407151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.382 qpair failed and we were unable to recover it. 00:30:36.382 [2024-12-05 12:14:10.407380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.383 [2024-12-05 12:14:10.407412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.383 qpair failed and we were unable to recover it. 
00:30:36.383 [2024-12-05 12:14:10.407580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.383 [2024-12-05 12:14:10.407612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.383 qpair failed and we were unable to recover it. 00:30:36.383 [2024-12-05 12:14:10.407763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.383 [2024-12-05 12:14:10.407795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.383 qpair failed and we were unable to recover it. 00:30:36.383 [2024-12-05 12:14:10.408021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.383 [2024-12-05 12:14:10.408051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.383 qpair failed and we were unable to recover it. 00:30:36.383 [2024-12-05 12:14:10.408183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.383 [2024-12-05 12:14:10.408216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.383 qpair failed and we were unable to recover it. 00:30:36.383 [2024-12-05 12:14:10.408432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.383 [2024-12-05 12:14:10.408464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.383 qpair failed and we were unable to recover it. 
00:30:36.383 [2024-12-05 12:14:10.408611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.383 [2024-12-05 12:14:10.408643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.383 qpair failed and we were unable to recover it. 00:30:36.383 [2024-12-05 12:14:10.408796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.383 [2024-12-05 12:14:10.408828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.383 qpair failed and we were unable to recover it. 00:30:36.383 [2024-12-05 12:14:10.408967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.383 [2024-12-05 12:14:10.408999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.383 qpair failed and we were unable to recover it. 00:30:36.383 [2024-12-05 12:14:10.409259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.383 [2024-12-05 12:14:10.409293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.383 qpair failed and we were unable to recover it. 00:30:36.383 [2024-12-05 12:14:10.409485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.383 [2024-12-05 12:14:10.409520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.383 qpair failed and we were unable to recover it. 
00:30:36.383 [2024-12-05 12:14:10.409721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.383 [2024-12-05 12:14:10.409760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.383 qpair failed and we were unable to recover it. 00:30:36.383 [2024-12-05 12:14:10.410056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.383 [2024-12-05 12:14:10.410089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.383 qpair failed and we were unable to recover it. 00:30:36.383 [2024-12-05 12:14:10.410405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.383 [2024-12-05 12:14:10.410438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.383 qpair failed and we were unable to recover it. 00:30:36.383 [2024-12-05 12:14:10.410699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.383 [2024-12-05 12:14:10.410731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.383 qpair failed and we were unable to recover it. 00:30:36.383 [2024-12-05 12:14:10.410881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.383 [2024-12-05 12:14:10.410911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.383 qpair failed and we were unable to recover it. 
00:30:36.383 12:14:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@328 -- # nvmfpid=223693 00:30:36.383 [2024-12-05 12:14:10.411189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.383 [2024-12-05 12:14:10.411225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.383 qpair failed and we were unable to recover it. 00:30:36.383 [2024-12-05 12:14:10.411441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.383 [2024-12-05 12:14:10.411474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.383 qpair failed and we were unable to recover it. 00:30:36.383 [2024-12-05 12:14:10.411632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.383 [2024-12-05 12:14:10.411664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.383 12:14:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@329 -- # waitforlisten 223693 00:30:36.383 qpair failed and we were unable to recover it. 00:30:36.383 12:14:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:36.383 [2024-12-05 12:14:10.411806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.383 [2024-12-05 12:14:10.411837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.383 qpair failed and we were unable to recover it. 
00:30:36.383 [2024-12-05 12:14:10.411975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.383 [2024-12-05 12:14:10.412005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.383 qpair failed and we were unable to recover it. 00:30:36.383 12:14:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 223693 ']' 00:30:36.383 [2024-12-05 12:14:10.412220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.383 [2024-12-05 12:14:10.412253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.383 qpair failed and we were unable to recover it. 00:30:36.383 12:14:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:36.383 [2024-12-05 12:14:10.412489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.383 [2024-12-05 12:14:10.412530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.383 qpair failed and we were unable to recover it. 00:30:36.383 [2024-12-05 12:14:10.412685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.383 [2024-12-05 12:14:10.412727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.383 qpair failed and we were unable to recover it. 
00:30:36.383 12:14:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:36.383 [2024-12-05 12:14:10.412878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.383 [2024-12-05 12:14:10.412912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.383 qpair failed and we were unable to recover it. 00:30:36.383 12:14:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:36.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:36.383 [2024-12-05 12:14:10.413112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.383 [2024-12-05 12:14:10.413147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.383 qpair failed and we were unable to recover it. 00:30:36.383 [2024-12-05 12:14:10.413353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.383 12:14:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:36.383 [2024-12-05 12:14:10.413398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.383 qpair failed and we were unable to recover it. 00:30:36.383 [2024-12-05 12:14:10.413557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.383 [2024-12-05 12:14:10.413589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.383 qpair failed and we were unable to recover it. 
00:30:36.383 12:14:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:36.383 [2024-12-05 12:14:10.413800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.383 [2024-12-05 12:14:10.413836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.383 qpair failed and we were unable to recover it. 00:30:36.383 [2024-12-05 12:14:10.414110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.383 [2024-12-05 12:14:10.414142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.383 qpair failed and we were unable to recover it. 00:30:36.383 [2024-12-05 12:14:10.414326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.384 [2024-12-05 12:14:10.414358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.384 qpair failed and we were unable to recover it. 00:30:36.384 [2024-12-05 12:14:10.414524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.384 [2024-12-05 12:14:10.414559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.384 qpair failed and we were unable to recover it. 00:30:36.384 [2024-12-05 12:14:10.414769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.384 [2024-12-05 12:14:10.414802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.384 qpair failed and we were unable to recover it. 
00:30:36.384 [2024-12-05 12:14:10.415043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.384 [2024-12-05 12:14:10.415076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.384 qpair failed and we were unable to recover it. 00:30:36.384 [2024-12-05 12:14:10.415346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.384 [2024-12-05 12:14:10.415399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.384 qpair failed and we were unable to recover it. 00:30:36.384 [2024-12-05 12:14:10.415628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.384 [2024-12-05 12:14:10.415665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.384 qpair failed and we were unable to recover it. 00:30:36.384 [2024-12-05 12:14:10.415872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.384 [2024-12-05 12:14:10.415905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.384 qpair failed and we were unable to recover it. 00:30:36.384 [2024-12-05 12:14:10.416097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.384 [2024-12-05 12:14:10.416129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.384 qpair failed and we were unable to recover it. 
00:30:36.384 [2024-12-05 12:14:10.416339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.384 [2024-12-05 12:14:10.416384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.384 qpair failed and we were unable to recover it. 00:30:36.384 [2024-12-05 12:14:10.416613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.384 [2024-12-05 12:14:10.416645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.384 qpair failed and we were unable to recover it. 00:30:36.384 [2024-12-05 12:14:10.416842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.384 [2024-12-05 12:14:10.416877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.384 qpair failed and we were unable to recover it. 00:30:36.384 [2024-12-05 12:14:10.417012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.384 [2024-12-05 12:14:10.417045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.384 qpair failed and we were unable to recover it. 00:30:36.384 [2024-12-05 12:14:10.417255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.384 [2024-12-05 12:14:10.417287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.384 qpair failed and we were unable to recover it. 
00:30:36.384 [2024-12-05 12:14:10.417554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.384 [2024-12-05 12:14:10.417590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.384 qpair failed and we were unable to recover it. 00:30:36.384 [2024-12-05 12:14:10.417740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.384 [2024-12-05 12:14:10.417773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.384 qpair failed and we were unable to recover it. 00:30:36.384 [2024-12-05 12:14:10.417980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.384 [2024-12-05 12:14:10.418012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.384 qpair failed and we were unable to recover it. 00:30:36.384 [2024-12-05 12:14:10.418266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.384 [2024-12-05 12:14:10.418299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.384 qpair failed and we were unable to recover it. 00:30:36.384 [2024-12-05 12:14:10.418551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.384 [2024-12-05 12:14:10.418591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.384 qpair failed and we were unable to recover it. 
00:30:36.384 [2024-12-05 12:14:10.418819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.384 [2024-12-05 12:14:10.418854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.384 qpair failed and we were unable to recover it. 00:30:36.384 [2024-12-05 12:14:10.419002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.384 [2024-12-05 12:14:10.419033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.384 qpair failed and we were unable to recover it. 00:30:36.384 [2024-12-05 12:14:10.419215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.384 [2024-12-05 12:14:10.419247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.384 qpair failed and we were unable to recover it. 00:30:36.384 [2024-12-05 12:14:10.419468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.384 [2024-12-05 12:14:10.419502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.384 qpair failed and we were unable to recover it. 00:30:36.384 [2024-12-05 12:14:10.419697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.384 [2024-12-05 12:14:10.419729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.384 qpair failed and we were unable to recover it. 
00:30:36.384 [2024-12-05 12:14:10.420042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.384 [2024-12-05 12:14:10.420076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.384 qpair failed and we were unable to recover it. 00:30:36.384 [2024-12-05 12:14:10.420255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.384 [2024-12-05 12:14:10.420286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.384 qpair failed and we were unable to recover it. 00:30:36.384 [2024-12-05 12:14:10.420495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.384 [2024-12-05 12:14:10.420528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.384 qpair failed and we were unable to recover it. 00:30:36.384 [2024-12-05 12:14:10.420820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.384 [2024-12-05 12:14:10.420853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.384 qpair failed and we were unable to recover it. 00:30:36.384 [2024-12-05 12:14:10.421007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.384 [2024-12-05 12:14:10.421039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.384 qpair failed and we were unable to recover it. 
00:30:36.384 [2024-12-05 12:14:10.421173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.384 [2024-12-05 12:14:10.421204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.384 qpair failed and we were unable to recover it. 00:30:36.384 [2024-12-05 12:14:10.421432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.384 [2024-12-05 12:14:10.421467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.384 qpair failed and we were unable to recover it. 00:30:36.384 [2024-12-05 12:14:10.421754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.384 [2024-12-05 12:14:10.421786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.384 qpair failed and we were unable to recover it. 00:30:36.384 [2024-12-05 12:14:10.421973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.384 [2024-12-05 12:14:10.422008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.384 qpair failed and we were unable to recover it. 00:30:36.384 [2024-12-05 12:14:10.422266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.384 [2024-12-05 12:14:10.422299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.384 qpair failed and we were unable to recover it. 
00:30:36.384 [2024-12-05 12:14:10.422636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.384 [2024-12-05 12:14:10.422669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.384 qpair failed and we were unable to recover it. 00:30:36.384 [2024-12-05 12:14:10.422827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.384 [2024-12-05 12:14:10.422860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.384 qpair failed and we were unable to recover it. 00:30:36.384 [2024-12-05 12:14:10.422986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.384 [2024-12-05 12:14:10.423017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.384 qpair failed and we were unable to recover it. 00:30:36.384 [2024-12-05 12:14:10.423305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.384 [2024-12-05 12:14:10.423340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.384 qpair failed and we were unable to recover it. 00:30:36.384 [2024-12-05 12:14:10.423504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.384 [2024-12-05 12:14:10.423539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.384 qpair failed and we were unable to recover it. 
00:30:36.384 [2024-12-05 12:14:10.423731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.384 [2024-12-05 12:14:10.423764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.384 qpair failed and we were unable to recover it. 00:30:36.385 [2024-12-05 12:14:10.424018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.385 [2024-12-05 12:14:10.424052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.385 qpair failed and we were unable to recover it. 00:30:36.385 [2024-12-05 12:14:10.424181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.385 [2024-12-05 12:14:10.424213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.385 qpair failed and we were unable to recover it. 00:30:36.385 [2024-12-05 12:14:10.424426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.385 [2024-12-05 12:14:10.424458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.385 qpair failed and we were unable to recover it. 00:30:36.385 [2024-12-05 12:14:10.424624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.385 [2024-12-05 12:14:10.424656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.385 qpair failed and we were unable to recover it. 
00:30:36.385 [2024-12-05 12:14:10.424851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.385 [2024-12-05 12:14:10.424889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.385 qpair failed and we were unable to recover it. 00:30:36.385 [2024-12-05 12:14:10.425154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.385 [2024-12-05 12:14:10.425185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.385 qpair failed and we were unable to recover it. 00:30:36.385 [2024-12-05 12:14:10.425329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.385 [2024-12-05 12:14:10.425360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.385 qpair failed and we were unable to recover it. 00:30:36.385 [2024-12-05 12:14:10.425607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.385 [2024-12-05 12:14:10.425641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.385 qpair failed and we were unable to recover it. 00:30:36.385 [2024-12-05 12:14:10.425846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.385 [2024-12-05 12:14:10.425877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.385 qpair failed and we were unable to recover it. 
00:30:36.385 [2024-12-05 12:14:10.426102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.385 [2024-12-05 12:14:10.426134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.385 qpair failed and we were unable to recover it. 00:30:36.385 [2024-12-05 12:14:10.426396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.385 [2024-12-05 12:14:10.426431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.385 qpair failed and we were unable to recover it. 00:30:36.385 [2024-12-05 12:14:10.426632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.385 [2024-12-05 12:14:10.426664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.385 qpair failed and we were unable to recover it. 00:30:36.385 [2024-12-05 12:14:10.426853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.385 [2024-12-05 12:14:10.426886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.385 qpair failed and we were unable to recover it. 00:30:36.385 [2024-12-05 12:14:10.427068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.385 [2024-12-05 12:14:10.427101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.385 qpair failed and we were unable to recover it. 
00:30:36.385 [2024-12-05 12:14:10.427301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.385 [2024-12-05 12:14:10.427335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.385 qpair failed and we were unable to recover it. 00:30:36.385 [2024-12-05 12:14:10.427640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.385 [2024-12-05 12:14:10.427674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.385 qpair failed and we were unable to recover it. 00:30:36.385 [2024-12-05 12:14:10.427829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.385 [2024-12-05 12:14:10.427862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.385 qpair failed and we were unable to recover it. 00:30:36.385 [2024-12-05 12:14:10.428043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.385 [2024-12-05 12:14:10.428078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.385 qpair failed and we were unable to recover it. 00:30:36.385 [2024-12-05 12:14:10.428307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.385 [2024-12-05 12:14:10.428339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.385 qpair failed and we were unable to recover it. 
00:30:36.385 [2024-12-05 12:14:10.428615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.385 [2024-12-05 12:14:10.428705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.385 qpair failed and we were unable to recover it. 00:30:36.385 [2024-12-05 12:14:10.428947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.385 [2024-12-05 12:14:10.428986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.385 qpair failed and we were unable to recover it. 00:30:36.385 [2024-12-05 12:14:10.429250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.385 [2024-12-05 12:14:10.429286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.385 qpair failed and we were unable to recover it. 00:30:36.385 [2024-12-05 12:14:10.429565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.385 [2024-12-05 12:14:10.429603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.385 qpair failed and we were unable to recover it. 00:30:36.385 [2024-12-05 12:14:10.429901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.385 [2024-12-05 12:14:10.429933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.385 qpair failed and we were unable to recover it. 
00:30:36.385 [2024-12-05 12:14:10.430086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.385 [2024-12-05 12:14:10.430118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.385 qpair failed and we were unable to recover it. 00:30:36.385 [2024-12-05 12:14:10.430301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.385 [2024-12-05 12:14:10.430334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.385 qpair failed and we were unable to recover it. 00:30:36.385 [2024-12-05 12:14:10.430573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.385 [2024-12-05 12:14:10.430607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.385 qpair failed and we were unable to recover it. 00:30:36.385 [2024-12-05 12:14:10.430800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.385 [2024-12-05 12:14:10.430832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.385 qpair failed and we were unable to recover it. 00:30:36.385 [2024-12-05 12:14:10.430957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.385 [2024-12-05 12:14:10.430988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.385 qpair failed and we were unable to recover it. 
00:30:36.385 [2024-12-05 12:14:10.431254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.385 [2024-12-05 12:14:10.431286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.385 qpair failed and we were unable to recover it. 00:30:36.385 [2024-12-05 12:14:10.431511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.385 [2024-12-05 12:14:10.431546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.385 qpair failed and we were unable to recover it. 00:30:36.385 [2024-12-05 12:14:10.431824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.385 [2024-12-05 12:14:10.431856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.385 qpair failed and we were unable to recover it. 00:30:36.385 [2024-12-05 12:14:10.432042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.385 [2024-12-05 12:14:10.432073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.385 qpair failed and we were unable to recover it. 00:30:36.385 [2024-12-05 12:14:10.432325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.385 [2024-12-05 12:14:10.432358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.385 qpair failed and we were unable to recover it. 
00:30:36.385 [2024-12-05 12:14:10.432563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.385 [2024-12-05 12:14:10.432595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.385 qpair failed and we were unable to recover it. 00:30:36.385 [2024-12-05 12:14:10.432803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.385 [2024-12-05 12:14:10.432837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.385 qpair failed and we were unable to recover it. 00:30:36.385 [2024-12-05 12:14:10.433043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.385 [2024-12-05 12:14:10.433076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.385 qpair failed and we were unable to recover it. 00:30:36.385 [2024-12-05 12:14:10.433298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.385 [2024-12-05 12:14:10.433331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.385 qpair failed and we were unable to recover it. 00:30:36.386 [2024-12-05 12:14:10.433573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.386 [2024-12-05 12:14:10.433607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.386 qpair failed and we were unable to recover it. 
00:30:36.386 [2024-12-05 12:14:10.433878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.386 [2024-12-05 12:14:10.433912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.386 qpair failed and we were unable to recover it. 00:30:36.386 [2024-12-05 12:14:10.434052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.386 [2024-12-05 12:14:10.434084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.386 qpair failed and we were unable to recover it. 00:30:36.386 [2024-12-05 12:14:10.434390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.386 [2024-12-05 12:14:10.434425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.386 qpair failed and we were unable to recover it. 00:30:36.386 [2024-12-05 12:14:10.434621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.386 [2024-12-05 12:14:10.434654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.386 qpair failed and we were unable to recover it. 00:30:36.386 [2024-12-05 12:14:10.434773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.386 [2024-12-05 12:14:10.434806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.386 qpair failed and we were unable to recover it. 
00:30:36.386 [2024-12-05 12:14:10.434989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.386 [2024-12-05 12:14:10.435020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.386 qpair failed and we were unable to recover it. 00:30:36.386 [2024-12-05 12:14:10.435232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.386 [2024-12-05 12:14:10.435266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.386 qpair failed and we were unable to recover it. 00:30:36.386 [2024-12-05 12:14:10.435577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.386 [2024-12-05 12:14:10.435611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.386 qpair failed and we were unable to recover it. 00:30:36.386 [2024-12-05 12:14:10.435892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.386 [2024-12-05 12:14:10.435925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.386 qpair failed and we were unable to recover it. 00:30:36.386 [2024-12-05 12:14:10.436223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.386 [2024-12-05 12:14:10.436256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.386 qpair failed and we were unable to recover it. 
00:30:36.386 [2024-12-05 12:14:10.436455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.386 [2024-12-05 12:14:10.436489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.386 qpair failed and we were unable to recover it. 00:30:36.386 [2024-12-05 12:14:10.436809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.386 [2024-12-05 12:14:10.436841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.386 qpair failed and we were unable to recover it. 00:30:36.386 [2024-12-05 12:14:10.436969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.386 [2024-12-05 12:14:10.437001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.386 qpair failed and we were unable to recover it. 00:30:36.386 [2024-12-05 12:14:10.437213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.386 [2024-12-05 12:14:10.437245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.386 qpair failed and we were unable to recover it. 00:30:36.386 [2024-12-05 12:14:10.437519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.386 [2024-12-05 12:14:10.437553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.386 qpair failed and we were unable to recover it. 
00:30:36.386 [2024-12-05 12:14:10.437756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.386 [2024-12-05 12:14:10.437789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.386 qpair failed and we were unable to recover it. 00:30:36.386 [2024-12-05 12:14:10.437997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.386 [2024-12-05 12:14:10.438029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.386 qpair failed and we were unable to recover it. 00:30:36.386 [2024-12-05 12:14:10.438304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.386 [2024-12-05 12:14:10.438339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.386 qpair failed and we were unable to recover it. 00:30:36.386 [2024-12-05 12:14:10.438514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.386 [2024-12-05 12:14:10.438550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.386 qpair failed and we were unable to recover it. 00:30:36.386 [2024-12-05 12:14:10.438832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.386 [2024-12-05 12:14:10.438866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.386 qpair failed and we were unable to recover it. 
00:30:36.386 [2024-12-05 12:14:10.439093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.386 [2024-12-05 12:14:10.439132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.386 qpair failed and we were unable to recover it. 00:30:36.386 [2024-12-05 12:14:10.439340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.386 [2024-12-05 12:14:10.439387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.386 qpair failed and we were unable to recover it. 00:30:36.386 [2024-12-05 12:14:10.439605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.386 [2024-12-05 12:14:10.439638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.386 qpair failed and we were unable to recover it. 00:30:36.386 [2024-12-05 12:14:10.439835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.386 [2024-12-05 12:14:10.439867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.386 qpair failed and we were unable to recover it. 00:30:36.386 [2024-12-05 12:14:10.440170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.386 [2024-12-05 12:14:10.440201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.386 qpair failed and we were unable to recover it. 
00:30:36.386 [2024-12-05 12:14:10.440403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.386 [2024-12-05 12:14:10.440436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.386 qpair failed and we were unable to recover it. 00:30:36.386 [2024-12-05 12:14:10.440652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.386 [2024-12-05 12:14:10.440685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.386 qpair failed and we were unable to recover it. 00:30:36.386 [2024-12-05 12:14:10.440894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.386 [2024-12-05 12:14:10.440927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.386 qpair failed and we were unable to recover it. 00:30:36.386 [2024-12-05 12:14:10.441070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.386 [2024-12-05 12:14:10.441102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.386 qpair failed and we were unable to recover it. 00:30:36.386 [2024-12-05 12:14:10.441292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.386 [2024-12-05 12:14:10.441332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.386 qpair failed and we were unable to recover it. 
00:30:36.386 [2024-12-05 12:14:10.441535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.386 [2024-12-05 12:14:10.441569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.386 qpair failed and we were unable to recover it. 00:30:36.386 [2024-12-05 12:14:10.441789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.386 [2024-12-05 12:14:10.441825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.386 qpair failed and we were unable to recover it. 00:30:36.386 [2024-12-05 12:14:10.441965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.386 [2024-12-05 12:14:10.441999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.386 qpair failed and we were unable to recover it. 00:30:36.386 [2024-12-05 12:14:10.442196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.386 [2024-12-05 12:14:10.442228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.386 qpair failed and we were unable to recover it. 00:30:36.386 [2024-12-05 12:14:10.442378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.386 [2024-12-05 12:14:10.442414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.386 qpair failed and we were unable to recover it. 
00:30:36.386 [2024-12-05 12:14:10.442542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.386 [2024-12-05 12:14:10.442577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.386 qpair failed and we were unable to recover it. 00:30:36.386 [2024-12-05 12:14:10.442857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.386 [2024-12-05 12:14:10.442890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.386 qpair failed and we were unable to recover it. 00:30:36.386 [2024-12-05 12:14:10.443094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.387 [2024-12-05 12:14:10.443126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.387 qpair failed and we were unable to recover it. 00:30:36.387 [2024-12-05 12:14:10.443268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.387 [2024-12-05 12:14:10.443302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.387 qpair failed and we were unable to recover it. 00:30:36.387 [2024-12-05 12:14:10.443527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.387 [2024-12-05 12:14:10.443561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.387 qpair failed and we were unable to recover it. 
00:30:36.387 [2024-12-05 12:14:10.443713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.387 [2024-12-05 12:14:10.443753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.387 qpair failed and we were unable to recover it. 00:30:36.387 [2024-12-05 12:14:10.443953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.387 [2024-12-05 12:14:10.443990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.387 qpair failed and we were unable to recover it. 00:30:36.387 [2024-12-05 12:14:10.444185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.387 [2024-12-05 12:14:10.444218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.387 qpair failed and we were unable to recover it. 00:30:36.387 [2024-12-05 12:14:10.444473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.387 [2024-12-05 12:14:10.444507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.387 qpair failed and we were unable to recover it. 00:30:36.387 [2024-12-05 12:14:10.444707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.387 [2024-12-05 12:14:10.444742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.387 qpair failed and we were unable to recover it. 
00:30:36.387 [2024-12-05 12:14:10.445015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.387 [2024-12-05 12:14:10.445049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.387 qpair failed and we were unable to recover it. 00:30:36.387 [2024-12-05 12:14:10.445237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.387 [2024-12-05 12:14:10.445270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.387 qpair failed and we were unable to recover it. 00:30:36.387 [2024-12-05 12:14:10.445413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.387 [2024-12-05 12:14:10.445448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.387 qpair failed and we were unable to recover it. 00:30:36.387 [2024-12-05 12:14:10.445665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.387 [2024-12-05 12:14:10.445698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.387 qpair failed and we were unable to recover it. 00:30:36.387 [2024-12-05 12:14:10.445899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.387 [2024-12-05 12:14:10.445933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.387 qpair failed and we were unable to recover it. 
00:30:36.387 [2024-12-05 12:14:10.446151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.387 [2024-12-05 12:14:10.446184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.387 qpair failed and we were unable to recover it. 00:30:36.387 [2024-12-05 12:14:10.446390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.387 [2024-12-05 12:14:10.446425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.387 qpair failed and we were unable to recover it. 00:30:36.387 [2024-12-05 12:14:10.446627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.387 [2024-12-05 12:14:10.446661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.387 qpair failed and we were unable to recover it. 00:30:36.387 [2024-12-05 12:14:10.446787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.387 [2024-12-05 12:14:10.446820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.387 qpair failed and we were unable to recover it. 00:30:36.387 [2024-12-05 12:14:10.447013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.387 [2024-12-05 12:14:10.447047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.387 qpair failed and we were unable to recover it. 
00:30:36.387 [2024-12-05 12:14:10.447228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.387 [2024-12-05 12:14:10.447262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.387 qpair failed and we were unable to recover it. 00:30:36.387 [2024-12-05 12:14:10.447460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.387 [2024-12-05 12:14:10.447493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.387 qpair failed and we were unable to recover it. 00:30:36.387 [2024-12-05 12:14:10.447607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.387 [2024-12-05 12:14:10.447638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.387 qpair failed and we were unable to recover it. 00:30:36.387 [2024-12-05 12:14:10.447869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.387 [2024-12-05 12:14:10.447902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.387 qpair failed and we were unable to recover it. 00:30:36.387 [2024-12-05 12:14:10.448051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.387 [2024-12-05 12:14:10.448083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.387 qpair failed and we were unable to recover it. 
00:30:36.387 [2024-12-05 12:14:10.448278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.387 [2024-12-05 12:14:10.448317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.387 qpair failed and we were unable to recover it. 00:30:36.387 [2024-12-05 12:14:10.448518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.387 [2024-12-05 12:14:10.448555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.387 qpair failed and we were unable to recover it. 00:30:36.387 [2024-12-05 12:14:10.448803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.387 [2024-12-05 12:14:10.448839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.387 qpair failed and we were unable to recover it. 00:30:36.387 [2024-12-05 12:14:10.448968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.387 [2024-12-05 12:14:10.449000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.387 qpair failed and we were unable to recover it. 00:30:36.387 [2024-12-05 12:14:10.449178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.387 [2024-12-05 12:14:10.449212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.387 qpair failed and we were unable to recover it. 
00:30:36.387 [2024-12-05 12:14:10.449490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.387 [2024-12-05 12:14:10.449527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.387 qpair failed and we were unable to recover it. 00:30:36.387 [2024-12-05 12:14:10.449670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.387 [2024-12-05 12:14:10.449702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.387 qpair failed and we were unable to recover it. 00:30:36.387 [2024-12-05 12:14:10.449814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.387 [2024-12-05 12:14:10.449847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.387 qpair failed and we were unable to recover it. 00:30:36.387 [2024-12-05 12:14:10.449964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.387 [2024-12-05 12:14:10.449999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.387 qpair failed and we were unable to recover it. 00:30:36.387 [2024-12-05 12:14:10.450122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.387 [2024-12-05 12:14:10.450154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.387 qpair failed and we were unable to recover it. 
00:30:36.387 [2024-12-05 12:14:10.450272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.387 [2024-12-05 12:14:10.450304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.387 qpair failed and we were unable to recover it. 00:30:36.387 [2024-12-05 12:14:10.450532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.387 [2024-12-05 12:14:10.450566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.387 qpair failed and we were unable to recover it. 00:30:36.388 [2024-12-05 12:14:10.450722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.388 [2024-12-05 12:14:10.450757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.388 qpair failed and we were unable to recover it. 00:30:36.388 [2024-12-05 12:14:10.450961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.388 [2024-12-05 12:14:10.450997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.388 qpair failed and we were unable to recover it. 00:30:36.388 [2024-12-05 12:14:10.451116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.388 [2024-12-05 12:14:10.451149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.388 qpair failed and we were unable to recover it. 
00:30:36.388 [2024-12-05 12:14:10.451263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.388 [2024-12-05 12:14:10.451293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.388 qpair failed and we were unable to recover it. 00:30:36.388 [2024-12-05 12:14:10.451486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.388 [2024-12-05 12:14:10.451519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.388 qpair failed and we were unable to recover it. 00:30:36.388 [2024-12-05 12:14:10.451716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.388 [2024-12-05 12:14:10.451747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.388 qpair failed and we were unable to recover it. 00:30:36.388 [2024-12-05 12:14:10.451978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.388 [2024-12-05 12:14:10.452011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.388 qpair failed and we were unable to recover it. 00:30:36.388 [2024-12-05 12:14:10.452206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.388 [2024-12-05 12:14:10.452238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.388 qpair failed and we were unable to recover it. 
00:30:36.388 [2024-12-05 12:14:10.452464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.388 [2024-12-05 12:14:10.452498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.388 qpair failed and we were unable to recover it. 00:30:36.388 [2024-12-05 12:14:10.452684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.388 [2024-12-05 12:14:10.452716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.388 qpair failed and we were unable to recover it. 00:30:36.388 [2024-12-05 12:14:10.452977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.388 [2024-12-05 12:14:10.453009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.388 qpair failed and we were unable to recover it. 00:30:36.388 [2024-12-05 12:14:10.453193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.388 [2024-12-05 12:14:10.453226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.388 qpair failed and we were unable to recover it. 00:30:36.388 [2024-12-05 12:14:10.453479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.388 [2024-12-05 12:14:10.453514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.388 qpair failed and we were unable to recover it. 
00:30:36.388 [2024-12-05 12:14:10.453737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.388 [2024-12-05 12:14:10.453768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.388 qpair failed and we were unable to recover it. 00:30:36.388 [2024-12-05 12:14:10.453911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.388 [2024-12-05 12:14:10.453943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.388 qpair failed and we were unable to recover it. 00:30:36.388 [2024-12-05 12:14:10.454253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.388 [2024-12-05 12:14:10.454286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.388 qpair failed and we were unable to recover it. 00:30:36.388 [2024-12-05 12:14:10.454468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.388 [2024-12-05 12:14:10.454500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.388 qpair failed and we were unable to recover it. 00:30:36.388 [2024-12-05 12:14:10.454621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.388 [2024-12-05 12:14:10.454654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.388 qpair failed and we were unable to recover it. 
00:30:36.388 [2024-12-05 12:14:10.454805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.388 [2024-12-05 12:14:10.454837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.388 qpair failed and we were unable to recover it. 00:30:36.388 [2024-12-05 12:14:10.454976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.388 [2024-12-05 12:14:10.455008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.388 qpair failed and we were unable to recover it. 00:30:36.388 [2024-12-05 12:14:10.455286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.388 [2024-12-05 12:14:10.455319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.388 qpair failed and we were unable to recover it. 00:30:36.388 [2024-12-05 12:14:10.455597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.388 [2024-12-05 12:14:10.455630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.388 qpair failed and we were unable to recover it. 00:30:36.388 [2024-12-05 12:14:10.455840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.388 [2024-12-05 12:14:10.455872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.388 qpair failed and we were unable to recover it. 
00:30:36.388 [2024-12-05 12:14:10.456061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.388 [2024-12-05 12:14:10.456094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.388 qpair failed and we were unable to recover it. 00:30:36.388 [2024-12-05 12:14:10.456284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.388 [2024-12-05 12:14:10.456316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.388 qpair failed and we were unable to recover it. 00:30:36.388 [2024-12-05 12:14:10.456519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.388 [2024-12-05 12:14:10.456553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.388 qpair failed and we were unable to recover it. 00:30:36.388 [2024-12-05 12:14:10.456700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.388 [2024-12-05 12:14:10.456733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.388 qpair failed and we were unable to recover it. 00:30:36.388 [2024-12-05 12:14:10.456916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.388 [2024-12-05 12:14:10.456948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.388 qpair failed and we were unable to recover it. 
00:30:36.388 [2024-12-05 12:14:10.457154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.388 [2024-12-05 12:14:10.457193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.388 qpair failed and we were unable to recover it. 00:30:36.388 [2024-12-05 12:14:10.457391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.388 [2024-12-05 12:14:10.457426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.388 qpair failed and we were unable to recover it. 00:30:36.388 [2024-12-05 12:14:10.457540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.388 [2024-12-05 12:14:10.457572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.388 qpair failed and we were unable to recover it. 00:30:36.388 [2024-12-05 12:14:10.457772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.388 [2024-12-05 12:14:10.457805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.388 qpair failed and we were unable to recover it. 00:30:36.388 [2024-12-05 12:14:10.457998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.388 [2024-12-05 12:14:10.458031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.388 qpair failed and we were unable to recover it. 
00:30:36.388 [2024-12-05 12:14:10.458166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.388 [2024-12-05 12:14:10.458196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.388 qpair failed and we were unable to recover it. 00:30:36.388 [2024-12-05 12:14:10.458394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.388 [2024-12-05 12:14:10.458430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.388 qpair failed and we were unable to recover it. 00:30:36.388 [2024-12-05 12:14:10.458574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.388 [2024-12-05 12:14:10.458606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.388 qpair failed and we were unable to recover it. 00:30:36.388 [2024-12-05 12:14:10.458813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.388 [2024-12-05 12:14:10.458846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.388 qpair failed and we were unable to recover it. 00:30:36.388 [2024-12-05 12:14:10.458960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.388 [2024-12-05 12:14:10.458993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.388 qpair failed and we were unable to recover it. 
00:30:36.389 [2024-12-05 12:14:10.463790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.389 [2024-12-05 12:14:10.463823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.389 qpair failed and we were unable to recover it.
00:30:36.389 [2024-12-05 12:14:10.464079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.389 [2024-12-05 12:14:10.464112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.389 qpair failed and we were unable to recover it.
00:30:36.389 [2024-12-05 12:14:10.464366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.389 [2024-12-05 12:14:10.464439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.389 qpair failed and we were unable to recover it.
00:30:36.389 [2024-12-05 12:14:10.464557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.389 [2024-12-05 12:14:10.464587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.389 qpair failed and we were unable to recover it.
00:30:36.389 [2024-12-05 12:14:10.464710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.389 [2024-12-05 12:14:10.464741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.389 qpair failed and we were unable to recover it.
00:30:36.389 [2024-12-05 12:14:10.464772] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization...
00:30:36.389 [2024-12-05 12:14:10.464830] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:36.389 [2024-12-05 12:14:10.464880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.389 [2024-12-05 12:14:10.464916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.389 qpair failed and we were unable to recover it.
00:30:36.389 [2024-12-05 12:14:10.465117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.389 [2024-12-05 12:14:10.465148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.389 qpair failed and we were unable to recover it.
00:30:36.389 [2024-12-05 12:14:10.465414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.389 [2024-12-05 12:14:10.465446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.389 qpair failed and we were unable to recover it.
00:30:36.389 [2024-12-05 12:14:10.465659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.389 [2024-12-05 12:14:10.465689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.389 qpair failed and we were unable to recover it.
00:30:36.389 [2024-12-05 12:14:10.465949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.389 [2024-12-05 12:14:10.466027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.389 qpair failed and we were unable to recover it.
00:30:36.389 [2024-12-05 12:14:10.466291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.389 [2024-12-05 12:14:10.466365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.389 qpair failed and we were unable to recover it.
00:30:36.389 [2024-12-05 12:14:10.466661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.389 [2024-12-05 12:14:10.466697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.389 qpair failed and we were unable to recover it.
00:30:36.389 [2024-12-05 12:14:10.466931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.389 [2024-12-05 12:14:10.466968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.389 qpair failed and we were unable to recover it.
00:30:36.389 [2024-12-05 12:14:10.467167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.389 [2024-12-05 12:14:10.467201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.389 qpair failed and we were unable to recover it.
00:30:36.390 [2024-12-05 12:14:10.474818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.390 [2024-12-05 12:14:10.474896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.390 qpair failed and we were unable to recover it. 00:30:36.390 [2024-12-05 12:14:10.475056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.390 [2024-12-05 12:14:10.475093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.390 qpair failed and we were unable to recover it. 00:30:36.390 [2024-12-05 12:14:10.475298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.390 [2024-12-05 12:14:10.475333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.390 qpair failed and we were unable to recover it. 00:30:36.390 [2024-12-05 12:14:10.475543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.390 [2024-12-05 12:14:10.475582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.390 qpair failed and we were unable to recover it. 00:30:36.390 [2024-12-05 12:14:10.475813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.390 [2024-12-05 12:14:10.475845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.390 qpair failed and we were unable to recover it. 
00:30:36.390 [2024-12-05 12:14:10.475954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.390 [2024-12-05 12:14:10.475985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.390 qpair failed and we were unable to recover it. 00:30:36.390 [2024-12-05 12:14:10.476102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.391 [2024-12-05 12:14:10.476133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.391 qpair failed and we were unable to recover it. 00:30:36.391 [2024-12-05 12:14:10.476315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.391 [2024-12-05 12:14:10.476348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.391 qpair failed and we were unable to recover it. 00:30:36.391 [2024-12-05 12:14:10.476561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.391 [2024-12-05 12:14:10.476594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.391 qpair failed and we were unable to recover it. 00:30:36.391 [2024-12-05 12:14:10.476798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.391 [2024-12-05 12:14:10.476830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.391 qpair failed and we were unable to recover it. 
00:30:36.391 [2024-12-05 12:14:10.476953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.391 [2024-12-05 12:14:10.476987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.391 qpair failed and we were unable to recover it. 00:30:36.391 [2024-12-05 12:14:10.477249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.391 [2024-12-05 12:14:10.477286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.391 qpair failed and we were unable to recover it. 00:30:36.391 [2024-12-05 12:14:10.477475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.391 [2024-12-05 12:14:10.477509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.391 qpair failed and we were unable to recover it. 00:30:36.391 [2024-12-05 12:14:10.477635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.391 [2024-12-05 12:14:10.477668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.391 qpair failed and we were unable to recover it. 00:30:36.391 [2024-12-05 12:14:10.477812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.391 [2024-12-05 12:14:10.477847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.391 qpair failed and we were unable to recover it. 
00:30:36.391 [2024-12-05 12:14:10.478041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.391 [2024-12-05 12:14:10.478074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.391 qpair failed and we were unable to recover it. 00:30:36.391 [2024-12-05 12:14:10.478190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.391 [2024-12-05 12:14:10.478222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.391 qpair failed and we were unable to recover it. 00:30:36.391 [2024-12-05 12:14:10.478445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.391 [2024-12-05 12:14:10.478478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.391 qpair failed and we were unable to recover it. 00:30:36.391 [2024-12-05 12:14:10.478610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.391 [2024-12-05 12:14:10.478641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.391 qpair failed and we were unable to recover it. 00:30:36.391 [2024-12-05 12:14:10.478770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.391 [2024-12-05 12:14:10.478805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.391 qpair failed and we were unable to recover it. 
00:30:36.391 [2024-12-05 12:14:10.478931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.391 [2024-12-05 12:14:10.478964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.391 qpair failed and we were unable to recover it. 00:30:36.391 [2024-12-05 12:14:10.479143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.391 [2024-12-05 12:14:10.479174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.391 qpair failed and we were unable to recover it. 00:30:36.391 [2024-12-05 12:14:10.479294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.391 [2024-12-05 12:14:10.479328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.391 qpair failed and we were unable to recover it. 00:30:36.391 [2024-12-05 12:14:10.479520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.391 [2024-12-05 12:14:10.479554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.391 qpair failed and we were unable to recover it. 00:30:36.391 [2024-12-05 12:14:10.479743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.391 [2024-12-05 12:14:10.479776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.391 qpair failed and we were unable to recover it. 
00:30:36.393 [2024-12-05 12:14:10.494016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.393 [2024-12-05 12:14:10.494048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.393 qpair failed and we were unable to recover it. 00:30:36.393 [2024-12-05 12:14:10.494160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.393 [2024-12-05 12:14:10.494191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.393 qpair failed and we were unable to recover it. 00:30:36.393 [2024-12-05 12:14:10.494301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.393 [2024-12-05 12:14:10.494334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.393 qpair failed and we were unable to recover it. 00:30:36.393 [2024-12-05 12:14:10.494510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.393 [2024-12-05 12:14:10.494586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.393 qpair failed and we were unable to recover it. 00:30:36.393 [2024-12-05 12:14:10.494871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.393 [2024-12-05 12:14:10.494908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.393 qpair failed and we were unable to recover it. 
00:30:36.393 [2024-12-05 12:14:10.496134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.393 [2024-12-05 12:14:10.496166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.393 qpair failed and we were unable to recover it. 00:30:36.393 [2024-12-05 12:14:10.496298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.393 [2024-12-05 12:14:10.496330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.393 qpair failed and we were unable to recover it. 00:30:36.393 [2024-12-05 12:14:10.496538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.393 [2024-12-05 12:14:10.496573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.393 qpair failed and we were unable to recover it. 00:30:36.393 [2024-12-05 12:14:10.496689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.393 [2024-12-05 12:14:10.496721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.393 qpair failed and we were unable to recover it. 00:30:36.393 [2024-12-05 12:14:10.496901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.393 [2024-12-05 12:14:10.496934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.393 qpair failed and we were unable to recover it. 
00:30:36.394 [2024-12-05 12:14:10.501963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.394 [2024-12-05 12:14:10.501995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.394 qpair failed and we were unable to recover it. 00:30:36.394 [2024-12-05 12:14:10.502117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.394 [2024-12-05 12:14:10.502151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.394 qpair failed and we were unable to recover it. 00:30:36.394 [2024-12-05 12:14:10.502335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.394 [2024-12-05 12:14:10.502375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.394 qpair failed and we were unable to recover it. 00:30:36.394 [2024-12-05 12:14:10.502624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.394 [2024-12-05 12:14:10.502656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.394 qpair failed and we were unable to recover it. 00:30:36.394 [2024-12-05 12:14:10.502834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.394 [2024-12-05 12:14:10.502866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.394 qpair failed and we were unable to recover it. 
00:30:36.394 [2024-12-05 12:14:10.503129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.394 [2024-12-05 12:14:10.503161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.394 qpair failed and we were unable to recover it. 00:30:36.394 [2024-12-05 12:14:10.503360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.394 [2024-12-05 12:14:10.503404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.394 qpair failed and we were unable to recover it. 00:30:36.394 [2024-12-05 12:14:10.503658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.394 [2024-12-05 12:14:10.503691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.394 qpair failed and we were unable to recover it. 00:30:36.394 [2024-12-05 12:14:10.503874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.394 [2024-12-05 12:14:10.503906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.394 qpair failed and we were unable to recover it. 00:30:36.394 [2024-12-05 12:14:10.504095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.394 [2024-12-05 12:14:10.504127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.394 qpair failed and we were unable to recover it. 
00:30:36.394 [2024-12-05 12:14:10.504234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.394 [2024-12-05 12:14:10.504265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.394 qpair failed and we were unable to recover it. 00:30:36.394 [2024-12-05 12:14:10.504440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.394 [2024-12-05 12:14:10.504473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.394 qpair failed and we were unable to recover it. 00:30:36.394 [2024-12-05 12:14:10.504682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.394 [2024-12-05 12:14:10.504715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.394 qpair failed and we were unable to recover it. 00:30:36.394 [2024-12-05 12:14:10.504913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.394 [2024-12-05 12:14:10.504944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.394 qpair failed and we were unable to recover it. 00:30:36.394 [2024-12-05 12:14:10.505049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.394 [2024-12-05 12:14:10.505078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.394 qpair failed and we were unable to recover it. 
00:30:36.394 [2024-12-05 12:14:10.505251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.394 [2024-12-05 12:14:10.505282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.394 qpair failed and we were unable to recover it. 00:30:36.394 [2024-12-05 12:14:10.505467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.394 [2024-12-05 12:14:10.505499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.394 qpair failed and we were unable to recover it. 00:30:36.394 [2024-12-05 12:14:10.505621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.394 [2024-12-05 12:14:10.505652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.394 qpair failed and we were unable to recover it. 00:30:36.394 [2024-12-05 12:14:10.505789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.394 [2024-12-05 12:14:10.505819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.394 qpair failed and we were unable to recover it. 00:30:36.394 [2024-12-05 12:14:10.506008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.394 [2024-12-05 12:14:10.506039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.394 qpair failed and we were unable to recover it. 
00:30:36.394 [2024-12-05 12:14:10.506163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.395 [2024-12-05 12:14:10.506193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.395 qpair failed and we were unable to recover it. 00:30:36.395 [2024-12-05 12:14:10.506412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.395 [2024-12-05 12:14:10.506445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.395 qpair failed and we were unable to recover it. 00:30:36.395 [2024-12-05 12:14:10.506550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.395 [2024-12-05 12:14:10.506582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.395 qpair failed and we were unable to recover it. 00:30:36.395 [2024-12-05 12:14:10.506776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.395 [2024-12-05 12:14:10.506809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.395 qpair failed and we were unable to recover it. 00:30:36.395 [2024-12-05 12:14:10.506939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.395 [2024-12-05 12:14:10.506971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.395 qpair failed and we were unable to recover it. 
00:30:36.395 [2024-12-05 12:14:10.507151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.395 [2024-12-05 12:14:10.507190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.395 qpair failed and we were unable to recover it. 00:30:36.395 [2024-12-05 12:14:10.507381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.395 [2024-12-05 12:14:10.507414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.395 qpair failed and we were unable to recover it. 00:30:36.395 [2024-12-05 12:14:10.507534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.395 [2024-12-05 12:14:10.507564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.395 qpair failed and we were unable to recover it. 00:30:36.395 [2024-12-05 12:14:10.507742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.395 [2024-12-05 12:14:10.507773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.395 qpair failed and we were unable to recover it. 00:30:36.395 [2024-12-05 12:14:10.507895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.395 [2024-12-05 12:14:10.507923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.395 qpair failed and we were unable to recover it. 
00:30:36.395 [2024-12-05 12:14:10.508044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.395 [2024-12-05 12:14:10.508074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.395 qpair failed and we were unable to recover it. 00:30:36.395 [2024-12-05 12:14:10.508206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.395 [2024-12-05 12:14:10.508235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.395 qpair failed and we were unable to recover it. 00:30:36.395 [2024-12-05 12:14:10.508440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.395 [2024-12-05 12:14:10.508473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.395 qpair failed and we were unable to recover it. 00:30:36.395 [2024-12-05 12:14:10.508581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.395 [2024-12-05 12:14:10.508612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.395 qpair failed and we were unable to recover it. 00:30:36.395 [2024-12-05 12:14:10.508801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.395 [2024-12-05 12:14:10.508832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.395 qpair failed and we were unable to recover it. 
00:30:36.395 [2024-12-05 12:14:10.508940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.395 [2024-12-05 12:14:10.508968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.395 qpair failed and we were unable to recover it. 00:30:36.395 [2024-12-05 12:14:10.509151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.395 [2024-12-05 12:14:10.509182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.395 qpair failed and we were unable to recover it. 00:30:36.395 [2024-12-05 12:14:10.509296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.395 [2024-12-05 12:14:10.509328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.395 qpair failed and we were unable to recover it. 00:30:36.395 [2024-12-05 12:14:10.509456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.395 [2024-12-05 12:14:10.509492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.395 qpair failed and we were unable to recover it. 00:30:36.395 [2024-12-05 12:14:10.509614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.395 [2024-12-05 12:14:10.509647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.395 qpair failed and we were unable to recover it. 
00:30:36.395 [2024-12-05 12:14:10.509840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.395 [2024-12-05 12:14:10.509872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.395 qpair failed and we were unable to recover it. 00:30:36.395 [2024-12-05 12:14:10.510062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.395 [2024-12-05 12:14:10.510093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.395 qpair failed and we were unable to recover it. 00:30:36.395 [2024-12-05 12:14:10.510219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.395 [2024-12-05 12:14:10.510250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.395 qpair failed and we were unable to recover it. 00:30:36.395 [2024-12-05 12:14:10.510441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.395 [2024-12-05 12:14:10.510475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.395 qpair failed and we were unable to recover it. 00:30:36.395 [2024-12-05 12:14:10.510676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.395 [2024-12-05 12:14:10.510707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.395 qpair failed and we were unable to recover it. 
00:30:36.395 [2024-12-05 12:14:10.510880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.395 [2024-12-05 12:14:10.510912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.395 qpair failed and we were unable to recover it. 00:30:36.395 [2024-12-05 12:14:10.511093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.395 [2024-12-05 12:14:10.511124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.395 qpair failed and we were unable to recover it. 00:30:36.395 [2024-12-05 12:14:10.511244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.395 [2024-12-05 12:14:10.511275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.395 qpair failed and we were unable to recover it. 00:30:36.395 [2024-12-05 12:14:10.511390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.395 [2024-12-05 12:14:10.511423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.395 qpair failed and we were unable to recover it. 00:30:36.395 [2024-12-05 12:14:10.511553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.395 [2024-12-05 12:14:10.511585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.395 qpair failed and we were unable to recover it. 
00:30:36.395 [2024-12-05 12:14:10.511705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.395 [2024-12-05 12:14:10.511738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.395 qpair failed and we were unable to recover it. 00:30:36.395 [2024-12-05 12:14:10.511925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.395 [2024-12-05 12:14:10.511957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.395 qpair failed and we were unable to recover it. 00:30:36.395 [2024-12-05 12:14:10.512076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.395 [2024-12-05 12:14:10.512107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.395 qpair failed and we were unable to recover it. 00:30:36.395 [2024-12-05 12:14:10.512354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.395 [2024-12-05 12:14:10.512398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.395 qpair failed and we were unable to recover it. 00:30:36.395 [2024-12-05 12:14:10.512580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.395 [2024-12-05 12:14:10.512611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.395 qpair failed and we were unable to recover it. 
00:30:36.395 [2024-12-05 12:14:10.512794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.395 [2024-12-05 12:14:10.512827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.395 qpair failed and we were unable to recover it. 00:30:36.395 [2024-12-05 12:14:10.512961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.395 [2024-12-05 12:14:10.512992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.395 qpair failed and we were unable to recover it. 00:30:36.395 [2024-12-05 12:14:10.513177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.395 [2024-12-05 12:14:10.513211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.395 qpair failed and we were unable to recover it. 00:30:36.395 [2024-12-05 12:14:10.513399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.395 [2024-12-05 12:14:10.513431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.396 qpair failed and we were unable to recover it. 00:30:36.396 [2024-12-05 12:14:10.513641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.396 [2024-12-05 12:14:10.513672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.396 qpair failed and we were unable to recover it. 
00:30:36.396 [2024-12-05 12:14:10.513799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.396 [2024-12-05 12:14:10.513830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.396 qpair failed and we were unable to recover it. 00:30:36.396 [2024-12-05 12:14:10.513941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.396 [2024-12-05 12:14:10.513971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.396 qpair failed and we were unable to recover it. 00:30:36.396 [2024-12-05 12:14:10.514089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.396 [2024-12-05 12:14:10.514120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.396 qpair failed and we were unable to recover it. 00:30:36.396 [2024-12-05 12:14:10.514231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.396 [2024-12-05 12:14:10.514264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.396 qpair failed and we were unable to recover it. 00:30:36.396 [2024-12-05 12:14:10.514441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.396 [2024-12-05 12:14:10.514473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.396 qpair failed and we were unable to recover it. 
00:30:36.396 [2024-12-05 12:14:10.514580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.396 [2024-12-05 12:14:10.514618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.396 qpair failed and we were unable to recover it. 00:30:36.396 [2024-12-05 12:14:10.514803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.396 [2024-12-05 12:14:10.514835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.396 qpair failed and we were unable to recover it. 00:30:36.396 [2024-12-05 12:14:10.514942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.396 [2024-12-05 12:14:10.514974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.396 qpair failed and we were unable to recover it. 00:30:36.396 [2024-12-05 12:14:10.515091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.396 [2024-12-05 12:14:10.515123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.396 qpair failed and we were unable to recover it. 00:30:36.396 [2024-12-05 12:14:10.515310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.396 [2024-12-05 12:14:10.515342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.396 qpair failed and we were unable to recover it. 
00:30:36.396 [2024-12-05 12:14:10.515474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.396 [2024-12-05 12:14:10.515506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.396 qpair failed and we were unable to recover it. 00:30:36.396 [2024-12-05 12:14:10.515640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.396 [2024-12-05 12:14:10.515671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.396 qpair failed and we were unable to recover it. 00:30:36.396 [2024-12-05 12:14:10.515917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.396 [2024-12-05 12:14:10.515950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.396 qpair failed and we were unable to recover it. 00:30:36.396 [2024-12-05 12:14:10.516055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.396 [2024-12-05 12:14:10.516086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.396 qpair failed and we were unable to recover it. 00:30:36.396 [2024-12-05 12:14:10.516267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.396 [2024-12-05 12:14:10.516299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.396 qpair failed and we were unable to recover it. 
00:30:36.396 [2024-12-05 12:14:10.516415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.396 [2024-12-05 12:14:10.516447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.396 qpair failed and we were unable to recover it.
[… the three lines above repeat for every subsequent connection attempt between 12:14:10.516 and 12:14:10.541, always with errno = 111 against addr=10.0.0.2, port=4420; the failing tqpair handle cycles through 0x7fc424000b90, 0xc4cbe0, and 0x7fc420000b90 …]
00:30:36.680 [2024-12-05 12:14:10.541541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.680 [2024-12-05 12:14:10.541574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.680 qpair failed and we were unable to recover it.
00:30:36.680 [2024-12-05 12:14:10.541744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.680 [2024-12-05 12:14:10.541775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.680 qpair failed and we were unable to recover it. 00:30:36.680 [2024-12-05 12:14:10.541982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.680 [2024-12-05 12:14:10.542014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.680 qpair failed and we were unable to recover it. 00:30:36.680 [2024-12-05 12:14:10.542210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.680 [2024-12-05 12:14:10.542242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.680 qpair failed and we were unable to recover it. 00:30:36.680 [2024-12-05 12:14:10.542450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.680 [2024-12-05 12:14:10.542482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.680 qpair failed and we were unable to recover it. 00:30:36.680 [2024-12-05 12:14:10.542728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.680 [2024-12-05 12:14:10.542759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.680 qpair failed and we were unable to recover it. 
00:30:36.680 [2024-12-05 12:14:10.542934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.680 [2024-12-05 12:14:10.542965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.680 qpair failed and we were unable to recover it. 00:30:36.680 [2024-12-05 12:14:10.543219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.680 [2024-12-05 12:14:10.543251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.680 qpair failed and we were unable to recover it. 00:30:36.680 [2024-12-05 12:14:10.543495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.680 [2024-12-05 12:14:10.543528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.680 qpair failed and we were unable to recover it. 00:30:36.680 [2024-12-05 12:14:10.543828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.680 [2024-12-05 12:14:10.543859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.680 qpair failed and we were unable to recover it. 00:30:36.680 [2024-12-05 12:14:10.543984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.680 [2024-12-05 12:14:10.544022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.680 qpair failed and we were unable to recover it. 
00:30:36.680 [2024-12-05 12:14:10.544213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.680 [2024-12-05 12:14:10.544245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.680 qpair failed and we were unable to recover it. 00:30:36.680 [2024-12-05 12:14:10.544383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.680 [2024-12-05 12:14:10.544417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.680 qpair failed and we were unable to recover it. 00:30:36.680 [2024-12-05 12:14:10.544529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.680 [2024-12-05 12:14:10.544560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.680 qpair failed and we were unable to recover it. 00:30:36.680 [2024-12-05 12:14:10.544737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.680 [2024-12-05 12:14:10.544769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.680 qpair failed and we were unable to recover it. 00:30:36.680 [2024-12-05 12:14:10.544954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.681 [2024-12-05 12:14:10.544985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.681 qpair failed and we were unable to recover it. 
00:30:36.681 [2024-12-05 12:14:10.545184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.681 [2024-12-05 12:14:10.545216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.681 qpair failed and we were unable to recover it. 00:30:36.681 [2024-12-05 12:14:10.545418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.681 [2024-12-05 12:14:10.545453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.681 qpair failed and we were unable to recover it. 00:30:36.681 [2024-12-05 12:14:10.545696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.681 [2024-12-05 12:14:10.545728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.681 qpair failed and we were unable to recover it. 00:30:36.681 [2024-12-05 12:14:10.545917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.681 [2024-12-05 12:14:10.545950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.681 qpair failed and we were unable to recover it. 00:30:36.681 [2024-12-05 12:14:10.546057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.681 [2024-12-05 12:14:10.546088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.681 qpair failed and we were unable to recover it. 
00:30:36.681 [2024-12-05 12:14:10.546220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.681 [2024-12-05 12:14:10.546252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.681 qpair failed and we were unable to recover it. 00:30:36.681 [2024-12-05 12:14:10.546463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.681 [2024-12-05 12:14:10.546496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.681 qpair failed and we were unable to recover it. 00:30:36.681 [2024-12-05 12:14:10.546679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.681 [2024-12-05 12:14:10.546711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.681 qpair failed and we were unable to recover it. 00:30:36.681 [2024-12-05 12:14:10.546856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.681 [2024-12-05 12:14:10.546888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.681 qpair failed and we were unable to recover it. 00:30:36.681 [2024-12-05 12:14:10.547024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.681 [2024-12-05 12:14:10.547056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.681 qpair failed and we were unable to recover it. 
00:30:36.681 [2024-12-05 12:14:10.547177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.681 [2024-12-05 12:14:10.547208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.681 qpair failed and we were unable to recover it. 00:30:36.681 [2024-12-05 12:14:10.547389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.681 [2024-12-05 12:14:10.547422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.681 qpair failed and we were unable to recover it. 00:30:36.681 [2024-12-05 12:14:10.547605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.681 [2024-12-05 12:14:10.547636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.681 qpair failed and we were unable to recover it. 00:30:36.681 [2024-12-05 12:14:10.547818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.681 [2024-12-05 12:14:10.547851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.681 qpair failed and we were unable to recover it. 00:30:36.681 [2024-12-05 12:14:10.548100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.681 [2024-12-05 12:14:10.548131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.681 qpair failed and we were unable to recover it. 
00:30:36.681 [2024-12-05 12:14:10.548402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.681 [2024-12-05 12:14:10.548435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.681 qpair failed and we were unable to recover it. 00:30:36.681 [2024-12-05 12:14:10.548671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.681 [2024-12-05 12:14:10.548703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.681 qpair failed and we were unable to recover it. 00:30:36.681 [2024-12-05 12:14:10.548876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.681 [2024-12-05 12:14:10.548913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.681 qpair failed and we were unable to recover it. 00:30:36.681 [2024-12-05 12:14:10.549090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.681 [2024-12-05 12:14:10.549122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.681 qpair failed and we were unable to recover it. 00:30:36.681 [2024-12-05 12:14:10.549363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.681 [2024-12-05 12:14:10.549411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.681 qpair failed and we were unable to recover it. 
00:30:36.681 [2024-12-05 12:14:10.549661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.681 [2024-12-05 12:14:10.549693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.681 qpair failed and we were unable to recover it. 00:30:36.681 [2024-12-05 12:14:10.549865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.681 [2024-12-05 12:14:10.549897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.681 qpair failed and we were unable to recover it. 00:30:36.681 [2024-12-05 12:14:10.550094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.681 [2024-12-05 12:14:10.550126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.681 qpair failed and we were unable to recover it. 00:30:36.681 [2024-12-05 12:14:10.550300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.681 [2024-12-05 12:14:10.550331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.681 qpair failed and we were unable to recover it. 00:30:36.681 [2024-12-05 12:14:10.550627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.681 [2024-12-05 12:14:10.550660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.681 qpair failed and we were unable to recover it. 
00:30:36.681 [2024-12-05 12:14:10.550855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.681 [2024-12-05 12:14:10.550886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.681 qpair failed and we were unable to recover it. 00:30:36.681 [2024-12-05 12:14:10.551061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.681 [2024-12-05 12:14:10.551092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.681 qpair failed and we were unable to recover it. 00:30:36.681 [2024-12-05 12:14:10.551290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.681 [2024-12-05 12:14:10.551321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.681 qpair failed and we were unable to recover it. 00:30:36.681 [2024-12-05 12:14:10.551379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:36.681 [2024-12-05 12:14:10.551519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.681 [2024-12-05 12:14:10.551554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.681 qpair failed and we were unable to recover it. 00:30:36.681 [2024-12-05 12:14:10.551734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.681 [2024-12-05 12:14:10.551765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.681 qpair failed and we were unable to recover it. 
00:30:36.681 [2024-12-05 12:14:10.551879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.681 [2024-12-05 12:14:10.551911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.681 qpair failed and we were unable to recover it. 00:30:36.681 [2024-12-05 12:14:10.552114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.681 [2024-12-05 12:14:10.552146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.681 qpair failed and we were unable to recover it. 00:30:36.681 [2024-12-05 12:14:10.552409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.681 [2024-12-05 12:14:10.552441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.681 qpair failed and we were unable to recover it. 00:30:36.681 [2024-12-05 12:14:10.552626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.681 [2024-12-05 12:14:10.552658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.681 qpair failed and we were unable to recover it. 00:30:36.681 [2024-12-05 12:14:10.552780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.681 [2024-12-05 12:14:10.552812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.681 qpair failed and we were unable to recover it. 
00:30:36.681 [2024-12-05 12:14:10.552986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.681 [2024-12-05 12:14:10.553019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.681 qpair failed and we were unable to recover it. 00:30:36.681 [2024-12-05 12:14:10.553258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.681 [2024-12-05 12:14:10.553296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.682 qpair failed and we were unable to recover it. 00:30:36.682 [2024-12-05 12:14:10.553492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.682 [2024-12-05 12:14:10.553525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.682 qpair failed and we were unable to recover it. 00:30:36.682 [2024-12-05 12:14:10.553659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.682 [2024-12-05 12:14:10.553690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.682 qpair failed and we were unable to recover it. 00:30:36.682 [2024-12-05 12:14:10.553959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.682 [2024-12-05 12:14:10.553993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.682 qpair failed and we were unable to recover it. 
00:30:36.682 [2024-12-05 12:14:10.554165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.682 [2024-12-05 12:14:10.554197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.682 qpair failed and we were unable to recover it. 00:30:36.682 [2024-12-05 12:14:10.554385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.682 [2024-12-05 12:14:10.554418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.682 qpair failed and we were unable to recover it. 00:30:36.682 [2024-12-05 12:14:10.554524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.682 [2024-12-05 12:14:10.554556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.682 qpair failed and we were unable to recover it. 00:30:36.682 [2024-12-05 12:14:10.554693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.682 [2024-12-05 12:14:10.554725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.682 qpair failed and we were unable to recover it. 00:30:36.682 [2024-12-05 12:14:10.554979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.682 [2024-12-05 12:14:10.555010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.682 qpair failed and we were unable to recover it. 
00:30:36.682 [2024-12-05 12:14:10.555185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.682 [2024-12-05 12:14:10.555218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.682 qpair failed and we were unable to recover it. 00:30:36.682 [2024-12-05 12:14:10.555387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.682 [2024-12-05 12:14:10.555422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.682 qpair failed and we were unable to recover it. 00:30:36.682 [2024-12-05 12:14:10.555555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.682 [2024-12-05 12:14:10.555588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.682 qpair failed and we were unable to recover it. 00:30:36.682 [2024-12-05 12:14:10.555840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.682 [2024-12-05 12:14:10.555877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.682 qpair failed and we were unable to recover it. 00:30:36.682 [2024-12-05 12:14:10.556062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.682 [2024-12-05 12:14:10.556095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.682 qpair failed and we were unable to recover it. 
00:30:36.682 [2024-12-05 12:14:10.556295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.682 [2024-12-05 12:14:10.556326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.682 qpair failed and we were unable to recover it. 00:30:36.682 [2024-12-05 12:14:10.556465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.682 [2024-12-05 12:14:10.556498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.682 qpair failed and we were unable to recover it. 00:30:36.682 [2024-12-05 12:14:10.556639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.682 [2024-12-05 12:14:10.556670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.682 qpair failed and we were unable to recover it. 00:30:36.682 [2024-12-05 12:14:10.556848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.682 [2024-12-05 12:14:10.556880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.682 qpair failed and we were unable to recover it. 00:30:36.682 [2024-12-05 12:14:10.557101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.682 [2024-12-05 12:14:10.557133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.682 qpair failed and we were unable to recover it. 
00:30:36.682 [2024-12-05 12:14:10.557254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.682 [2024-12-05 12:14:10.557286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.682 qpair failed and we were unable to recover it. 00:30:36.682 [2024-12-05 12:14:10.557457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.682 [2024-12-05 12:14:10.557491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.682 qpair failed and we were unable to recover it. 00:30:36.682 [2024-12-05 12:14:10.557616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.682 [2024-12-05 12:14:10.557648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.682 qpair failed and we were unable to recover it. 00:30:36.682 [2024-12-05 12:14:10.557821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.682 [2024-12-05 12:14:10.557854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.682 qpair failed and we were unable to recover it. 00:30:36.682 [2024-12-05 12:14:10.558032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.682 [2024-12-05 12:14:10.558065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.682 qpair failed and we were unable to recover it. 
00:30:36.682 [2024-12-05 12:14:10.558260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.682 [2024-12-05 12:14:10.558293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.682 qpair failed and we were unable to recover it.
00:30:36.682 [2024-12-05 12:14:10.558500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.682 [2024-12-05 12:14:10.558534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.682 qpair failed and we were unable to recover it.
00:30:36.682 [2024-12-05 12:14:10.558807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.682 [2024-12-05 12:14:10.558847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.682 qpair failed and we were unable to recover it.
00:30:36.682 [2024-12-05 12:14:10.559121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.682 [2024-12-05 12:14:10.559154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.682 qpair failed and we were unable to recover it.
00:30:36.682 [2024-12-05 12:14:10.559337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.682 [2024-12-05 12:14:10.559380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.682 qpair failed and we were unable to recover it.
00:30:36.682 [2024-12-05 12:14:10.559530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.682 [2024-12-05 12:14:10.559562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.682 qpair failed and we were unable to recover it.
00:30:36.682 [2024-12-05 12:14:10.559823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.682 [2024-12-05 12:14:10.559856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.682 qpair failed and we were unable to recover it.
00:30:36.682 [2024-12-05 12:14:10.560047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.682 [2024-12-05 12:14:10.560079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.682 qpair failed and we were unable to recover it.
00:30:36.682 [2024-12-05 12:14:10.560257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.682 [2024-12-05 12:14:10.560289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.682 qpair failed and we were unable to recover it.
00:30:36.682 [2024-12-05 12:14:10.560416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.682 [2024-12-05 12:14:10.560451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.682 qpair failed and we were unable to recover it.
00:30:36.682 [2024-12-05 12:14:10.560627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.682 [2024-12-05 12:14:10.560660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.682 qpair failed and we were unable to recover it.
00:30:36.682 [2024-12-05 12:14:10.560846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.682 [2024-12-05 12:14:10.560879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.682 qpair failed and we were unable to recover it.
00:30:36.682 [2024-12-05 12:14:10.561099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.682 [2024-12-05 12:14:10.561132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.682 qpair failed and we were unable to recover it.
00:30:36.682 [2024-12-05 12:14:10.561380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.682 [2024-12-05 12:14:10.561416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.682 qpair failed and we were unable to recover it.
00:30:36.682 [2024-12-05 12:14:10.561656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.682 [2024-12-05 12:14:10.561689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.682 qpair failed and we were unable to recover it.
00:30:36.683 [2024-12-05 12:14:10.561820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.683 [2024-12-05 12:14:10.561859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.683 qpair failed and we were unable to recover it.
00:30:36.683 [2024-12-05 12:14:10.562105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.683 [2024-12-05 12:14:10.562138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.683 qpair failed and we were unable to recover it.
00:30:36.683 [2024-12-05 12:14:10.562261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.683 [2024-12-05 12:14:10.562294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.683 qpair failed and we were unable to recover it.
00:30:36.683 [2024-12-05 12:14:10.562485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.683 [2024-12-05 12:14:10.562518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.683 qpair failed and we were unable to recover it.
00:30:36.683 [2024-12-05 12:14:10.562695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.683 [2024-12-05 12:14:10.562728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.683 qpair failed and we were unable to recover it.
00:30:36.683 [2024-12-05 12:14:10.562937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.683 [2024-12-05 12:14:10.562973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.683 qpair failed and we were unable to recover it.
00:30:36.683 [2024-12-05 12:14:10.563101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.683 [2024-12-05 12:14:10.563135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.683 qpair failed and we were unable to recover it.
00:30:36.683 [2024-12-05 12:14:10.563403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.683 [2024-12-05 12:14:10.563437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.683 qpair failed and we were unable to recover it.
00:30:36.683 [2024-12-05 12:14:10.563705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.683 [2024-12-05 12:14:10.563737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.683 qpair failed and we were unable to recover it.
00:30:36.683 [2024-12-05 12:14:10.563854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.683 [2024-12-05 12:14:10.563885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.683 qpair failed and we were unable to recover it.
00:30:36.683 [2024-12-05 12:14:10.564075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.683 [2024-12-05 12:14:10.564108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.683 qpair failed and we were unable to recover it.
00:30:36.683 [2024-12-05 12:14:10.564235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.683 [2024-12-05 12:14:10.564267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.683 qpair failed and we were unable to recover it.
00:30:36.683 [2024-12-05 12:14:10.564392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.683 [2024-12-05 12:14:10.564426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.683 qpair failed and we were unable to recover it.
00:30:36.683 [2024-12-05 12:14:10.564547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.683 [2024-12-05 12:14:10.564578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.683 qpair failed and we were unable to recover it.
00:30:36.683 [2024-12-05 12:14:10.564706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.683 [2024-12-05 12:14:10.564737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.683 qpair failed and we were unable to recover it.
00:30:36.683 [2024-12-05 12:14:10.564853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.683 [2024-12-05 12:14:10.564885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.683 qpair failed and we were unable to recover it.
00:30:36.683 [2024-12-05 12:14:10.565079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.683 [2024-12-05 12:14:10.565110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.683 qpair failed and we were unable to recover it.
00:30:36.683 [2024-12-05 12:14:10.565351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.683 [2024-12-05 12:14:10.565396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.683 qpair failed and we were unable to recover it.
00:30:36.683 [2024-12-05 12:14:10.565536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.683 [2024-12-05 12:14:10.565567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.683 qpair failed and we were unable to recover it.
00:30:36.683 [2024-12-05 12:14:10.565764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.683 [2024-12-05 12:14:10.565796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.683 qpair failed and we were unable to recover it.
00:30:36.683 [2024-12-05 12:14:10.565969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.683 [2024-12-05 12:14:10.566001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.683 qpair failed and we were unable to recover it.
00:30:36.683 [2024-12-05 12:14:10.566235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.683 [2024-12-05 12:14:10.566268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.683 qpair failed and we were unable to recover it.
00:30:36.683 [2024-12-05 12:14:10.566405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.683 [2024-12-05 12:14:10.566439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.683 qpair failed and we were unable to recover it.
00:30:36.683 [2024-12-05 12:14:10.566620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.683 [2024-12-05 12:14:10.566651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.683 qpair failed and we were unable to recover it.
00:30:36.683 [2024-12-05 12:14:10.566761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.683 [2024-12-05 12:14:10.566793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.683 qpair failed and we were unable to recover it.
00:30:36.683 [2024-12-05 12:14:10.566970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.683 [2024-12-05 12:14:10.567002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.683 qpair failed and we were unable to recover it.
00:30:36.683 [2024-12-05 12:14:10.567127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.683 [2024-12-05 12:14:10.567158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.683 qpair failed and we were unable to recover it.
00:30:36.683 [2024-12-05 12:14:10.567440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.683 [2024-12-05 12:14:10.567484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.683 qpair failed and we were unable to recover it.
00:30:36.683 [2024-12-05 12:14:10.567622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.683 [2024-12-05 12:14:10.567655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.683 qpair failed and we were unable to recover it.
00:30:36.683 [2024-12-05 12:14:10.567924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.683 [2024-12-05 12:14:10.567958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.683 qpair failed and we were unable to recover it.
00:30:36.683 [2024-12-05 12:14:10.568139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.683 [2024-12-05 12:14:10.568172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.683 qpair failed and we were unable to recover it.
00:30:36.683 [2024-12-05 12:14:10.568375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.683 [2024-12-05 12:14:10.568409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.683 qpair failed and we were unable to recover it.
00:30:36.683 [2024-12-05 12:14:10.568654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.683 [2024-12-05 12:14:10.568687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.683 qpair failed and we were unable to recover it.
00:30:36.683 [2024-12-05 12:14:10.568878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.683 [2024-12-05 12:14:10.568910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.683 qpair failed and we were unable to recover it.
00:30:36.683 [2024-12-05 12:14:10.569110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.683 [2024-12-05 12:14:10.569141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.683 qpair failed and we were unable to recover it.
00:30:36.683 [2024-12-05 12:14:10.569311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.683 [2024-12-05 12:14:10.569343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.683 qpair failed and we were unable to recover it.
00:30:36.683 [2024-12-05 12:14:10.569458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.683 [2024-12-05 12:14:10.569490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.683 qpair failed and we were unable to recover it.
00:30:36.683 [2024-12-05 12:14:10.569611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.683 [2024-12-05 12:14:10.569643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.683 qpair failed and we were unable to recover it.
00:30:36.683 [2024-12-05 12:14:10.569929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.684 [2024-12-05 12:14:10.569961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.684 qpair failed and we were unable to recover it.
00:30:36.684 [2024-12-05 12:14:10.570151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.684 [2024-12-05 12:14:10.570184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.684 qpair failed and we were unable to recover it.
00:30:36.684 [2024-12-05 12:14:10.570366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.684 [2024-12-05 12:14:10.570414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.684 qpair failed and we were unable to recover it.
00:30:36.684 [2024-12-05 12:14:10.570683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.684 [2024-12-05 12:14:10.570716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.684 qpair failed and we were unable to recover it.
00:30:36.684 [2024-12-05 12:14:10.570838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.684 [2024-12-05 12:14:10.570869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.684 qpair failed and we were unable to recover it.
00:30:36.684 [2024-12-05 12:14:10.570986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.684 [2024-12-05 12:14:10.571018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.684 qpair failed and we were unable to recover it.
00:30:36.684 [2024-12-05 12:14:10.571255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.684 [2024-12-05 12:14:10.571287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.684 qpair failed and we were unable to recover it.
00:30:36.684 [2024-12-05 12:14:10.571398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.684 [2024-12-05 12:14:10.571430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.684 qpair failed and we were unable to recover it.
00:30:36.684 [2024-12-05 12:14:10.571621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.684 [2024-12-05 12:14:10.571652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.684 qpair failed and we were unable to recover it.
00:30:36.684 [2024-12-05 12:14:10.571835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.684 [2024-12-05 12:14:10.571867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.684 qpair failed and we were unable to recover it.
00:30:36.684 [2024-12-05 12:14:10.572066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.684 [2024-12-05 12:14:10.572099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.684 qpair failed and we were unable to recover it.
00:30:36.684 [2024-12-05 12:14:10.572205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.684 [2024-12-05 12:14:10.572236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.684 qpair failed and we were unable to recover it.
00:30:36.684 [2024-12-05 12:14:10.572384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.684 [2024-12-05 12:14:10.572417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.684 qpair failed and we were unable to recover it.
00:30:36.684 [2024-12-05 12:14:10.572527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.684 [2024-12-05 12:14:10.572559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.684 qpair failed and we were unable to recover it.
00:30:36.684 [2024-12-05 12:14:10.572760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.684 [2024-12-05 12:14:10.572791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.684 qpair failed and we were unable to recover it.
00:30:36.684 [2024-12-05 12:14:10.573052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.684 [2024-12-05 12:14:10.573084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.684 qpair failed and we were unable to recover it.
00:30:36.684 [2024-12-05 12:14:10.573263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.684 [2024-12-05 12:14:10.573295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.684 qpair failed and we were unable to recover it.
00:30:36.684 [2024-12-05 12:14:10.573536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.684 [2024-12-05 12:14:10.573568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.684 qpair failed and we were unable to recover it.
00:30:36.684 [2024-12-05 12:14:10.573779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.684 [2024-12-05 12:14:10.573811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.684 qpair failed and we were unable to recover it.
00:30:36.684 [2024-12-05 12:14:10.573998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.684 [2024-12-05 12:14:10.574030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.684 qpair failed and we were unable to recover it.
00:30:36.684 [2024-12-05 12:14:10.574163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.684 [2024-12-05 12:14:10.574194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.684 qpair failed and we were unable to recover it.
00:30:36.684 [2024-12-05 12:14:10.574320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.684 [2024-12-05 12:14:10.574351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.684 qpair failed and we were unable to recover it.
00:30:36.684 [2024-12-05 12:14:10.574548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.684 [2024-12-05 12:14:10.574581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.684 qpair failed and we were unable to recover it.
00:30:36.684 [2024-12-05 12:14:10.574693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.684 [2024-12-05 12:14:10.574724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.684 qpair failed and we were unable to recover it.
00:30:36.684 [2024-12-05 12:14:10.574849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.684 [2024-12-05 12:14:10.574880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.684 qpair failed and we were unable to recover it.
00:30:36.684 [2024-12-05 12:14:10.575075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.684 [2024-12-05 12:14:10.575107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.684 qpair failed and we were unable to recover it.
00:30:36.684 [2024-12-05 12:14:10.575295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.684 [2024-12-05 12:14:10.575327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.684 qpair failed and we were unable to recover it.
00:30:36.684 [2024-12-05 12:14:10.575468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.684 [2024-12-05 12:14:10.575503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.684 qpair failed and we were unable to recover it.
00:30:36.684 [2024-12-05 12:14:10.575613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.684 [2024-12-05 12:14:10.575644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420
00:30:36.684 qpair failed and we were unable to recover it.
00:30:36.684 [2024-12-05 12:14:10.575824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.684 [2024-12-05 12:14:10.575895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.684 qpair failed and we were unable to recover it.
00:30:36.684 [2024-12-05 12:14:10.576098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.684 [2024-12-05 12:14:10.576140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.684 qpair failed and we were unable to recover it.
00:30:36.684 [2024-12-05 12:14:10.576308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.684 [2024-12-05 12:14:10.576344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.684 qpair failed and we were unable to recover it.
00:30:36.684 [2024-12-05 12:14:10.576618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.685 [2024-12-05 12:14:10.576651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.685 qpair failed and we were unable to recover it.
00:30:36.685 [2024-12-05 12:14:10.576800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.685 [2024-12-05 12:14:10.576830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.685 qpair failed and we were unable to recover it.
00:30:36.685 [2024-12-05 12:14:10.577023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.685 [2024-12-05 12:14:10.577055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.685 qpair failed and we were unable to recover it.
00:30:36.685 [2024-12-05 12:14:10.577178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.685 [2024-12-05 12:14:10.577210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.685 qpair failed and we were unable to recover it.
00:30:36.685 [2024-12-05 12:14:10.577400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.685 [2024-12-05 12:14:10.577434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.685 qpair failed and we were unable to recover it.
00:30:36.685 [2024-12-05 12:14:10.577621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.685 [2024-12-05 12:14:10.577653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.685 qpair failed and we were unable to recover it.
00:30:36.685 [2024-12-05 12:14:10.577838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.685 [2024-12-05 12:14:10.577869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.685 qpair failed and we were unable to recover it.
00:30:36.685 [2024-12-05 12:14:10.578122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.685 [2024-12-05 12:14:10.578154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.685 qpair failed and we were unable to recover it.
00:30:36.685 [2024-12-05 12:14:10.578326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.685 [2024-12-05 12:14:10.578357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.685 qpair failed and we were unable to recover it.
00:30:36.685 [2024-12-05 12:14:10.578546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.685 [2024-12-05 12:14:10.578579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.685 qpair failed and we were unable to recover it.
00:30:36.685 [2024-12-05 12:14:10.578683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.685 [2024-12-05 12:14:10.578721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.685 qpair failed and we were unable to recover it.
00:30:36.685 [2024-12-05 12:14:10.578897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.685 [2024-12-05 12:14:10.578928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.685 qpair failed and we were unable to recover it.
00:30:36.685 [2024-12-05 12:14:10.579193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.685 [2024-12-05 12:14:10.579225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.685 qpair failed and we were unable to recover it.
00:30:36.685 [2024-12-05 12:14:10.579417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.685 [2024-12-05 12:14:10.579450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.685 qpair failed and we were unable to recover it.
00:30:36.685 [2024-12-05 12:14:10.579749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.685 [2024-12-05 12:14:10.579782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.685 qpair failed and we were unable to recover it.
00:30:36.685 [2024-12-05 12:14:10.579901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.685 [2024-12-05 12:14:10.579932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.685 qpair failed and we were unable to recover it.
00:30:36.685 [2024-12-05 12:14:10.580154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.685 [2024-12-05 12:14:10.580186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.685 qpair failed and we were unable to recover it.
00:30:36.685 [2024-12-05 12:14:10.580428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.685 [2024-12-05 12:14:10.580460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.685 qpair failed and we were unable to recover it.
00:30:36.685 [2024-12-05 12:14:10.580657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.685 [2024-12-05 12:14:10.580689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.685 qpair failed and we were unable to recover it.
00:30:36.685 [2024-12-05 12:14:10.580874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.685 [2024-12-05 12:14:10.580906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.685 qpair failed and we were unable to recover it.
00:30:36.685 [2024-12-05 12:14:10.581111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.685 [2024-12-05 12:14:10.581142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.685 qpair failed and we were unable to recover it.
00:30:36.685 [2024-12-05 12:14:10.581324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.685 [2024-12-05 12:14:10.581361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.685 qpair failed and we were unable to recover it.
00:30:36.685 [2024-12-05 12:14:10.581601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.685 [2024-12-05 12:14:10.581632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.685 qpair failed and we were unable to recover it.
00:30:36.685 [2024-12-05 12:14:10.581765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.685 [2024-12-05 12:14:10.581796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.685 qpair failed and we were unable to recover it.
00:30:36.685 [2024-12-05 12:14:10.581915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.685 [2024-12-05 12:14:10.581949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.685 qpair failed and we were unable to recover it.
00:30:36.685 [2024-12-05 12:14:10.582057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.685 [2024-12-05 12:14:10.582088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.685 qpair failed and we were unable to recover it.
00:30:36.685 [2024-12-05 12:14:10.582193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.685 [2024-12-05 12:14:10.582225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.685 qpair failed and we were unable to recover it.
00:30:36.685 [2024-12-05 12:14:10.582480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.685 [2024-12-05 12:14:10.582512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.685 qpair failed and we were unable to recover it.
00:30:36.685 [2024-12-05 12:14:10.582686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.685 [2024-12-05 12:14:10.582720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.685 qpair failed and we were unable to recover it.
00:30:36.685 [2024-12-05 12:14:10.582833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.685 [2024-12-05 12:14:10.582865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.685 qpair failed and we were unable to recover it.
00:30:36.685 [2024-12-05 12:14:10.582989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.685 [2024-12-05 12:14:10.583021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.685 qpair failed and we were unable to recover it. 00:30:36.685 [2024-12-05 12:14:10.583271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.685 [2024-12-05 12:14:10.583303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.685 qpair failed and we were unable to recover it. 00:30:36.685 [2024-12-05 12:14:10.583505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.685 [2024-12-05 12:14:10.583538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.685 qpair failed and we were unable to recover it. 00:30:36.685 [2024-12-05 12:14:10.583733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.685 [2024-12-05 12:14:10.583765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.685 qpair failed and we were unable to recover it. 00:30:36.685 [2024-12-05 12:14:10.583879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.685 [2024-12-05 12:14:10.583911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.685 qpair failed and we were unable to recover it. 
00:30:36.685 [2024-12-05 12:14:10.584154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.685 [2024-12-05 12:14:10.584186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.685 qpair failed and we were unable to recover it. 00:30:36.685 [2024-12-05 12:14:10.584293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.685 [2024-12-05 12:14:10.584324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.685 qpair failed and we were unable to recover it. 00:30:36.685 [2024-12-05 12:14:10.584536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.685 [2024-12-05 12:14:10.584579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.685 qpair failed and we were unable to recover it. 00:30:36.685 [2024-12-05 12:14:10.584873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.686 [2024-12-05 12:14:10.584906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.686 qpair failed and we were unable to recover it. 00:30:36.686 [2024-12-05 12:14:10.585037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.686 [2024-12-05 12:14:10.585070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.686 qpair failed and we were unable to recover it. 
00:30:36.686 [2024-12-05 12:14:10.585295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.686 [2024-12-05 12:14:10.585328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.686 qpair failed and we were unable to recover it. 00:30:36.686 [2024-12-05 12:14:10.585468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.686 [2024-12-05 12:14:10.585502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.686 qpair failed and we were unable to recover it. 00:30:36.686 [2024-12-05 12:14:10.585799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.686 [2024-12-05 12:14:10.585831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.686 qpair failed and we were unable to recover it. 00:30:36.686 [2024-12-05 12:14:10.586026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.686 [2024-12-05 12:14:10.586058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.686 qpair failed and we were unable to recover it. 00:30:36.686 [2024-12-05 12:14:10.586193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.686 [2024-12-05 12:14:10.586225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.686 qpair failed and we were unable to recover it. 
00:30:36.686 [2024-12-05 12:14:10.586515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.686 [2024-12-05 12:14:10.586548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.686 qpair failed and we were unable to recover it. 00:30:36.686 [2024-12-05 12:14:10.586657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.686 [2024-12-05 12:14:10.586689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.686 qpair failed and we were unable to recover it. 00:30:36.686 [2024-12-05 12:14:10.586828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.686 [2024-12-05 12:14:10.586860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.686 qpair failed and we were unable to recover it. 00:30:36.686 [2024-12-05 12:14:10.587127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.686 [2024-12-05 12:14:10.587159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.686 qpair failed and we were unable to recover it. 00:30:36.686 [2024-12-05 12:14:10.587345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.686 [2024-12-05 12:14:10.587388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.686 qpair failed and we were unable to recover it. 
00:30:36.686 [2024-12-05 12:14:10.587506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.686 [2024-12-05 12:14:10.587554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.686 qpair failed and we were unable to recover it. 00:30:36.686 [2024-12-05 12:14:10.587755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.686 [2024-12-05 12:14:10.587787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.686 qpair failed and we were unable to recover it. 00:30:36.686 [2024-12-05 12:14:10.587896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.686 [2024-12-05 12:14:10.587939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.686 qpair failed and we were unable to recover it. 00:30:36.686 [2024-12-05 12:14:10.588045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.686 [2024-12-05 12:14:10.588076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.686 qpair failed and we were unable to recover it. 00:30:36.686 [2024-12-05 12:14:10.588199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.686 [2024-12-05 12:14:10.588231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.686 qpair failed and we were unable to recover it. 
00:30:36.686 [2024-12-05 12:14:10.588338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.686 [2024-12-05 12:14:10.588382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.686 qpair failed and we were unable to recover it. 00:30:36.686 [2024-12-05 12:14:10.588581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.686 [2024-12-05 12:14:10.588612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.686 qpair failed and we were unable to recover it. 00:30:36.686 [2024-12-05 12:14:10.588838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.686 [2024-12-05 12:14:10.588872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.686 qpair failed and we were unable to recover it. 00:30:36.686 [2024-12-05 12:14:10.589060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.686 [2024-12-05 12:14:10.589091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.686 qpair failed and we were unable to recover it. 00:30:36.686 [2024-12-05 12:14:10.589279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.686 [2024-12-05 12:14:10.589312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.686 qpair failed and we were unable to recover it. 
00:30:36.686 [2024-12-05 12:14:10.589519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.686 [2024-12-05 12:14:10.589553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.686 qpair failed and we were unable to recover it. 00:30:36.686 [2024-12-05 12:14:10.589680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.686 [2024-12-05 12:14:10.589712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.686 qpair failed and we were unable to recover it. 00:30:36.686 [2024-12-05 12:14:10.589907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.686 [2024-12-05 12:14:10.589939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.686 qpair failed and we were unable to recover it. 00:30:36.686 [2024-12-05 12:14:10.590110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.686 [2024-12-05 12:14:10.590151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.686 qpair failed and we were unable to recover it. 00:30:36.686 [2024-12-05 12:14:10.590350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.686 [2024-12-05 12:14:10.590392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.686 qpair failed and we were unable to recover it. 
00:30:36.686 [2024-12-05 12:14:10.590532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.686 [2024-12-05 12:14:10.590565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.686 qpair failed and we were unable to recover it. 00:30:36.686 [2024-12-05 12:14:10.590754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.686 [2024-12-05 12:14:10.590789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.686 qpair failed and we were unable to recover it. 00:30:36.686 [2024-12-05 12:14:10.591026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.686 [2024-12-05 12:14:10.591059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.686 qpair failed and we were unable to recover it. 00:30:36.686 [2024-12-05 12:14:10.591249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.686 [2024-12-05 12:14:10.591282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.686 qpair failed and we were unable to recover it. 00:30:36.686 [2024-12-05 12:14:10.591397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.686 [2024-12-05 12:14:10.591432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.686 qpair failed and we were unable to recover it. 
00:30:36.686 [2024-12-05 12:14:10.591555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.686 [2024-12-05 12:14:10.591588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.686 qpair failed and we were unable to recover it. 00:30:36.686 [2024-12-05 12:14:10.591848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.686 [2024-12-05 12:14:10.591881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.686 qpair failed and we were unable to recover it. 00:30:36.686 [2024-12-05 12:14:10.592018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.686 [2024-12-05 12:14:10.592051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.686 qpair failed and we were unable to recover it. 00:30:36.686 [2024-12-05 12:14:10.592234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.686 [2024-12-05 12:14:10.592267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.686 qpair failed and we were unable to recover it. 00:30:36.686 [2024-12-05 12:14:10.592460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.686 [2024-12-05 12:14:10.592493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.686 qpair failed and we were unable to recover it. 
00:30:36.687 [2024-12-05 12:14:10.593000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.687 [2024-12-05 12:14:10.593044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.687 qpair failed and we were unable to recover it. 00:30:36.687 [2024-12-05 12:14:10.593149] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:36.687 [2024-12-05 12:14:10.593173] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:36.687 [2024-12-05 12:14:10.593182] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:36.687 [2024-12-05 12:14:10.593188] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:36.687 [2024-12-05 12:14:10.593194] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:36.687 [2024-12-05 12:14:10.593254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.687 [2024-12-05 12:14:10.593289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.687 qpair failed and we were unable to recover it.
00:30:36.687 [2024-12-05 12:14:10.594740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:36.687 [2024-12-05 12:14:10.594850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:36.687 [2024-12-05 12:14:10.595154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:36.687 [2024-12-05 12:14:10.595154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
[log condensed: connect() failed, errno = 111 repeats for roughly 30 further attempts between 12:14:10.596 and 12:14:10.602, all on tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420; each attempt ends with "qpair failed and we were unable to recover it."]
00:30:36.688 [2024-12-05 12:14:10.602575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.688 [2024-12-05 12:14:10.602606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.688 qpair failed and we were unable to recover it. 00:30:36.688 [2024-12-05 12:14:10.602729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.688 [2024-12-05 12:14:10.602763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.688 qpair failed and we were unable to recover it. 00:30:36.688 [2024-12-05 12:14:10.603018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.688 [2024-12-05 12:14:10.603052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.688 qpair failed and we were unable to recover it. 00:30:36.688 [2024-12-05 12:14:10.603261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.688 [2024-12-05 12:14:10.603294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.688 qpair failed and we were unable to recover it. 00:30:36.688 [2024-12-05 12:14:10.603507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.688 [2024-12-05 12:14:10.603540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.688 qpair failed and we were unable to recover it. 
00:30:36.688 [2024-12-05 12:14:10.603738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.688 [2024-12-05 12:14:10.603782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.688 qpair failed and we were unable to recover it. 00:30:36.688 [2024-12-05 12:14:10.603987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.688 [2024-12-05 12:14:10.604024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.688 qpair failed and we were unable to recover it. 00:30:36.688 [2024-12-05 12:14:10.604211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.688 [2024-12-05 12:14:10.604244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.688 qpair failed and we were unable to recover it. 00:30:36.688 [2024-12-05 12:14:10.604416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.688 [2024-12-05 12:14:10.604450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.688 qpair failed and we were unable to recover it. 00:30:36.688 [2024-12-05 12:14:10.604564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.688 [2024-12-05 12:14:10.604604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.688 qpair failed and we were unable to recover it. 
00:30:36.688 [2024-12-05 12:14:10.604813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.688 [2024-12-05 12:14:10.604845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.688 qpair failed and we were unable to recover it. 00:30:36.688 [2024-12-05 12:14:10.605029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.688 [2024-12-05 12:14:10.605061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.688 qpair failed and we were unable to recover it. 00:30:36.688 [2024-12-05 12:14:10.605194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.688 [2024-12-05 12:14:10.605227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.688 qpair failed and we were unable to recover it. 00:30:36.688 [2024-12-05 12:14:10.605403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.688 [2024-12-05 12:14:10.605438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.688 qpair failed and we were unable to recover it. 00:30:36.688 [2024-12-05 12:14:10.605645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.688 [2024-12-05 12:14:10.605677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.688 qpair failed and we were unable to recover it. 
00:30:36.688 [2024-12-05 12:14:10.605867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.688 [2024-12-05 12:14:10.605899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.688 qpair failed and we were unable to recover it. 00:30:36.688 [2024-12-05 12:14:10.606079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.688 [2024-12-05 12:14:10.606111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.688 qpair failed and we were unable to recover it. 00:30:36.688 [2024-12-05 12:14:10.606314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.688 [2024-12-05 12:14:10.606348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.688 qpair failed and we were unable to recover it. 00:30:36.688 [2024-12-05 12:14:10.606540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.688 [2024-12-05 12:14:10.606580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.688 qpair failed and we were unable to recover it. 00:30:36.688 [2024-12-05 12:14:10.606702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.688 [2024-12-05 12:14:10.606734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.688 qpair failed and we were unable to recover it. 
00:30:36.688 [2024-12-05 12:14:10.606837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.688 [2024-12-05 12:14:10.606870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.688 qpair failed and we were unable to recover it. 00:30:36.688 [2024-12-05 12:14:10.607007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.688 [2024-12-05 12:14:10.607041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.688 qpair failed and we were unable to recover it. 00:30:36.688 [2024-12-05 12:14:10.607236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.688 [2024-12-05 12:14:10.607268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.688 qpair failed and we were unable to recover it. 00:30:36.688 [2024-12-05 12:14:10.607521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.688 [2024-12-05 12:14:10.607554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.688 qpair failed and we were unable to recover it. 00:30:36.688 [2024-12-05 12:14:10.607732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.688 [2024-12-05 12:14:10.607767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.688 qpair failed and we were unable to recover it. 
00:30:36.688 [2024-12-05 12:14:10.607958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.688 [2024-12-05 12:14:10.607990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.688 qpair failed and we were unable to recover it. 00:30:36.688 [2024-12-05 12:14:10.608095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.688 [2024-12-05 12:14:10.608128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.688 qpair failed and we were unable to recover it. 00:30:36.688 [2024-12-05 12:14:10.608310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.688 [2024-12-05 12:14:10.608342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.689 qpair failed and we were unable to recover it. 00:30:36.689 [2024-12-05 12:14:10.608466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.689 [2024-12-05 12:14:10.608501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.689 qpair failed and we were unable to recover it. 00:30:36.689 [2024-12-05 12:14:10.608617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.689 [2024-12-05 12:14:10.608659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.689 qpair failed and we were unable to recover it. 
00:30:36.689 [2024-12-05 12:14:10.608767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.689 [2024-12-05 12:14:10.608802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.689 qpair failed and we were unable to recover it. 00:30:36.689 [2024-12-05 12:14:10.608977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.689 [2024-12-05 12:14:10.609009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.689 qpair failed and we were unable to recover it. 00:30:36.689 [2024-12-05 12:14:10.609203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.689 [2024-12-05 12:14:10.609235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.689 qpair failed and we were unable to recover it. 00:30:36.689 [2024-12-05 12:14:10.609502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.689 [2024-12-05 12:14:10.609535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.689 qpair failed and we were unable to recover it. 00:30:36.689 [2024-12-05 12:14:10.609677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.689 [2024-12-05 12:14:10.609709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.689 qpair failed and we were unable to recover it. 
00:30:36.689 [2024-12-05 12:14:10.609837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.689 [2024-12-05 12:14:10.609870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.689 qpair failed and we were unable to recover it. 00:30:36.689 [2024-12-05 12:14:10.609998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.689 [2024-12-05 12:14:10.610030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.689 qpair failed and we were unable to recover it. 00:30:36.689 [2024-12-05 12:14:10.610206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.689 [2024-12-05 12:14:10.610239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.689 qpair failed and we were unable to recover it. 00:30:36.689 [2024-12-05 12:14:10.610341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.689 [2024-12-05 12:14:10.610380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.689 qpair failed and we were unable to recover it. 00:30:36.689 [2024-12-05 12:14:10.610568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.689 [2024-12-05 12:14:10.610602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.689 qpair failed and we were unable to recover it. 
00:30:36.689 [2024-12-05 12:14:10.610774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.689 [2024-12-05 12:14:10.610805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.689 qpair failed and we were unable to recover it. 00:30:36.689 [2024-12-05 12:14:10.611047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.689 [2024-12-05 12:14:10.611079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.689 qpair failed and we were unable to recover it. 00:30:36.689 [2024-12-05 12:14:10.611202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.689 [2024-12-05 12:14:10.611235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.689 qpair failed and we were unable to recover it. 00:30:36.689 [2024-12-05 12:14:10.611544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.689 [2024-12-05 12:14:10.611580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.689 qpair failed and we were unable to recover it. 00:30:36.689 [2024-12-05 12:14:10.611692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.689 [2024-12-05 12:14:10.611724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.689 qpair failed and we were unable to recover it. 
00:30:36.689 [2024-12-05 12:14:10.611883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.689 [2024-12-05 12:14:10.611930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.689 qpair failed and we were unable to recover it. 00:30:36.689 [2024-12-05 12:14:10.612051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.689 [2024-12-05 12:14:10.612084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.689 qpair failed and we were unable to recover it. 00:30:36.689 [2024-12-05 12:14:10.612265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.689 [2024-12-05 12:14:10.612298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.689 qpair failed and we were unable to recover it. 00:30:36.689 [2024-12-05 12:14:10.612477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.689 [2024-12-05 12:14:10.612511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.689 qpair failed and we were unable to recover it. 00:30:36.689 [2024-12-05 12:14:10.612777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.689 [2024-12-05 12:14:10.612812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.689 qpair failed and we were unable to recover it. 
00:30:36.689 [2024-12-05 12:14:10.612998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.689 [2024-12-05 12:14:10.613030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.689 qpair failed and we were unable to recover it. 00:30:36.689 [2024-12-05 12:14:10.613230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.689 [2024-12-05 12:14:10.613264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.689 qpair failed and we were unable to recover it. 00:30:36.689 [2024-12-05 12:14:10.613399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.689 [2024-12-05 12:14:10.613433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.689 qpair failed and we were unable to recover it. 00:30:36.689 [2024-12-05 12:14:10.613622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.689 [2024-12-05 12:14:10.613656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.689 qpair failed and we were unable to recover it. 00:30:36.689 [2024-12-05 12:14:10.613848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.689 [2024-12-05 12:14:10.613894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.689 qpair failed and we were unable to recover it. 
00:30:36.689 [2024-12-05 12:14:10.614139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.689 [2024-12-05 12:14:10.614173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.689 qpair failed and we were unable to recover it. 00:30:36.689 [2024-12-05 12:14:10.614352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.689 [2024-12-05 12:14:10.614397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.689 qpair failed and we were unable to recover it. 00:30:36.689 [2024-12-05 12:14:10.614573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.689 [2024-12-05 12:14:10.614605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.689 qpair failed and we were unable to recover it. 00:30:36.689 [2024-12-05 12:14:10.614892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.689 [2024-12-05 12:14:10.614927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.689 qpair failed and we were unable to recover it. 00:30:36.689 [2024-12-05 12:14:10.615066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.689 [2024-12-05 12:14:10.615102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.689 qpair failed and we were unable to recover it. 
00:30:36.689 [2024-12-05 12:14:10.615301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.689 [2024-12-05 12:14:10.615334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.689 qpair failed and we were unable to recover it. 00:30:36.689 [2024-12-05 12:14:10.615659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.690 [2024-12-05 12:14:10.615710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.690 qpair failed and we were unable to recover it. 00:30:36.690 [2024-12-05 12:14:10.615892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.690 [2024-12-05 12:14:10.615926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.690 qpair failed and we were unable to recover it. 00:30:36.690 [2024-12-05 12:14:10.616114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.690 [2024-12-05 12:14:10.616146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.690 qpair failed and we were unable to recover it. 00:30:36.690 [2024-12-05 12:14:10.616360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.690 [2024-12-05 12:14:10.616406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.690 qpair failed and we were unable to recover it. 
00:30:36.690 [2024-12-05 12:14:10.616681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.690 [2024-12-05 12:14:10.616714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.690 qpair failed and we were unable to recover it. 00:30:36.690 [2024-12-05 12:14:10.616818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.690 [2024-12-05 12:14:10.616850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.690 qpair failed and we were unable to recover it. 00:30:36.690 [2024-12-05 12:14:10.617037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.690 [2024-12-05 12:14:10.617069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.690 qpair failed and we were unable to recover it. 00:30:36.690 [2024-12-05 12:14:10.617213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.690 [2024-12-05 12:14:10.617246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.690 qpair failed and we were unable to recover it. 00:30:36.690 [2024-12-05 12:14:10.617418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.690 [2024-12-05 12:14:10.617452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.690 qpair failed and we were unable to recover it. 
00:30:36.690 [2024-12-05 12:14:10.617718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.690 [2024-12-05 12:14:10.617750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.690 qpair failed and we were unable to recover it. 00:30:36.690 [2024-12-05 12:14:10.617881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.690 [2024-12-05 12:14:10.617912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.690 qpair failed and we were unable to recover it. 00:30:36.690 [2024-12-05 12:14:10.618165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.690 [2024-12-05 12:14:10.618197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.690 qpair failed and we were unable to recover it. 00:30:36.690 [2024-12-05 12:14:10.618403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.690 [2024-12-05 12:14:10.618436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.690 qpair failed and we were unable to recover it. 00:30:36.690 [2024-12-05 12:14:10.618556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.690 [2024-12-05 12:14:10.618588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.690 qpair failed and we were unable to recover it. 
00:30:36.690 [2024-12-05 12:14:10.618709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.690 [2024-12-05 12:14:10.618741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.690 qpair failed and we were unable to recover it. 00:30:36.690 [2024-12-05 12:14:10.618916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.690 [2024-12-05 12:14:10.618948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.690 qpair failed and we were unable to recover it. 00:30:36.690 [2024-12-05 12:14:10.619136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.690 [2024-12-05 12:14:10.619168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.690 qpair failed and we were unable to recover it. 00:30:36.690 [2024-12-05 12:14:10.619356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.690 [2024-12-05 12:14:10.619401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.690 qpair failed and we were unable to recover it. 00:30:36.690 [2024-12-05 12:14:10.619647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.690 [2024-12-05 12:14:10.619680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.690 qpair failed and we were unable to recover it. 
00:30:36.690 [2024-12-05 12:14:10.619906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.690 [2024-12-05 12:14:10.619938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.690 qpair failed and we were unable to recover it. 00:30:36.690 [2024-12-05 12:14:10.620128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.690 [2024-12-05 12:14:10.620161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.690 qpair failed and we were unable to recover it. 00:30:36.690 [2024-12-05 12:14:10.620277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.690 [2024-12-05 12:14:10.620308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.690 qpair failed and we were unable to recover it. 00:30:36.690 [2024-12-05 12:14:10.620502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.690 [2024-12-05 12:14:10.620536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.690 qpair failed and we were unable to recover it. 00:30:36.690 [2024-12-05 12:14:10.620751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.690 [2024-12-05 12:14:10.620783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.690 qpair failed and we were unable to recover it. 
00:30:36.690 [2024-12-05 12:14:10.621028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.690 [2024-12-05 12:14:10.621067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.690 qpair failed and we were unable to recover it. 00:30:36.690 [2024-12-05 12:14:10.621309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.690 [2024-12-05 12:14:10.621343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.690 qpair failed and we were unable to recover it. 00:30:36.690 [2024-12-05 12:14:10.621543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.690 [2024-12-05 12:14:10.621576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.690 qpair failed and we were unable to recover it. 00:30:36.690 [2024-12-05 12:14:10.621767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.690 [2024-12-05 12:14:10.621800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.690 qpair failed and we were unable to recover it. 00:30:36.690 [2024-12-05 12:14:10.621922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.690 [2024-12-05 12:14:10.621955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.690 qpair failed and we were unable to recover it. 
00:30:36.690 [2024-12-05 12:14:10.622067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.690 [2024-12-05 12:14:10.622099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.690 qpair failed and we were unable to recover it. 00:30:36.690 [2024-12-05 12:14:10.622351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.690 [2024-12-05 12:14:10.622396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.690 qpair failed and we were unable to recover it. 00:30:36.690 [2024-12-05 12:14:10.622533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.690 [2024-12-05 12:14:10.622565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.690 qpair failed and we were unable to recover it. 00:30:36.690 [2024-12-05 12:14:10.622805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.690 [2024-12-05 12:14:10.622837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.690 qpair failed and we were unable to recover it. 00:30:36.690 [2024-12-05 12:14:10.623077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.690 [2024-12-05 12:14:10.623108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.690 qpair failed and we were unable to recover it. 
00:30:36.690 [2024-12-05 12:14:10.623310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.690 [2024-12-05 12:14:10.623341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.690 qpair failed and we were unable to recover it. 00:30:36.690 [2024-12-05 12:14:10.623623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.690 [2024-12-05 12:14:10.623657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.690 qpair failed and we were unable to recover it. 00:30:36.690 [2024-12-05 12:14:10.623871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.690 [2024-12-05 12:14:10.623902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.690 qpair failed and we were unable to recover it. 00:30:36.690 [2024-12-05 12:14:10.624085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.690 [2024-12-05 12:14:10.624118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.690 qpair failed and we were unable to recover it. 00:30:36.690 [2024-12-05 12:14:10.624389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.690 [2024-12-05 12:14:10.624424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.691 qpair failed and we were unable to recover it. 
00:30:36.691 [2024-12-05 12:14:10.624603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.691 [2024-12-05 12:14:10.624635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.691 qpair failed and we were unable to recover it. 00:30:36.691 [2024-12-05 12:14:10.624899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.691 [2024-12-05 12:14:10.624931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.691 qpair failed and we were unable to recover it. 00:30:36.691 [2024-12-05 12:14:10.625217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.691 [2024-12-05 12:14:10.625250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.691 qpair failed and we were unable to recover it. 00:30:36.691 [2024-12-05 12:14:10.625518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.691 [2024-12-05 12:14:10.625550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.691 qpair failed and we were unable to recover it. 00:30:36.691 [2024-12-05 12:14:10.625836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.691 [2024-12-05 12:14:10.625867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.691 qpair failed and we were unable to recover it. 
00:30:36.691 [2024-12-05 12:14:10.626075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.691 [2024-12-05 12:14:10.626108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.691 qpair failed and we were unable to recover it. 00:30:36.691 [2024-12-05 12:14:10.626294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.691 [2024-12-05 12:14:10.626325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.691 qpair failed and we were unable to recover it. 00:30:36.691 [2024-12-05 12:14:10.626518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.691 [2024-12-05 12:14:10.626551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.691 qpair failed and we were unable to recover it. 00:30:36.691 [2024-12-05 12:14:10.626750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.691 [2024-12-05 12:14:10.626781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.691 qpair failed and we were unable to recover it. 00:30:36.691 [2024-12-05 12:14:10.626953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.691 [2024-12-05 12:14:10.626985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.691 qpair failed and we were unable to recover it. 
00:30:36.691 [2024-12-05 12:14:10.627248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.691 [2024-12-05 12:14:10.627280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.691 qpair failed and we were unable to recover it. 00:30:36.691 [2024-12-05 12:14:10.627565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.691 [2024-12-05 12:14:10.627597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.691 qpair failed and we were unable to recover it. 00:30:36.691 [2024-12-05 12:14:10.627866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.691 [2024-12-05 12:14:10.627899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.691 qpair failed and we were unable to recover it. 00:30:36.691 [2024-12-05 12:14:10.628174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.691 [2024-12-05 12:14:10.628208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.691 qpair failed and we were unable to recover it. 00:30:36.691 [2024-12-05 12:14:10.628485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.691 [2024-12-05 12:14:10.628520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.691 qpair failed and we were unable to recover it. 
00:30:36.691 [2024-12-05 12:14:10.628764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.691 [2024-12-05 12:14:10.628797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.691 qpair failed and we were unable to recover it. 00:30:36.691 [2024-12-05 12:14:10.629063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.691 [2024-12-05 12:14:10.629095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.691 qpair failed and we were unable to recover it. 00:30:36.691 [2024-12-05 12:14:10.629281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.691 [2024-12-05 12:14:10.629312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.691 qpair failed and we were unable to recover it. 00:30:36.691 [2024-12-05 12:14:10.629561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.691 [2024-12-05 12:14:10.629595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.691 qpair failed and we were unable to recover it. 00:30:36.691 [2024-12-05 12:14:10.629784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.691 [2024-12-05 12:14:10.629817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.691 qpair failed and we were unable to recover it. 
00:30:36.691 [2024-12-05 12:14:10.630055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.691 [2024-12-05 12:14:10.630087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.691 qpair failed and we were unable to recover it. 00:30:36.691 [2024-12-05 12:14:10.630351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.691 [2024-12-05 12:14:10.630391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.691 qpair failed and we were unable to recover it. 00:30:36.691 [2024-12-05 12:14:10.630680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.691 [2024-12-05 12:14:10.630714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.691 qpair failed and we were unable to recover it. 00:30:36.691 [2024-12-05 12:14:10.630852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.691 [2024-12-05 12:14:10.630884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.691 qpair failed and we were unable to recover it. 00:30:36.691 [2024-12-05 12:14:10.631098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.691 [2024-12-05 12:14:10.631132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.691 qpair failed and we were unable to recover it. 
00:30:36.691 [2024-12-05 12:14:10.631427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.691 [2024-12-05 12:14:10.631466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.691 qpair failed and we were unable to recover it. 00:30:36.691 [2024-12-05 12:14:10.631716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.691 [2024-12-05 12:14:10.631747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.691 qpair failed and we were unable to recover it. 00:30:36.691 [2024-12-05 12:14:10.632037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.691 [2024-12-05 12:14:10.632069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.691 qpair failed and we were unable to recover it. 00:30:36.691 [2024-12-05 12:14:10.632252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.691 [2024-12-05 12:14:10.632284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.691 qpair failed and we were unable to recover it. 00:30:36.691 [2024-12-05 12:14:10.632543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.691 [2024-12-05 12:14:10.632576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.691 qpair failed and we were unable to recover it. 
00:30:36.691 [2024-12-05 12:14:10.632812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.691 [2024-12-05 12:14:10.632844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.691 qpair failed and we were unable to recover it. 00:30:36.691 [2024-12-05 12:14:10.633107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.691 [2024-12-05 12:14:10.633139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.691 qpair failed and we were unable to recover it. 00:30:36.691 [2024-12-05 12:14:10.633385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.691 [2024-12-05 12:14:10.633419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.691 qpair failed and we were unable to recover it. 00:30:36.691 [2024-12-05 12:14:10.633686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.691 [2024-12-05 12:14:10.633717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.691 qpair failed and we were unable to recover it. 00:30:36.691 [2024-12-05 12:14:10.633915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.691 [2024-12-05 12:14:10.633947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.691 qpair failed and we were unable to recover it. 
00:30:36.691 [2024-12-05 12:14:10.634125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.691 [2024-12-05 12:14:10.634156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.691 qpair failed and we were unable to recover it. 00:30:36.691 [2024-12-05 12:14:10.634433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.691 [2024-12-05 12:14:10.634465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.691 qpair failed and we were unable to recover it. 00:30:36.691 [2024-12-05 12:14:10.634704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.691 [2024-12-05 12:14:10.634737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.691 qpair failed and we were unable to recover it. 00:30:36.692 [2024-12-05 12:14:10.634920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.692 [2024-12-05 12:14:10.634952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.692 qpair failed and we were unable to recover it. 00:30:36.692 [2024-12-05 12:14:10.635083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.692 [2024-12-05 12:14:10.635116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.692 qpair failed and we were unable to recover it. 
00:30:36.692 [2024-12-05 12:14:10.635334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.692 [2024-12-05 12:14:10.635366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.692 qpair failed and we were unable to recover it. 00:30:36.692 [2024-12-05 12:14:10.635624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.692 [2024-12-05 12:14:10.635658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.692 qpair failed and we were unable to recover it. 00:30:36.692 [2024-12-05 12:14:10.635861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.692 [2024-12-05 12:14:10.635894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.692 qpair failed and we were unable to recover it. 00:30:36.692 [2024-12-05 12:14:10.636137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.692 [2024-12-05 12:14:10.636176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.692 qpair failed and we were unable to recover it. 00:30:36.692 [2024-12-05 12:14:10.636358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.692 [2024-12-05 12:14:10.636401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.692 qpair failed and we were unable to recover it. 
00:30:36.692 [2024-12-05 12:14:10.636685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.692 [2024-12-05 12:14:10.636719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.692 qpair failed and we were unable to recover it. 00:30:36.692 [2024-12-05 12:14:10.636930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.692 [2024-12-05 12:14:10.636962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.692 qpair failed and we were unable to recover it. 00:30:36.692 [2024-12-05 12:14:10.637206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.692 [2024-12-05 12:14:10.637238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.692 qpair failed and we were unable to recover it. 00:30:36.692 [2024-12-05 12:14:10.637442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.692 [2024-12-05 12:14:10.637475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.692 qpair failed and we were unable to recover it. 00:30:36.692 [2024-12-05 12:14:10.637682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.692 [2024-12-05 12:14:10.637714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.692 qpair failed and we were unable to recover it. 
00:30:36.692 [2024-12-05 12:14:10.637907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.692 [2024-12-05 12:14:10.637939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.692 qpair failed and we were unable to recover it. 00:30:36.692 [2024-12-05 12:14:10.638143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.692 [2024-12-05 12:14:10.638174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.692 qpair failed and we were unable to recover it. 00:30:36.692 [2024-12-05 12:14:10.638394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.692 [2024-12-05 12:14:10.638428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.692 qpair failed and we were unable to recover it. 00:30:36.692 [2024-12-05 12:14:10.638550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.692 [2024-12-05 12:14:10.638581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.692 qpair failed and we were unable to recover it. 00:30:36.692 [2024-12-05 12:14:10.638751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.692 [2024-12-05 12:14:10.638783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.692 qpair failed and we were unable to recover it. 
00:30:36.692 [2024-12-05 12:14:10.638966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.692 [2024-12-05 12:14:10.638997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.692 qpair failed and we were unable to recover it. 00:30:36.692 [2024-12-05 12:14:10.639186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.692 [2024-12-05 12:14:10.639218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.692 qpair failed and we were unable to recover it. 00:30:36.692 [2024-12-05 12:14:10.639481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.692 [2024-12-05 12:14:10.639513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.692 qpair failed and we were unable to recover it. 00:30:36.692 [2024-12-05 12:14:10.639695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.692 [2024-12-05 12:14:10.639728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.692 qpair failed and we were unable to recover it. 00:30:36.692 [2024-12-05 12:14:10.639989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.692 [2024-12-05 12:14:10.640021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.692 qpair failed and we were unable to recover it. 
00:30:36.692 [2024-12-05 12:14:10.640196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.692 [2024-12-05 12:14:10.640228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.692 qpair failed and we were unable to recover it. 00:30:36.692 [2024-12-05 12:14:10.640411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.692 [2024-12-05 12:14:10.640444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.692 qpair failed and we were unable to recover it. 00:30:36.692 [2024-12-05 12:14:10.640585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.692 [2024-12-05 12:14:10.640616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.692 qpair failed and we were unable to recover it. 00:30:36.692 [2024-12-05 12:14:10.640863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.692 [2024-12-05 12:14:10.640894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.692 qpair failed and we were unable to recover it. 00:30:36.692 [2024-12-05 12:14:10.641065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.692 [2024-12-05 12:14:10.641097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.692 qpair failed and we were unable to recover it. 
00:30:36.692 [2024-12-05 12:14:10.641337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.692 [2024-12-05 12:14:10.641385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.692 qpair failed and we were unable to recover it. 00:30:36.692 [2024-12-05 12:14:10.641629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.692 [2024-12-05 12:14:10.641661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.692 qpair failed and we were unable to recover it. 00:30:36.692 [2024-12-05 12:14:10.641912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.692 [2024-12-05 12:14:10.641943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.692 qpair failed and we were unable to recover it. 00:30:36.692 [2024-12-05 12:14:10.642134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.692 [2024-12-05 12:14:10.642165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.692 qpair failed and we were unable to recover it. 00:30:36.692 [2024-12-05 12:14:10.642305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.692 [2024-12-05 12:14:10.642336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.692 qpair failed and we were unable to recover it. 
00:30:36.692 [2024-12-05 12:14:10.642616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.692 [2024-12-05 12:14:10.642674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.692 qpair failed and we were unable to recover it. 00:30:36.692 [2024-12-05 12:14:10.642894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.692 [2024-12-05 12:14:10.642936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.692 qpair failed and we were unable to recover it. 00:30:36.692 [2024-12-05 12:14:10.643142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.692 [2024-12-05 12:14:10.643172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.692 qpair failed and we were unable to recover it. 00:30:36.692 [2024-12-05 12:14:10.643439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.692 [2024-12-05 12:14:10.643472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.692 qpair failed and we were unable to recover it. 00:30:36.692 [2024-12-05 12:14:10.643715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.692 [2024-12-05 12:14:10.643748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.692 qpair failed and we were unable to recover it. 
00:30:36.692 [2024-12-05 12:14:10.643955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.692 [2024-12-05 12:14:10.643985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.692 qpair failed and we were unable to recover it. 00:30:36.692 [2024-12-05 12:14:10.644174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.693 [2024-12-05 12:14:10.644206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.693 qpair failed and we were unable to recover it. 00:30:36.693 [2024-12-05 12:14:10.644444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.693 [2024-12-05 12:14:10.644477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.693 qpair failed and we were unable to recover it. 00:30:36.693 [2024-12-05 12:14:10.644761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.693 [2024-12-05 12:14:10.644792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.693 qpair failed and we were unable to recover it. 00:30:36.693 [2024-12-05 12:14:10.644989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.693 [2024-12-05 12:14:10.645021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.693 qpair failed and we were unable to recover it. 
00:30:36.693 [2024-12-05 12:14:10.645282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.693 [2024-12-05 12:14:10.645313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.693 qpair failed and we were unable to recover it. 00:30:36.693 [2024-12-05 12:14:10.645633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.693 [2024-12-05 12:14:10.645666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.693 qpair failed and we were unable to recover it. 00:30:36.693 [2024-12-05 12:14:10.645974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.693 [2024-12-05 12:14:10.646006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.693 qpair failed and we were unable to recover it. 00:30:36.693 [2024-12-05 12:14:10.646192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.693 [2024-12-05 12:14:10.646223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.693 qpair failed and we were unable to recover it. 00:30:36.693 [2024-12-05 12:14:10.646430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.693 [2024-12-05 12:14:10.646462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.693 qpair failed and we were unable to recover it. 
00:30:36.693 [2024-12-05 12:14:10.646724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.693 [2024-12-05 12:14:10.646756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.693 qpair failed and we were unable to recover it. 00:30:36.693 [2024-12-05 12:14:10.646964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.693 [2024-12-05 12:14:10.646995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.693 qpair failed and we were unable to recover it. 00:30:36.693 [2024-12-05 12:14:10.647243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.693 [2024-12-05 12:14:10.647274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.693 qpair failed and we were unable to recover it. 00:30:36.693 [2024-12-05 12:14:10.647538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.693 [2024-12-05 12:14:10.647571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.693 qpair failed and we were unable to recover it. 00:30:36.693 [2024-12-05 12:14:10.647774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.693 [2024-12-05 12:14:10.647805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.693 qpair failed and we were unable to recover it. 
00:30:36.693 [2024-12-05 12:14:10.648052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.693 [2024-12-05 12:14:10.648084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.693 qpair failed and we were unable to recover it. 00:30:36.693 [2024-12-05 12:14:10.648321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.693 [2024-12-05 12:14:10.648351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.693 qpair failed and we were unable to recover it. 00:30:36.693 [2024-12-05 12:14:10.648614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.693 [2024-12-05 12:14:10.648646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.693 qpair failed and we were unable to recover it. 00:30:36.693 [2024-12-05 12:14:10.648847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.693 [2024-12-05 12:14:10.648878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.693 qpair failed and we were unable to recover it. 00:30:36.693 [2024-12-05 12:14:10.649001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.693 [2024-12-05 12:14:10.649032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.693 qpair failed and we were unable to recover it. 
00:30:36.693 [2024-12-05 12:14:10.649289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.693 [2024-12-05 12:14:10.649321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.693 qpair failed and we were unable to recover it. 00:30:36.693 [2024-12-05 12:14:10.649594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.693 [2024-12-05 12:14:10.649627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.693 qpair failed and we were unable to recover it. 00:30:36.693 [2024-12-05 12:14:10.649840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.693 [2024-12-05 12:14:10.649871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.693 qpair failed and we were unable to recover it. 00:30:36.693 [2024-12-05 12:14:10.650143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.693 [2024-12-05 12:14:10.650174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.693 qpair failed and we were unable to recover it. 00:30:36.693 [2024-12-05 12:14:10.650393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.693 [2024-12-05 12:14:10.650425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.693 qpair failed and we were unable to recover it. 
00:30:36.693 [2024-12-05 12:14:10.650673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.693 [2024-12-05 12:14:10.650704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.693 qpair failed and we were unable to recover it. 00:30:36.693 [2024-12-05 12:14:10.650973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.693 [2024-12-05 12:14:10.651005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.693 qpair failed and we were unable to recover it. 00:30:36.693 [2024-12-05 12:14:10.651264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.693 [2024-12-05 12:14:10.651295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.693 qpair failed and we were unable to recover it. 00:30:36.693 [2024-12-05 12:14:10.651503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.693 [2024-12-05 12:14:10.651535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.693 qpair failed and we were unable to recover it. 00:30:36.693 [2024-12-05 12:14:10.651711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.693 [2024-12-05 12:14:10.651742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.693 qpair failed and we were unable to recover it. 
00:30:36.693 [2024-12-05 12:14:10.651976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.693 [2024-12-05 12:14:10.652016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.693 qpair failed and we were unable to recover it. 00:30:36.693 [2024-12-05 12:14:10.652219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.693 [2024-12-05 12:14:10.652250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.693 qpair failed and we were unable to recover it. 00:30:36.693 [2024-12-05 12:14:10.652501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.693 [2024-12-05 12:14:10.652534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.693 qpair failed and we were unable to recover it. 00:30:36.693 [2024-12-05 12:14:10.652775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.693 [2024-12-05 12:14:10.652808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.693 qpair failed and we were unable to recover it. 00:30:36.693 [2024-12-05 12:14:10.653067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.693 [2024-12-05 12:14:10.653098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.693 qpair failed and we were unable to recover it. 
00:30:36.694 [2024-12-05 12:14:10.653225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.694 [2024-12-05 12:14:10.653256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.694 qpair failed and we were unable to recover it. 00:30:36.694 [2024-12-05 12:14:10.653388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.694 [2024-12-05 12:14:10.653420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.694 qpair failed and we were unable to recover it. 00:30:36.694 [2024-12-05 12:14:10.653632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.694 [2024-12-05 12:14:10.653664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.694 qpair failed and we were unable to recover it. 00:30:36.694 [2024-12-05 12:14:10.653909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.694 [2024-12-05 12:14:10.653940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.694 qpair failed and we were unable to recover it. 00:30:36.694 [2024-12-05 12:14:10.654251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.694 [2024-12-05 12:14:10.654283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.694 qpair failed and we were unable to recover it. 
00:30:36.694 [2024-12-05 12:14:10.654535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.694 [2024-12-05 12:14:10.654568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.694 qpair failed and we were unable to recover it. 00:30:36.694 [2024-12-05 12:14:10.654831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.694 [2024-12-05 12:14:10.654861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.694 qpair failed and we were unable to recover it. 00:30:36.694 [2024-12-05 12:14:10.655041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.694 [2024-12-05 12:14:10.655072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.694 qpair failed and we were unable to recover it. 00:30:36.694 [2024-12-05 12:14:10.655336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.694 [2024-12-05 12:14:10.655376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.694 qpair failed and we were unable to recover it. 00:30:36.694 [2024-12-05 12:14:10.655651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.694 [2024-12-05 12:14:10.655683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.694 qpair failed and we were unable to recover it. 
00:30:36.694 [2024-12-05 12:14:10.655950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.694 [2024-12-05 12:14:10.655981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.694 qpair failed and we were unable to recover it. 00:30:36.694 [2024-12-05 12:14:10.656269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.694 [2024-12-05 12:14:10.656300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.694 qpair failed and we were unable to recover it. 00:30:36.694 [2024-12-05 12:14:10.656493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.694 [2024-12-05 12:14:10.656526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.694 qpair failed and we were unable to recover it. 00:30:36.694 [2024-12-05 12:14:10.656767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.694 [2024-12-05 12:14:10.656797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.694 qpair failed and we were unable to recover it. 00:30:36.694 [2024-12-05 12:14:10.657032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.694 [2024-12-05 12:14:10.657064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.694 qpair failed and we were unable to recover it. 
00:30:36.694 [2024-12-05 12:14:10.657326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.694 [2024-12-05 12:14:10.657358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.694 qpair failed and we were unable to recover it. 00:30:36.694 [2024-12-05 12:14:10.657489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.694 [2024-12-05 12:14:10.657521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.694 qpair failed and we were unable to recover it. 00:30:36.694 [2024-12-05 12:14:10.657759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.694 [2024-12-05 12:14:10.657790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.694 qpair failed and we were unable to recover it. 00:30:36.694 [2024-12-05 12:14:10.658003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.694 [2024-12-05 12:14:10.658034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.694 qpair failed and we were unable to recover it. 00:30:36.694 [2024-12-05 12:14:10.658246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.694 [2024-12-05 12:14:10.658278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.694 qpair failed and we were unable to recover it. 
00:30:36.694 [2024-12-05 12:14:10.658481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.694 [2024-12-05 12:14:10.658513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.694 qpair failed and we were unable to recover it. 00:30:36.694 [2024-12-05 12:14:10.658756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.694 [2024-12-05 12:14:10.658787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.694 qpair failed and we were unable to recover it. 00:30:36.694 [2024-12-05 12:14:10.659015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.694 [2024-12-05 12:14:10.659047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.694 qpair failed and we were unable to recover it. 00:30:36.694 [2024-12-05 12:14:10.659234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.694 [2024-12-05 12:14:10.659264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.694 qpair failed and we were unable to recover it. 00:30:36.694 [2024-12-05 12:14:10.659511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.694 [2024-12-05 12:14:10.659544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.694 qpair failed and we were unable to recover it. 
00:30:36.694 [2024-12-05 12:14:10.659782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.694 [2024-12-05 12:14:10.659814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.694 qpair failed and we were unable to recover it. 00:30:36.694 [2024-12-05 12:14:10.660051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.694 [2024-12-05 12:14:10.660083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.694 qpair failed and we were unable to recover it. 00:30:36.694 [2024-12-05 12:14:10.660393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.694 [2024-12-05 12:14:10.660425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.694 qpair failed and we were unable to recover it. 00:30:36.694 [2024-12-05 12:14:10.660622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.694 [2024-12-05 12:14:10.660658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.694 qpair failed and we were unable to recover it. 00:30:36.694 [2024-12-05 12:14:10.660919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.694 [2024-12-05 12:14:10.660952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.694 qpair failed and we were unable to recover it. 
00:30:36.694 [2024-12-05 12:14:10.661239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.694 [2024-12-05 12:14:10.661272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.694 qpair failed and we were unable to recover it. 00:30:36.694 [2024-12-05 12:14:10.661467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.694 [2024-12-05 12:14:10.661500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.694 qpair failed and we were unable to recover it. 00:30:36.694 [2024-12-05 12:14:10.661760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.694 [2024-12-05 12:14:10.661792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.694 qpair failed and we were unable to recover it. 00:30:36.694 [2024-12-05 12:14:10.661921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.694 [2024-12-05 12:14:10.661953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.694 qpair failed and we were unable to recover it. 00:30:36.694 [2024-12-05 12:14:10.662213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.694 [2024-12-05 12:14:10.662243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.694 qpair failed and we were unable to recover it. 
00:30:36.694 [2024-12-05 12:14:10.662440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.694 [2024-12-05 12:14:10.662484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.694 qpair failed and we were unable to recover it. 00:30:36.694 [2024-12-05 12:14:10.662725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.694 [2024-12-05 12:14:10.662757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.694 qpair failed and we were unable to recover it. 00:30:36.694 [2024-12-05 12:14:10.662931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.694 [2024-12-05 12:14:10.662963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.694 qpair failed and we were unable to recover it. 00:30:36.694 [2024-12-05 12:14:10.663162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.695 [2024-12-05 12:14:10.663195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.695 qpair failed and we were unable to recover it. 00:30:36.695 [2024-12-05 12:14:10.663376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.695 [2024-12-05 12:14:10.663409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.695 qpair failed and we were unable to recover it. 
00:30:36.695 [2024-12-05 12:14:10.663649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.695 [2024-12-05 12:14:10.663699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.695 qpair failed and we were unable to recover it. 00:30:36.695 [2024-12-05 12:14:10.663970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.695 [2024-12-05 12:14:10.664001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.695 qpair failed and we were unable to recover it. 00:30:36.695 [2024-12-05 12:14:10.664133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.695 [2024-12-05 12:14:10.664166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.695 qpair failed and we were unable to recover it. 00:30:36.695 [2024-12-05 12:14:10.664295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.695 [2024-12-05 12:14:10.664327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.695 qpair failed and we were unable to recover it. 00:30:36.695 [2024-12-05 12:14:10.664578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.695 [2024-12-05 12:14:10.664613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.695 qpair failed and we were unable to recover it. 
00:30:36.695 [2024-12-05 12:14:10.664833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.695 [2024-12-05 12:14:10.664865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.695 qpair failed and we were unable to recover it. 00:30:36.695 [2024-12-05 12:14:10.665114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.695 [2024-12-05 12:14:10.665146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.695 qpair failed and we were unable to recover it. 00:30:36.695 [2024-12-05 12:14:10.665409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.695 [2024-12-05 12:14:10.665445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.695 qpair failed and we were unable to recover it. 00:30:36.695 [2024-12-05 12:14:10.665633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.695 [2024-12-05 12:14:10.665665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.695 qpair failed and we were unable to recover it. 00:30:36.695 [2024-12-05 12:14:10.665846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.695 [2024-12-05 12:14:10.665881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.695 qpair failed and we were unable to recover it. 
00:30:36.695 [2024-12-05 12:14:10.666133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.695 [2024-12-05 12:14:10.666167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.695 qpair failed and we were unable to recover it. 00:30:36.695 [2024-12-05 12:14:10.666304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.695 [2024-12-05 12:14:10.666335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.695 qpair failed and we were unable to recover it. 00:30:36.695 [2024-12-05 12:14:10.666543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.695 [2024-12-05 12:14:10.666579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.695 qpair failed and we were unable to recover it. 00:30:36.695 [2024-12-05 12:14:10.666786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.695 [2024-12-05 12:14:10.666817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.695 qpair failed and we were unable to recover it. 00:30:36.695 [2024-12-05 12:14:10.667077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.695 [2024-12-05 12:14:10.667108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.695 qpair failed and we were unable to recover it. 
00:30:36.695 [2024-12-05 12:14:10.667356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.695 [2024-12-05 12:14:10.667397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.695 qpair failed and we were unable to recover it. 00:30:36.695 [2024-12-05 12:14:10.667579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.695 [2024-12-05 12:14:10.667610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.695 qpair failed and we were unable to recover it. 00:30:36.695 [2024-12-05 12:14:10.667858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.695 [2024-12-05 12:14:10.667889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.695 qpair failed and we were unable to recover it. 00:30:36.695 [2024-12-05 12:14:10.668061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.695 [2024-12-05 12:14:10.668092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.695 qpair failed and we were unable to recover it. 00:30:36.695 [2024-12-05 12:14:10.668392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.695 [2024-12-05 12:14:10.668426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.695 qpair failed and we were unable to recover it. 
00:30:36.696 [2024-12-05 12:14:10.678227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.696 [2024-12-05 12:14:10.678260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.696 qpair failed and we were unable to recover it.
00:30:36.696 [2024-12-05 12:14:10.678439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.696 [2024-12-05 12:14:10.678473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.696 qpair failed and we were unable to recover it.
00:30:36.696 [2024-12-05 12:14:10.678761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.696 [2024-12-05 12:14:10.678804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.696 qpair failed and we were unable to recover it.
00:30:36.696 [2024-12-05 12:14:10.679058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.696 [2024-12-05 12:14:10.679092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.696 qpair failed and we were unable to recover it.
00:30:36.696 [2024-12-05 12:14:10.679295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.696 [2024-12-05 12:14:10.679328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.696 qpair failed and we were unable to recover it.
00:30:36.698 [2024-12-05 12:14:10.693768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.698 [2024-12-05 12:14:10.693809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.698 qpair failed and we were unable to recover it. 00:30:36.698 [2024-12-05 12:14:10.693926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.698 [2024-12-05 12:14:10.693959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.698 qpair failed and we were unable to recover it. 00:30:36.698 [2024-12-05 12:14:10.694144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.698 [2024-12-05 12:14:10.694177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.698 qpair failed and we were unable to recover it. 00:30:36.698 [2024-12-05 12:14:10.694293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.698 [2024-12-05 12:14:10.694324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.698 qpair failed and we were unable to recover it. 00:30:36.698 [2024-12-05 12:14:10.694465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.698 [2024-12-05 12:14:10.694499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.698 qpair failed and we were unable to recover it. 
00:30:36.698 [2024-12-05 12:14:10.694625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.698 [2024-12-05 12:14:10.694658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.698 qpair failed and we were unable to recover it. 00:30:36.698 [2024-12-05 12:14:10.694786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.698 [2024-12-05 12:14:10.694817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.698 qpair failed and we were unable to recover it. 00:30:36.698 [2024-12-05 12:14:10.694926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.698 [2024-12-05 12:14:10.694958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.698 qpair failed and we were unable to recover it. 00:30:36.698 [2024-12-05 12:14:10.695068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.698 [2024-12-05 12:14:10.695100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.698 qpair failed and we were unable to recover it. 00:30:36.698 [2024-12-05 12:14:10.695237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.698 [2024-12-05 12:14:10.695268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.698 qpair failed and we were unable to recover it. 
00:30:36.698 [2024-12-05 12:14:10.695410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.698 [2024-12-05 12:14:10.695443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.698 qpair failed and we were unable to recover it. 00:30:36.698 [2024-12-05 12:14:10.695652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.698 [2024-12-05 12:14:10.695685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.698 qpair failed and we were unable to recover it. 00:30:36.698 [2024-12-05 12:14:10.695854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.698 [2024-12-05 12:14:10.695886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.698 qpair failed and we were unable to recover it. 00:30:36.698 [2024-12-05 12:14:10.695995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.698 [2024-12-05 12:14:10.696040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.698 qpair failed and we were unable to recover it. 00:30:36.698 [2024-12-05 12:14:10.696235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.698 [2024-12-05 12:14:10.696267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.698 qpair failed and we were unable to recover it. 
00:30:36.698 [2024-12-05 12:14:10.696454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.698 [2024-12-05 12:14:10.696487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.698 qpair failed and we were unable to recover it. 00:30:36.698 [2024-12-05 12:14:10.696725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.698 [2024-12-05 12:14:10.696757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.698 qpair failed and we were unable to recover it. 00:30:36.698 [2024-12-05 12:14:10.696953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.698 [2024-12-05 12:14:10.696985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.698 qpair failed and we were unable to recover it. 00:30:36.698 [2024-12-05 12:14:10.697122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.699 [2024-12-05 12:14:10.697155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.699 qpair failed and we were unable to recover it. 00:30:36.699 [2024-12-05 12:14:10.697351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.699 [2024-12-05 12:14:10.697393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.699 qpair failed and we were unable to recover it. 
00:30:36.699 [2024-12-05 12:14:10.697631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.699 [2024-12-05 12:14:10.697662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.699 qpair failed and we were unable to recover it. 00:30:36.699 [2024-12-05 12:14:10.697797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.699 [2024-12-05 12:14:10.697828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.699 qpair failed and we were unable to recover it. 00:30:36.699 [2024-12-05 12:14:10.698085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.699 [2024-12-05 12:14:10.698117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.699 qpair failed and we were unable to recover it. 00:30:36.699 [2024-12-05 12:14:10.698308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.699 [2024-12-05 12:14:10.698339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.699 qpair failed and we were unable to recover it. 00:30:36.699 [2024-12-05 12:14:10.698473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.699 [2024-12-05 12:14:10.698508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.699 qpair failed and we were unable to recover it. 
00:30:36.699 [2024-12-05 12:14:10.698710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.699 [2024-12-05 12:14:10.698741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.699 qpair failed and we were unable to recover it. 00:30:36.699 [2024-12-05 12:14:10.698881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.699 [2024-12-05 12:14:10.698912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.699 qpair failed and we were unable to recover it. 00:30:36.699 12:14:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:36.699 [2024-12-05 12:14:10.699115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.699 [2024-12-05 12:14:10.699149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.699 qpair failed and we were unable to recover it. 00:30:36.699 12:14:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:30:36.699 [2024-12-05 12:14:10.699391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.699 [2024-12-05 12:14:10.699426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.699 12:14:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:30:36.699 qpair failed and we were unable to recover it. 
00:30:36.699 12:14:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:36.699 [2024-12-05 12:14:10.699615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.699 [2024-12-05 12:14:10.699649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.699 qpair failed and we were unable to recover it. 00:30:36.699 12:14:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:36.699 [2024-12-05 12:14:10.699885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.699 [2024-12-05 12:14:10.699918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.699 qpair failed and we were unable to recover it. 00:30:36.699 [2024-12-05 12:14:10.700161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.699 [2024-12-05 12:14:10.700193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.699 qpair failed and we were unable to recover it. 00:30:36.699 [2024-12-05 12:14:10.700384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.699 [2024-12-05 12:14:10.700417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.699 qpair failed and we were unable to recover it. 00:30:36.699 [2024-12-05 12:14:10.700610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.699 [2024-12-05 12:14:10.700643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.699 qpair failed and we were unable to recover it. 
00:30:36.699 [2024-12-05 12:14:10.700785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.699 [2024-12-05 12:14:10.700816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.699 qpair failed and we were unable to recover it. 00:30:36.699 [2024-12-05 12:14:10.701031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.699 [2024-12-05 12:14:10.701062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.699 qpair failed and we were unable to recover it. 00:30:36.699 [2024-12-05 12:14:10.701268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.699 [2024-12-05 12:14:10.701299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.699 qpair failed and we were unable to recover it. 00:30:36.699 [2024-12-05 12:14:10.701480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.699 [2024-12-05 12:14:10.701512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.699 qpair failed and we were unable to recover it. 00:30:36.699 [2024-12-05 12:14:10.701637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.699 [2024-12-05 12:14:10.701674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.699 qpair failed and we were unable to recover it. 
00:30:36.699 [2024-12-05 12:14:10.701878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.699 [2024-12-05 12:14:10.701909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.699 qpair failed and we were unable to recover it. 00:30:36.699 [2024-12-05 12:14:10.702101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.699 [2024-12-05 12:14:10.702132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.699 qpair failed and we were unable to recover it. 00:30:36.699 [2024-12-05 12:14:10.702260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.699 [2024-12-05 12:14:10.702294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.699 qpair failed and we were unable to recover it. 00:30:36.699 [2024-12-05 12:14:10.702555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.699 [2024-12-05 12:14:10.702587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.699 qpair failed and we were unable to recover it. 00:30:36.699 [2024-12-05 12:14:10.702810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.699 [2024-12-05 12:14:10.702842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.699 qpair failed and we were unable to recover it. 
00:30:36.699 [2024-12-05 12:14:10.703031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.699 [2024-12-05 12:14:10.703063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.699 qpair failed and we were unable to recover it. 00:30:36.699 [2024-12-05 12:14:10.703271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.699 [2024-12-05 12:14:10.703303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.699 qpair failed and we were unable to recover it. 00:30:36.699 [2024-12-05 12:14:10.703496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.699 [2024-12-05 12:14:10.703529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.699 qpair failed and we were unable to recover it. 00:30:36.699 [2024-12-05 12:14:10.703723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.699 [2024-12-05 12:14:10.703754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.699 qpair failed and we were unable to recover it. 00:30:36.699 [2024-12-05 12:14:10.703938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.699 [2024-12-05 12:14:10.703970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.699 qpair failed and we were unable to recover it. 
00:30:36.699 [2024-12-05 12:14:10.704216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.699 [2024-12-05 12:14:10.704247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.699 qpair failed and we were unable to recover it. 00:30:36.699 [2024-12-05 12:14:10.704427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.699 [2024-12-05 12:14:10.704460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.699 qpair failed and we were unable to recover it. 00:30:36.699 [2024-12-05 12:14:10.704653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.699 [2024-12-05 12:14:10.704685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.699 qpair failed and we were unable to recover it. 00:30:36.699 [2024-12-05 12:14:10.704901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.699 [2024-12-05 12:14:10.704932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.699 qpair failed and we were unable to recover it. 00:30:36.699 [2024-12-05 12:14:10.705201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.699 [2024-12-05 12:14:10.705234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.699 qpair failed and we were unable to recover it. 
00:30:36.699 [2024-12-05 12:14:10.705361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.699 [2024-12-05 12:14:10.705402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.699 qpair failed and we were unable to recover it. 00:30:36.700 [2024-12-05 12:14:10.705581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.700 [2024-12-05 12:14:10.705614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.700 qpair failed and we were unable to recover it. 00:30:36.700 [2024-12-05 12:14:10.705804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.700 [2024-12-05 12:14:10.705835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.700 qpair failed and we were unable to recover it. 00:30:36.700 [2024-12-05 12:14:10.706038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.700 [2024-12-05 12:14:10.706069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.700 qpair failed and we were unable to recover it. 00:30:36.700 [2024-12-05 12:14:10.706263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.700 [2024-12-05 12:14:10.706297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.700 qpair failed and we were unable to recover it. 
00:30:36.700 [2024-12-05 12:14:10.706581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.700 [2024-12-05 12:14:10.706613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.700 qpair failed and we were unable to recover it. 00:30:36.700 [2024-12-05 12:14:10.706880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.700 [2024-12-05 12:14:10.706912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.700 qpair failed and we were unable to recover it. 00:30:36.700 [2024-12-05 12:14:10.707202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.700 [2024-12-05 12:14:10.707234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.700 qpair failed and we were unable to recover it. 00:30:36.700 [2024-12-05 12:14:10.707528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.700 [2024-12-05 12:14:10.707562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.700 qpair failed and we were unable to recover it. 00:30:36.700 [2024-12-05 12:14:10.707708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.700 [2024-12-05 12:14:10.707740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.700 qpair failed and we were unable to recover it. 
00:30:36.700 [2024-12-05 12:14:10.707996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.700 [2024-12-05 12:14:10.708027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.700 qpair failed and we were unable to recover it. 00:30:36.700 [2024-12-05 12:14:10.708212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.700 [2024-12-05 12:14:10.708249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.700 qpair failed and we were unable to recover it. 00:30:36.700 [2024-12-05 12:14:10.708360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.700 [2024-12-05 12:14:10.708402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.700 qpair failed and we were unable to recover it. 00:30:36.700 [2024-12-05 12:14:10.708639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.700 [2024-12-05 12:14:10.708672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.700 qpair failed and we were unable to recover it. 00:30:36.700 [2024-12-05 12:14:10.708804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.700 [2024-12-05 12:14:10.708837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.700 qpair failed and we were unable to recover it. 
00:30:36.700 [2024-12-05 12:14:10.709045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.700 [2024-12-05 12:14:10.709076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.700 qpair failed and we were unable to recover it. 00:30:36.700 [2024-12-05 12:14:10.709262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.700 [2024-12-05 12:14:10.709294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.700 qpair failed and we were unable to recover it. 00:30:36.700 [2024-12-05 12:14:10.709570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.700 [2024-12-05 12:14:10.709603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.700 qpair failed and we were unable to recover it. 00:30:36.700 [2024-12-05 12:14:10.709790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.700 [2024-12-05 12:14:10.709824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.700 qpair failed and we were unable to recover it. 00:30:36.700 [2024-12-05 12:14:10.710029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.700 [2024-12-05 12:14:10.710060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.700 qpair failed and we were unable to recover it. 
00:30:36.700 [2024-12-05 12:14:10.710294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.700 [2024-12-05 12:14:10.710327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.700 qpair failed and we were unable to recover it. 00:30:36.700 [2024-12-05 12:14:10.710490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.700 [2024-12-05 12:14:10.710524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.700 qpair failed and we were unable to recover it. 00:30:36.700 [2024-12-05 12:14:10.710661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.700 [2024-12-05 12:14:10.710694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.700 qpair failed and we were unable to recover it. 00:30:36.700 [2024-12-05 12:14:10.710880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.700 [2024-12-05 12:14:10.710912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.700 qpair failed and we were unable to recover it. 00:30:36.700 [2024-12-05 12:14:10.711044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.700 [2024-12-05 12:14:10.711075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.700 qpair failed and we were unable to recover it. 
00:30:36.700 [2024-12-05 12:14:10.711273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.700 [2024-12-05 12:14:10.711311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.700 qpair failed and we were unable to recover it. 00:30:36.700 [2024-12-05 12:14:10.711561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.700 [2024-12-05 12:14:10.711594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.700 qpair failed and we were unable to recover it. 00:30:36.700 [2024-12-05 12:14:10.711793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.700 [2024-12-05 12:14:10.711825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.700 qpair failed and we were unable to recover it. 00:30:36.700 [2024-12-05 12:14:10.711959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.700 [2024-12-05 12:14:10.711991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.700 qpair failed and we were unable to recover it. 00:30:36.700 [2024-12-05 12:14:10.712252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.700 [2024-12-05 12:14:10.712283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.700 qpair failed and we were unable to recover it. 
00:30:36.700 [2024-12-05 12:14:10.712463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.700 [2024-12-05 12:14:10.712496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.700 qpair failed and we were unable to recover it. 00:30:36.700 [2024-12-05 12:14:10.712707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.700 [2024-12-05 12:14:10.712740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.700 qpair failed and we were unable to recover it. 00:30:36.700 [2024-12-05 12:14:10.713030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.700 [2024-12-05 12:14:10.713062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.700 qpair failed and we were unable to recover it. 00:30:36.700 [2024-12-05 12:14:10.713251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.700 [2024-12-05 12:14:10.713284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.700 qpair failed and we were unable to recover it. 00:30:36.700 [2024-12-05 12:14:10.713515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.700 [2024-12-05 12:14:10.713548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.700 qpair failed and we were unable to recover it. 
00:30:36.700 [2024-12-05 12:14:10.713732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.700 [2024-12-05 12:14:10.713764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.700 qpair failed and we were unable to recover it. 00:30:36.700 [2024-12-05 12:14:10.713945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.700 [2024-12-05 12:14:10.713978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.700 qpair failed and we were unable to recover it. 00:30:36.700 [2024-12-05 12:14:10.714182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.700 [2024-12-05 12:14:10.714214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.700 qpair failed and we were unable to recover it. 00:30:36.700 [2024-12-05 12:14:10.714406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.700 [2024-12-05 12:14:10.714445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.700 qpair failed and we were unable to recover it. 00:30:36.700 [2024-12-05 12:14:10.714632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.701 [2024-12-05 12:14:10.714663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.701 qpair failed and we were unable to recover it. 
00:30:36.701 [2024-12-05 12:14:10.714844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.701 [2024-12-05 12:14:10.714876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.701 qpair failed and we were unable to recover it. 00:30:36.701 [2024-12-05 12:14:10.715088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.701 [2024-12-05 12:14:10.715120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.701 qpair failed and we were unable to recover it. 00:30:36.701 [2024-12-05 12:14:10.715293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.701 [2024-12-05 12:14:10.715324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.701 qpair failed and we were unable to recover it. 00:30:36.701 [2024-12-05 12:14:10.715548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.701 [2024-12-05 12:14:10.715581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.701 qpair failed and we were unable to recover it. 00:30:36.701 [2024-12-05 12:14:10.715764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.701 [2024-12-05 12:14:10.715796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.701 qpair failed and we were unable to recover it. 
00:30:36.701 [2024-12-05 12:14:10.716012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.701 [2024-12-05 12:14:10.716043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.701 qpair failed and we were unable to recover it. 00:30:36.701 [2024-12-05 12:14:10.716303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.701 [2024-12-05 12:14:10.716336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.701 qpair failed and we were unable to recover it. 00:30:36.701 [2024-12-05 12:14:10.716541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.701 [2024-12-05 12:14:10.716592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.701 qpair failed and we were unable to recover it. 00:30:36.701 [2024-12-05 12:14:10.716759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.701 [2024-12-05 12:14:10.716794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.701 qpair failed and we were unable to recover it. 00:30:36.701 [2024-12-05 12:14:10.716977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.701 [2024-12-05 12:14:10.717009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.701 qpair failed and we were unable to recover it. 
00:30:36.701 [2024-12-05 12:14:10.717191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.701 [2024-12-05 12:14:10.717223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.701 qpair failed and we were unable to recover it. 00:30:36.701 [2024-12-05 12:14:10.717402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.701 [2024-12-05 12:14:10.717434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.701 qpair failed and we were unable to recover it. 00:30:36.701 [2024-12-05 12:14:10.717588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.701 [2024-12-05 12:14:10.717620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.701 qpair failed and we were unable to recover it. 00:30:36.701 [2024-12-05 12:14:10.717808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.701 [2024-12-05 12:14:10.717841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.701 qpair failed and we were unable to recover it. 00:30:36.701 [2024-12-05 12:14:10.718097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.701 [2024-12-05 12:14:10.718129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.701 qpair failed and we were unable to recover it. 
00:30:36.701 [2024-12-05 12:14:10.718307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.701 [2024-12-05 12:14:10.718341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.701 qpair failed and we were unable to recover it. 00:30:36.701 [2024-12-05 12:14:10.718539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.701 [2024-12-05 12:14:10.718570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.701 qpair failed and we were unable to recover it. 00:30:36.701 [2024-12-05 12:14:10.718760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.701 [2024-12-05 12:14:10.718791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.701 qpair failed and we were unable to recover it. 00:30:36.701 [2024-12-05 12:14:10.718926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.701 [2024-12-05 12:14:10.718957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.701 qpair failed and we were unable to recover it. 00:30:36.701 [2024-12-05 12:14:10.719159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.701 [2024-12-05 12:14:10.719191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.701 qpair failed and we were unable to recover it. 
00:30:36.701 [2024-12-05 12:14:10.719437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.701 [2024-12-05 12:14:10.719469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.701 qpair failed and we were unable to recover it. 00:30:36.701 [2024-12-05 12:14:10.719662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.701 [2024-12-05 12:14:10.719693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.701 qpair failed and we were unable to recover it. 00:30:36.701 [2024-12-05 12:14:10.719867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.701 [2024-12-05 12:14:10.719899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.701 qpair failed and we were unable to recover it. 00:30:36.701 [2024-12-05 12:14:10.720022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.701 [2024-12-05 12:14:10.720054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.701 qpair failed and we were unable to recover it. 00:30:36.701 [2024-12-05 12:14:10.720308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.701 [2024-12-05 12:14:10.720341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc420000b90 with addr=10.0.0.2, port=4420 00:30:36.701 qpair failed and we were unable to recover it. 
00:30:36.701 [2024-12-05 12:14:10.720494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.701 [2024-12-05 12:14:10.720528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.701 qpair failed and we were unable to recover it. 00:30:36.701 [2024-12-05 12:14:10.720672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.701 [2024-12-05 12:14:10.720705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.701 qpair failed and we were unable to recover it. 00:30:36.701 [2024-12-05 12:14:10.720838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.701 [2024-12-05 12:14:10.720870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.701 qpair failed and we were unable to recover it. 00:30:36.701 [2024-12-05 12:14:10.721083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.701 [2024-12-05 12:14:10.721116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.701 qpair failed and we were unable to recover it. 00:30:36.701 [2024-12-05 12:14:10.721304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.701 [2024-12-05 12:14:10.721337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.701 qpair failed and we were unable to recover it. 
00:30:36.701 [2024-12-05 12:14:10.721538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.701 [2024-12-05 12:14:10.721573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.701 qpair failed and we were unable to recover it. 00:30:36.701 [2024-12-05 12:14:10.721689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.701 [2024-12-05 12:14:10.721720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.701 qpair failed and we were unable to recover it. 00:30:36.701 [2024-12-05 12:14:10.721828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.701 [2024-12-05 12:14:10.721860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.701 qpair failed and we were unable to recover it. 00:30:36.701 [2024-12-05 12:14:10.722140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.701 [2024-12-05 12:14:10.722171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.701 qpair failed and we were unable to recover it. 00:30:36.701 [2024-12-05 12:14:10.722357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.701 [2024-12-05 12:14:10.722398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.701 qpair failed and we were unable to recover it. 
00:30:36.701 [2024-12-05 12:14:10.722540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.701 [2024-12-05 12:14:10.722572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.701 qpair failed and we were unable to recover it. 00:30:36.701 [2024-12-05 12:14:10.722759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.701 [2024-12-05 12:14:10.722791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.701 qpair failed and we were unable to recover it. 00:30:36.702 [2024-12-05 12:14:10.722987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.702 [2024-12-05 12:14:10.723020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.702 qpair failed and we were unable to recover it. 00:30:36.702 [2024-12-05 12:14:10.723224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.702 [2024-12-05 12:14:10.723255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.702 qpair failed and we were unable to recover it. 00:30:36.702 [2024-12-05 12:14:10.723449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.702 [2024-12-05 12:14:10.723483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.702 qpair failed and we were unable to recover it. 
00:30:36.702 [2024-12-05 12:14:10.723612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.702 [2024-12-05 12:14:10.723644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.702 qpair failed and we were unable to recover it. 00:30:36.702 [2024-12-05 12:14:10.723784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.702 [2024-12-05 12:14:10.723818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.702 qpair failed and we were unable to recover it. 00:30:36.702 [2024-12-05 12:14:10.724027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.702 [2024-12-05 12:14:10.724059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.702 qpair failed and we were unable to recover it. 00:30:36.702 [2024-12-05 12:14:10.724265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.702 [2024-12-05 12:14:10.724296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.702 qpair failed and we were unable to recover it. 00:30:36.702 [2024-12-05 12:14:10.724498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.702 [2024-12-05 12:14:10.724529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.702 qpair failed and we were unable to recover it. 
00:30:36.702 [2024-12-05 12:14:10.724665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.702 [2024-12-05 12:14:10.724697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.702 qpair failed and we were unable to recover it. 00:30:36.702 [2024-12-05 12:14:10.724813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.702 [2024-12-05 12:14:10.724845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.702 qpair failed and we were unable to recover it. 00:30:36.702 [2024-12-05 12:14:10.725049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.702 [2024-12-05 12:14:10.725081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.702 qpair failed and we were unable to recover it. 00:30:36.702 [2024-12-05 12:14:10.725270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.702 [2024-12-05 12:14:10.725303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.702 qpair failed and we were unable to recover it. 00:30:36.702 [2024-12-05 12:14:10.725559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.702 [2024-12-05 12:14:10.725591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.702 qpair failed and we were unable to recover it. 
00:30:36.702 [2024-12-05 12:14:10.725722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.702 [2024-12-05 12:14:10.725754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.702 qpair failed and we were unable to recover it. 00:30:36.702 [2024-12-05 12:14:10.725949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.702 [2024-12-05 12:14:10.725980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.702 qpair failed and we were unable to recover it. 00:30:36.702 [2024-12-05 12:14:10.726096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.702 [2024-12-05 12:14:10.726134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.702 qpair failed and we were unable to recover it. 00:30:36.702 [2024-12-05 12:14:10.726345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.702 [2024-12-05 12:14:10.726385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.702 qpair failed and we were unable to recover it. 00:30:36.702 [2024-12-05 12:14:10.726631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.702 [2024-12-05 12:14:10.726662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.702 qpair failed and we were unable to recover it. 
00:30:36.702 [2024-12-05 12:14:10.726846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.702 [2024-12-05 12:14:10.726880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.702 qpair failed and we were unable to recover it. 00:30:36.702 [2024-12-05 12:14:10.727024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.702 [2024-12-05 12:14:10.727056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.702 qpair failed and we were unable to recover it. 00:30:36.702 [2024-12-05 12:14:10.727163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.702 [2024-12-05 12:14:10.727194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.702 qpair failed and we were unable to recover it. 00:30:36.702 [2024-12-05 12:14:10.727390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.702 [2024-12-05 12:14:10.727423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.702 qpair failed and we were unable to recover it. 00:30:36.702 [2024-12-05 12:14:10.727553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.702 [2024-12-05 12:14:10.727584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.702 qpair failed and we were unable to recover it. 
00:30:36.702 [2024-12-05 12:14:10.727771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.702 [2024-12-05 12:14:10.727803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.702 qpair failed and we were unable to recover it. 00:30:36.702 [2024-12-05 12:14:10.728000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.702 [2024-12-05 12:14:10.728031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.702 qpair failed and we were unable to recover it. 00:30:36.702 [2024-12-05 12:14:10.728146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.702 [2024-12-05 12:14:10.728179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.702 qpair failed and we were unable to recover it. 00:30:36.702 [2024-12-05 12:14:10.728289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.702 [2024-12-05 12:14:10.728321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.702 qpair failed and we were unable to recover it. 00:30:36.702 [2024-12-05 12:14:10.728445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.702 [2024-12-05 12:14:10.728477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.702 qpair failed and we were unable to recover it. 
00:30:36.702 [2024-12-05 12:14:10.728662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.702 [2024-12-05 12:14:10.728695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.702 qpair failed and we were unable to recover it. 00:30:36.702 [2024-12-05 12:14:10.728832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.702 [2024-12-05 12:14:10.728863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.702 qpair failed and we were unable to recover it. 00:30:36.702 [2024-12-05 12:14:10.728991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.702 [2024-12-05 12:14:10.729023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.702 qpair failed and we were unable to recover it. 00:30:36.702 [2024-12-05 12:14:10.729162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.702 [2024-12-05 12:14:10.729193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.702 qpair failed and we were unable to recover it. 00:30:36.702 [2024-12-05 12:14:10.729404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.702 [2024-12-05 12:14:10.729437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.702 qpair failed and we were unable to recover it. 
00:30:36.702 [2024-12-05 12:14:10.729539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.702 [2024-12-05 12:14:10.729570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.702 qpair failed and we were unable to recover it. 00:30:36.703 [2024-12-05 12:14:10.729695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.703 [2024-12-05 12:14:10.729727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.703 qpair failed and we were unable to recover it. 00:30:36.703 [2024-12-05 12:14:10.729851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.703 [2024-12-05 12:14:10.729883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.703 qpair failed and we were unable to recover it. 00:30:36.703 [2024-12-05 12:14:10.730071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.703 [2024-12-05 12:14:10.730103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.703 qpair failed and we were unable to recover it. 00:30:36.703 [2024-12-05 12:14:10.730206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.703 [2024-12-05 12:14:10.730239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.703 qpair failed and we were unable to recover it. 
00:30:36.703 [2024-12-05 12:14:10.730356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.703 [2024-12-05 12:14:10.730399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.703 qpair failed and we were unable to recover it. 00:30:36.703 [2024-12-05 12:14:10.730583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.703 [2024-12-05 12:14:10.730616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.703 qpair failed and we were unable to recover it. 00:30:36.703 [2024-12-05 12:14:10.730801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.703 [2024-12-05 12:14:10.730833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.703 qpair failed and we were unable to recover it. 00:30:36.703 [2024-12-05 12:14:10.731025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.703 [2024-12-05 12:14:10.731056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.703 qpair failed and we were unable to recover it. 00:30:36.703 [2024-12-05 12:14:10.731189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.703 [2024-12-05 12:14:10.731227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.703 qpair failed and we were unable to recover it. 
00:30:36.703 [2024-12-05 12:14:10.731439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.703 [2024-12-05 12:14:10.731472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.703 qpair failed and we were unable to recover it. 00:30:36.703 [2024-12-05 12:14:10.731592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.703 [2024-12-05 12:14:10.731623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.703 qpair failed and we were unable to recover it. 00:30:36.703 [2024-12-05 12:14:10.731818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.703 [2024-12-05 12:14:10.731853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.703 qpair failed and we were unable to recover it. 00:30:36.703 [2024-12-05 12:14:10.731966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.703 [2024-12-05 12:14:10.731998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.703 qpair failed and we were unable to recover it. 00:30:36.703 [2024-12-05 12:14:10.732108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.703 [2024-12-05 12:14:10.732139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420 00:30:36.703 qpair failed and we were unable to recover it. 
00:30:36.703 [2024-12-05 12:14:10.732245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.703 [2024-12-05 12:14:10.732276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.703 qpair failed and we were unable to recover it.
00:30:36.703 [2024-12-05 12:14:10.732401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.703 [2024-12-05 12:14:10.732434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.703 qpair failed and we were unable to recover it.
00:30:36.703 [2024-12-05 12:14:10.732534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.703 [2024-12-05 12:14:10.732565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.703 qpair failed and we were unable to recover it.
00:30:36.703 [2024-12-05 12:14:10.732680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.703 [2024-12-05 12:14:10.732712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.703 qpair failed and we were unable to recover it.
00:30:36.703 [2024-12-05 12:14:10.732890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.703 [2024-12-05 12:14:10.732922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.703 qpair failed and we were unable to recover it.
00:30:36.703 [2024-12-05 12:14:10.733040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.703 [2024-12-05 12:14:10.733071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.703 qpair failed and we were unable to recover it.
00:30:36.703 [2024-12-05 12:14:10.733262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.703 [2024-12-05 12:14:10.733293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.703 qpair failed and we were unable to recover it.
00:30:36.703 [2024-12-05 12:14:10.733490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.703 [2024-12-05 12:14:10.733523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.703 qpair failed and we were unable to recover it.
00:30:36.703 [2024-12-05 12:14:10.733660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.703 [2024-12-05 12:14:10.733696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.703 qpair failed and we were unable to recover it.
00:30:36.703 [2024-12-05 12:14:10.733820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.703 [2024-12-05 12:14:10.733853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.703 qpair failed and we were unable to recover it.
00:30:36.703 [2024-12-05 12:14:10.733964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.703 12:14:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:36.703 [2024-12-05 12:14:10.733997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.703 qpair failed and we were unable to recover it.
00:30:36.703 [2024-12-05 12:14:10.734170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.703 [2024-12-05 12:14:10.734203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.703 qpair failed and we were unable to recover it.
00:30:36.703 [2024-12-05 12:14:10.734305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.703 [2024-12-05 12:14:10.734338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.703 qpair failed and we were unable to recover it.
00:30:36.703 12:14:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:30:36.703 [2024-12-05 12:14:10.734494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.703 [2024-12-05 12:14:10.734536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.703 qpair failed and we were unable to recover it.
00:30:36.703 [2024-12-05 12:14:10.734654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.703 [2024-12-05 12:14:10.734689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.703 qpair failed and we were unable to recover it.
00:30:36.703 12:14:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:36.703 [2024-12-05 12:14:10.734817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.703 [2024-12-05 12:14:10.734849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.703 qpair failed and we were unable to recover it.
00:30:36.703 12:14:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:36.703 [2024-12-05 12:14:10.735060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.703 [2024-12-05 12:14:10.735094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.703 qpair failed and we were unable to recover it.
00:30:36.703 [2024-12-05 12:14:10.735265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.703 [2024-12-05 12:14:10.735297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.703 qpair failed and we were unable to recover it.
00:30:36.703 [2024-12-05 12:14:10.735482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.703 [2024-12-05 12:14:10.735517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.703 qpair failed and we were unable to recover it.
00:30:36.703 [2024-12-05 12:14:10.735652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.703 [2024-12-05 12:14:10.735692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.703 qpair failed and we were unable to recover it.
00:30:36.703 [2024-12-05 12:14:10.735819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.703 [2024-12-05 12:14:10.735852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.703 qpair failed and we were unable to recover it.
00:30:36.703 [2024-12-05 12:14:10.735970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.703 [2024-12-05 12:14:10.736002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.703 qpair failed and we were unable to recover it.
00:30:36.703 [2024-12-05 12:14:10.736120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.704 [2024-12-05 12:14:10.736151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.704 qpair failed and we were unable to recover it.
00:30:36.704 [2024-12-05 12:14:10.736263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.704 [2024-12-05 12:14:10.736294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.704 qpair failed and we were unable to recover it.
00:30:36.704 [2024-12-05 12:14:10.736412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.704 [2024-12-05 12:14:10.736445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.704 qpair failed and we were unable to recover it.
00:30:36.704 [2024-12-05 12:14:10.736625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.704 [2024-12-05 12:14:10.736656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.704 qpair failed and we were unable to recover it.
00:30:36.704 [2024-12-05 12:14:10.736788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.704 [2024-12-05 12:14:10.736820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.704 qpair failed and we were unable to recover it.
00:30:36.704 [2024-12-05 12:14:10.737047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.704 [2024-12-05 12:14:10.737077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.704 qpair failed and we were unable to recover it.
00:30:36.704 [2024-12-05 12:14:10.737194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.704 [2024-12-05 12:14:10.737229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.704 qpair failed and we were unable to recover it.
00:30:36.704 [2024-12-05 12:14:10.737336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.704 [2024-12-05 12:14:10.737379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.704 qpair failed and we were unable to recover it.
00:30:36.704 [2024-12-05 12:14:10.737491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.704 [2024-12-05 12:14:10.737522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.704 qpair failed and we were unable to recover it.
00:30:36.704 [2024-12-05 12:14:10.737625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.704 [2024-12-05 12:14:10.737657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.704 qpair failed and we were unable to recover it.
00:30:36.704 [2024-12-05 12:14:10.737831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.704 [2024-12-05 12:14:10.737862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.704 qpair failed and we were unable to recover it.
00:30:36.704 [2024-12-05 12:14:10.737992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.704 [2024-12-05 12:14:10.738024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.704 qpair failed and we were unable to recover it.
00:30:36.704 [2024-12-05 12:14:10.738240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.704 [2024-12-05 12:14:10.738271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420
00:30:36.704 qpair failed and we were unable to recover it.
00:30:36.704 [2024-12-05 12:14:10.738399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.704 [2024-12-05 12:14:10.738434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.704 qpair failed and we were unable to recover it.
00:30:36.704 [2024-12-05 12:14:10.738574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.704 [2024-12-05 12:14:10.738605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.704 qpair failed and we were unable to recover it.
00:30:36.704 [2024-12-05 12:14:10.738714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.704 [2024-12-05 12:14:10.738745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.704 qpair failed and we were unable to recover it.
00:30:36.704 [2024-12-05 12:14:10.738861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.704 [2024-12-05 12:14:10.738893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.704 qpair failed and we were unable to recover it.
00:30:36.704 [2024-12-05 12:14:10.739023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.704 [2024-12-05 12:14:10.739053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.704 qpair failed and we were unable to recover it.
00:30:36.704 [2024-12-05 12:14:10.739260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.704 [2024-12-05 12:14:10.739292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.704 qpair failed and we were unable to recover it.
00:30:36.704 [2024-12-05 12:14:10.739473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.704 [2024-12-05 12:14:10.739505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.704 qpair failed and we were unable to recover it.
00:30:36.704 [2024-12-05 12:14:10.739683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.704 [2024-12-05 12:14:10.739716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.704 qpair failed and we were unable to recover it.
00:30:36.704 [2024-12-05 12:14:10.739977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.704 [2024-12-05 12:14:10.740008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.704 qpair failed and we were unable to recover it.
00:30:36.704 [2024-12-05 12:14:10.740114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.704 [2024-12-05 12:14:10.740145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.704 qpair failed and we were unable to recover it.
00:30:36.704 [2024-12-05 12:14:10.740279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.704 [2024-12-05 12:14:10.740311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.704 qpair failed and we were unable to recover it.
00:30:36.704 [2024-12-05 12:14:10.740530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.704 [2024-12-05 12:14:10.740574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.704 qpair failed and we were unable to recover it.
00:30:36.704 [2024-12-05 12:14:10.740744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.704 [2024-12-05 12:14:10.740777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.704 qpair failed and we were unable to recover it.
00:30:36.704 [2024-12-05 12:14:10.740957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.704 [2024-12-05 12:14:10.740988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.704 qpair failed and we were unable to recover it.
00:30:36.704 [2024-12-05 12:14:10.741193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.704 [2024-12-05 12:14:10.741223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.704 qpair failed and we were unable to recover it.
00:30:36.704 [2024-12-05 12:14:10.741478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.704 [2024-12-05 12:14:10.741511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.704 qpair failed and we were unable to recover it.
00:30:36.704 [2024-12-05 12:14:10.741628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.704 [2024-12-05 12:14:10.741660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.704 qpair failed and we were unable to recover it.
00:30:36.704 [2024-12-05 12:14:10.741796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.704 [2024-12-05 12:14:10.741827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.704 qpair failed and we were unable to recover it.
00:30:36.704 [2024-12-05 12:14:10.741946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.704 [2024-12-05 12:14:10.741978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.704 qpair failed and we were unable to recover it.
00:30:36.704 [2024-12-05 12:14:10.742148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.704 [2024-12-05 12:14:10.742179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.704 qpair failed and we were unable to recover it.
00:30:36.704 [2024-12-05 12:14:10.742360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.704 [2024-12-05 12:14:10.742402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.704 qpair failed and we were unable to recover it.
00:30:36.704 [2024-12-05 12:14:10.742579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.704 [2024-12-05 12:14:10.742611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.704 qpair failed and we were unable to recover it.
00:30:36.704 [2024-12-05 12:14:10.742785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.704 [2024-12-05 12:14:10.742817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.705 qpair failed and we were unable to recover it.
00:30:36.705 [2024-12-05 12:14:10.742993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.705 [2024-12-05 12:14:10.743025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.705 qpair failed and we were unable to recover it.
00:30:36.705 [2024-12-05 12:14:10.743202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.705 [2024-12-05 12:14:10.743234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.705 qpair failed and we were unable to recover it.
00:30:36.705 [2024-12-05 12:14:10.743345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.705 [2024-12-05 12:14:10.743388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.705 qpair failed and we were unable to recover it.
00:30:36.705 [2024-12-05 12:14:10.743586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.705 [2024-12-05 12:14:10.743617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.705 qpair failed and we were unable to recover it.
00:30:36.705 [2024-12-05 12:14:10.743796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.705 [2024-12-05 12:14:10.743827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.705 qpair failed and we were unable to recover it.
00:30:36.705 [2024-12-05 12:14:10.744036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.705 [2024-12-05 12:14:10.744067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.705 qpair failed and we were unable to recover it.
00:30:36.705 [2024-12-05 12:14:10.744195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.705 [2024-12-05 12:14:10.744226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.705 qpair failed and we were unable to recover it.
00:30:36.705 [2024-12-05 12:14:10.744425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.705 [2024-12-05 12:14:10.744457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.705 qpair failed and we were unable to recover it.
00:30:36.705 [2024-12-05 12:14:10.744564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.705 [2024-12-05 12:14:10.744596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.705 qpair failed and we were unable to recover it.
00:30:36.705 [2024-12-05 12:14:10.744706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.705 [2024-12-05 12:14:10.744737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.705 qpair failed and we were unable to recover it.
00:30:36.705 [2024-12-05 12:14:10.744905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.705 [2024-12-05 12:14:10.744937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.705 qpair failed and we were unable to recover it.
00:30:36.705 [2024-12-05 12:14:10.745111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.705 [2024-12-05 12:14:10.745144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.705 qpair failed and we were unable to recover it.
00:30:36.705 [2024-12-05 12:14:10.745279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.705 [2024-12-05 12:14:10.745311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.705 qpair failed and we were unable to recover it.
00:30:36.705 [2024-12-05 12:14:10.745441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.705 [2024-12-05 12:14:10.745473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.705 qpair failed and we were unable to recover it.
00:30:36.705 [2024-12-05 12:14:10.745591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.705 [2024-12-05 12:14:10.745623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.705 qpair failed and we were unable to recover it.
00:30:36.705 [2024-12-05 12:14:10.745722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.705 [2024-12-05 12:14:10.745752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.705 qpair failed and we were unable to recover it.
00:30:36.705 [2024-12-05 12:14:10.745936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.705 [2024-12-05 12:14:10.745968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.705 qpair failed and we were unable to recover it.
00:30:36.705 [2024-12-05 12:14:10.746147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.705 [2024-12-05 12:14:10.746179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.705 qpair failed and we were unable to recover it.
00:30:36.705 [2024-12-05 12:14:10.746435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.705 [2024-12-05 12:14:10.746467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.705 qpair failed and we were unable to recover it.
00:30:36.705 [2024-12-05 12:14:10.746580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.705 [2024-12-05 12:14:10.746611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.705 qpair failed and we were unable to recover it.
00:30:36.705 [2024-12-05 12:14:10.746783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.705 [2024-12-05 12:14:10.746815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.705 qpair failed and we were unable to recover it.
00:30:36.705 [2024-12-05 12:14:10.747002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.705 [2024-12-05 12:14:10.747037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.705 qpair failed and we were unable to recover it.
00:30:36.705 [2024-12-05 12:14:10.747281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.705 [2024-12-05 12:14:10.747313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.705 qpair failed and we were unable to recover it.
00:30:36.705 [2024-12-05 12:14:10.747436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.705 [2024-12-05 12:14:10.747468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.705 qpair failed and we were unable to recover it.
00:30:36.705 [2024-12-05 12:14:10.747666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.705 [2024-12-05 12:14:10.747697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.705 qpair failed and we were unable to recover it.
00:30:36.705 [2024-12-05 12:14:10.747906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.705 [2024-12-05 12:14:10.747938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.705 qpair failed and we were unable to recover it.
00:30:36.705 [2024-12-05 12:14:10.748109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.705 [2024-12-05 12:14:10.748140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.705 qpair failed and we were unable to recover it.
00:30:36.705 [2024-12-05 12:14:10.748331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.705 [2024-12-05 12:14:10.748363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.705 qpair failed and we were unable to recover it.
00:30:36.705 [2024-12-05 12:14:10.748478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.705 [2024-12-05 12:14:10.748510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.705 qpair failed and we were unable to recover it.
00:30:36.705 [2024-12-05 12:14:10.748697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.705 [2024-12-05 12:14:10.748735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.705 qpair failed and we were unable to recover it.
00:30:36.705 [2024-12-05 12:14:10.748985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.705 [2024-12-05 12:14:10.749016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.705 qpair failed and we were unable to recover it.
00:30:36.705 [2024-12-05 12:14:10.749141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.705 [2024-12-05 12:14:10.749173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.705 qpair failed and we were unable to recover it.
00:30:36.705 [2024-12-05 12:14:10.749366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.705 [2024-12-05 12:14:10.749411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.705 qpair failed and we were unable to recover it.
00:30:36.705 [2024-12-05 12:14:10.749585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.705 [2024-12-05 12:14:10.749617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.705 qpair failed and we were unable to recover it.
00:30:36.705 [2024-12-05 12:14:10.749752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.705 [2024-12-05 12:14:10.749784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.705 qpair failed and we were unable to recover it.
00:30:36.705 [2024-12-05 12:14:10.749951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.705 [2024-12-05 12:14:10.749982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.705 qpair failed and we were unable to recover it.
00:30:36.705 [2024-12-05 12:14:10.750276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.705 [2024-12-05 12:14:10.750308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.705 qpair failed and we were unable to recover it.
00:30:36.705 [2024-12-05 12:14:10.750555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.705 [2024-12-05 12:14:10.750588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.705 qpair failed and we were unable to recover it.
00:30:36.705 [2024-12-05 12:14:10.750697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.706 [2024-12-05 12:14:10.750728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.706 qpair failed and we were unable to recover it.
00:30:36.706 [2024-12-05 12:14:10.750944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.706 [2024-12-05 12:14:10.750976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.706 qpair failed and we were unable to recover it.
00:30:36.706 [2024-12-05 12:14:10.751144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.706 [2024-12-05 12:14:10.751176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.706 qpair failed and we were unable to recover it.
00:30:36.706 [2024-12-05 12:14:10.751299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.706 [2024-12-05 12:14:10.751330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.706 qpair failed and we were unable to recover it.
00:30:36.706 [2024-12-05 12:14:10.751583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.706 [2024-12-05 12:14:10.751616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.706 qpair failed and we were unable to recover it.
00:30:36.706 [2024-12-05 12:14:10.751750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.706 [2024-12-05 12:14:10.751782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.706 qpair failed and we were unable to recover it.
00:30:36.706 [2024-12-05 12:14:10.751963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.706 [2024-12-05 12:14:10.751994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.706 qpair failed and we were unable to recover it.
00:30:36.706 [2024-12-05 12:14:10.752186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.706 [2024-12-05 12:14:10.752218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.706 qpair failed and we were unable to recover it.
00:30:36.706 [2024-12-05 12:14:10.752461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.706 [2024-12-05 12:14:10.752494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.706 qpair failed and we were unable to recover it.
00:30:36.706 [2024-12-05 12:14:10.752598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.706 [2024-12-05 12:14:10.752629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.706 qpair failed and we were unable to recover it.
00:30:36.706 [2024-12-05 12:14:10.752891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.706 [2024-12-05 12:14:10.752922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.706 qpair failed and we were unable to recover it.
00:30:36.706 [2024-12-05 12:14:10.753105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.706 [2024-12-05 12:14:10.753138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.706 qpair failed and we were unable to recover it.
00:30:36.706 [2024-12-05 12:14:10.753334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.706 [2024-12-05 12:14:10.753366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.706 qpair failed and we were unable to recover it.
00:30:36.706 [2024-12-05 12:14:10.753502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.706 [2024-12-05 12:14:10.753535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.706 qpair failed and we were unable to recover it.
00:30:36.706 [2024-12-05 12:14:10.753645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.706 [2024-12-05 12:14:10.753678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.706 qpair failed and we were unable to recover it.
00:30:36.706 [2024-12-05 12:14:10.753784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.706 [2024-12-05 12:14:10.753814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.706 qpair failed and we were unable to recover it.
00:30:36.706 [2024-12-05 12:14:10.753938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.706 [2024-12-05 12:14:10.753970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.706 qpair failed and we were unable to recover it.
00:30:36.706 [2024-12-05 12:14:10.754181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.706 [2024-12-05 12:14:10.754212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.706 qpair failed and we were unable to recover it.
00:30:36.706 [2024-12-05 12:14:10.754472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.706 [2024-12-05 12:14:10.754510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.706 qpair failed and we were unable to recover it.
00:30:36.706 [2024-12-05 12:14:10.754705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.706 [2024-12-05 12:14:10.754736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.706 qpair failed and we were unable to recover it.
00:30:36.706 [2024-12-05 12:14:10.754858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.706 [2024-12-05 12:14:10.754888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.706 qpair failed and we were unable to recover it.
00:30:36.706 [2024-12-05 12:14:10.755076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.706 [2024-12-05 12:14:10.755108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.706 qpair failed and we were unable to recover it.
00:30:36.706 [2024-12-05 12:14:10.755223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.706 [2024-12-05 12:14:10.755255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.706 qpair failed and we were unable to recover it.
00:30:36.706 [2024-12-05 12:14:10.755438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.706 [2024-12-05 12:14:10.755469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.706 qpair failed and we were unable to recover it.
00:30:36.706 [2024-12-05 12:14:10.755646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.706 [2024-12-05 12:14:10.755677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.706 qpair failed and we were unable to recover it.
00:30:36.706 [2024-12-05 12:14:10.755862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.706 [2024-12-05 12:14:10.755893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.706 qpair failed and we were unable to recover it.
00:30:36.706 [2024-12-05 12:14:10.756061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.706 [2024-12-05 12:14:10.756093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.706 qpair failed and we were unable to recover it.
00:30:36.706 [2024-12-05 12:14:10.756260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.706 [2024-12-05 12:14:10.756292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.706 qpair failed and we were unable to recover it.
00:30:36.706 [2024-12-05 12:14:10.756471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.706 [2024-12-05 12:14:10.756504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.706 qpair failed and we were unable to recover it.
00:30:36.706 [2024-12-05 12:14:10.756675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.706 [2024-12-05 12:14:10.756707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.706 qpair failed and we were unable to recover it.
00:30:36.706 [2024-12-05 12:14:10.756878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.706 [2024-12-05 12:14:10.756909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.706 qpair failed and we were unable to recover it.
00:30:36.706 [2024-12-05 12:14:10.757112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.706 [2024-12-05 12:14:10.757145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.706 qpair failed and we were unable to recover it.
00:30:36.706 [2024-12-05 12:14:10.757317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.706 [2024-12-05 12:14:10.757349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.706 qpair failed and we were unable to recover it.
00:30:36.706 [2024-12-05 12:14:10.757465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.706 [2024-12-05 12:14:10.757497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.706 qpair failed and we were unable to recover it.
00:30:36.706 [2024-12-05 12:14:10.757606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.706 [2024-12-05 12:14:10.757637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.706 qpair failed and we were unable to recover it.
00:30:36.706 [2024-12-05 12:14:10.757810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.706 [2024-12-05 12:14:10.757840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.706 qpair failed and we were unable to recover it.
00:30:36.706 [2024-12-05 12:14:10.758034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.706 [2024-12-05 12:14:10.758067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.706 qpair failed and we were unable to recover it.
00:30:36.706 [2024-12-05 12:14:10.758202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.706 [2024-12-05 12:14:10.758234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.706 qpair failed and we were unable to recover it.
00:30:36.706 [2024-12-05 12:14:10.758443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.706 [2024-12-05 12:14:10.758474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.706 qpair failed and we were unable to recover it.
00:30:36.706 [2024-12-05 12:14:10.758575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.707 [2024-12-05 12:14:10.758607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.707 qpair failed and we were unable to recover it.
00:30:36.707 [2024-12-05 12:14:10.758788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.707 [2024-12-05 12:14:10.758820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.707 qpair failed and we were unable to recover it.
00:30:36.707 [2024-12-05 12:14:10.758938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.707 [2024-12-05 12:14:10.758970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.707 qpair failed and we were unable to recover it.
00:30:36.707 [2024-12-05 12:14:10.759072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.707 [2024-12-05 12:14:10.759104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.707 qpair failed and we were unable to recover it.
00:30:36.707 [2024-12-05 12:14:10.759241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.707 [2024-12-05 12:14:10.759273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.707 qpair failed and we were unable to recover it.
00:30:36.707 [2024-12-05 12:14:10.759472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.707 [2024-12-05 12:14:10.759504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.707 qpair failed and we were unable to recover it.
00:30:36.707 [2024-12-05 12:14:10.759744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.707 [2024-12-05 12:14:10.759775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.707 qpair failed and we were unable to recover it.
00:30:36.707 [2024-12-05 12:14:10.759916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.707 [2024-12-05 12:14:10.759947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.707 qpair failed and we were unable to recover it.
00:30:36.707 [2024-12-05 12:14:10.760118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.707 [2024-12-05 12:14:10.760150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.707 qpair failed and we were unable to recover it.
00:30:36.707 [2024-12-05 12:14:10.760335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.707 [2024-12-05 12:14:10.760378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.707 qpair failed and we were unable to recover it.
00:30:36.707 [2024-12-05 12:14:10.760515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.707 [2024-12-05 12:14:10.760547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.707 qpair failed and we were unable to recover it.
00:30:36.707 [2024-12-05 12:14:10.760723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.707 [2024-12-05 12:14:10.760754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.707 qpair failed and we were unable to recover it.
00:30:36.707 [2024-12-05 12:14:10.760875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.707 [2024-12-05 12:14:10.760907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.707 qpair failed and we were unable to recover it.
00:30:36.707 [2024-12-05 12:14:10.761077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.707 [2024-12-05 12:14:10.761109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.707 qpair failed and we were unable to recover it.
00:30:36.707 [2024-12-05 12:14:10.761278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.707 [2024-12-05 12:14:10.761310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.707 qpair failed and we were unable to recover it.
00:30:36.707 [2024-12-05 12:14:10.761574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.707 [2024-12-05 12:14:10.761608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.707 qpair failed and we were unable to recover it.
00:30:36.707 [2024-12-05 12:14:10.761710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.707 [2024-12-05 12:14:10.761741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.707 qpair failed and we were unable to recover it.
00:30:36.707 [2024-12-05 12:14:10.761935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.707 [2024-12-05 12:14:10.761967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.707 qpair failed and we were unable to recover it.
00:30:36.707 [2024-12-05 12:14:10.762158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.707 [2024-12-05 12:14:10.762190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.707 qpair failed and we were unable to recover it.
00:30:36.707 [2024-12-05 12:14:10.762384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.707 [2024-12-05 12:14:10.762417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.707 qpair failed and we were unable to recover it.
00:30:36.707 [2024-12-05 12:14:10.762626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.707 [2024-12-05 12:14:10.762664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.707 qpair failed and we were unable to recover it.
00:30:36.707 [2024-12-05 12:14:10.762772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.707 [2024-12-05 12:14:10.762804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.707 qpair failed and we were unable to recover it.
00:30:36.707 [2024-12-05 12:14:10.763073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.707 [2024-12-05 12:14:10.763106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.707 qpair failed and we were unable to recover it.
00:30:36.707 [2024-12-05 12:14:10.763238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.707 [2024-12-05 12:14:10.763270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.707 qpair failed and we were unable to recover it.
00:30:36.707 [2024-12-05 12:14:10.763533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.707 [2024-12-05 12:14:10.763566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.707 qpair failed and we were unable to recover it.
00:30:36.707 [2024-12-05 12:14:10.763690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.707 [2024-12-05 12:14:10.763721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.707 qpair failed and we were unable to recover it.
00:30:36.707 [2024-12-05 12:14:10.763827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.707 [2024-12-05 12:14:10.763859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.707 qpair failed and we were unable to recover it.
00:30:36.707 [2024-12-05 12:14:10.764027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.707 [2024-12-05 12:14:10.764058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.707 qpair failed and we were unable to recover it.
00:30:36.707 [2024-12-05 12:14:10.764230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.707 [2024-12-05 12:14:10.764262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.707 qpair failed and we were unable to recover it.
00:30:36.707 [2024-12-05 12:14:10.764380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.707 [2024-12-05 12:14:10.764413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.707 qpair failed and we were unable to recover it.
00:30:36.707 [2024-12-05 12:14:10.764600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.707 [2024-12-05 12:14:10.764632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.707 qpair failed and we were unable to recover it.
00:30:36.707 [2024-12-05 12:14:10.764735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.707 [2024-12-05 12:14:10.764766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.707 qpair failed and we were unable to recover it.
00:30:36.707 [2024-12-05 12:14:10.764948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.707 [2024-12-05 12:14:10.764981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.707 qpair failed and we were unable to recover it.
00:30:36.707 [2024-12-05 12:14:10.765244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.707 [2024-12-05 12:14:10.765277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.707 qpair failed and we were unable to recover it.
00:30:36.707 [2024-12-05 12:14:10.765527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.708 [2024-12-05 12:14:10.765560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.708 qpair failed and we were unable to recover it.
00:30:36.708 [2024-12-05 12:14:10.765732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.708 [2024-12-05 12:14:10.765764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.708 qpair failed and we were unable to recover it.
00:30:36.708 [2024-12-05 12:14:10.766031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.708 [2024-12-05 12:14:10.766062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.708 qpair failed and we were unable to recover it.
00:30:36.708 [2024-12-05 12:14:10.766238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.708 [2024-12-05 12:14:10.766271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.708 qpair failed and we were unable to recover it.
00:30:36.708 [2024-12-05 12:14:10.766464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.708 [2024-12-05 12:14:10.766498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.708 qpair failed and we were unable to recover it.
00:30:36.708 [2024-12-05 12:14:10.766737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.708 [2024-12-05 12:14:10.766769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.708 qpair failed and we were unable to recover it.
00:30:36.708 [2024-12-05 12:14:10.766956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.708 [2024-12-05 12:14:10.766989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.708 qpair failed and we were unable to recover it.
00:30:36.708 [2024-12-05 12:14:10.767114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.708 [2024-12-05 12:14:10.767146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.708 qpair failed and we were unable to recover it.
00:30:36.708 [2024-12-05 12:14:10.767411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.708 [2024-12-05 12:14:10.767445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.708 qpair failed and we were unable to recover it.
00:30:36.708 [2024-12-05 12:14:10.767565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.708 [2024-12-05 12:14:10.767597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.708 qpair failed and we were unable to recover it.
00:30:36.708 [2024-12-05 12:14:10.767729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.708 [2024-12-05 12:14:10.767760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.708 qpair failed and we were unable to recover it.
00:30:36.708 [2024-12-05 12:14:10.767943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.708 [2024-12-05 12:14:10.767974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.708 qpair failed and we were unable to recover it.
00:30:36.708 [2024-12-05 12:14:10.768080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.708 [2024-12-05 12:14:10.768111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.708 qpair failed and we were unable to recover it.
00:30:36.708 [2024-12-05 12:14:10.768294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.708 [2024-12-05 12:14:10.768334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.708 qpair failed and we were unable to recover it.
00:30:36.708 [2024-12-05 12:14:10.768486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.708 [2024-12-05 12:14:10.768519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.708 qpair failed and we were unable to recover it.
00:30:36.708 [2024-12-05 12:14:10.768691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.708 [2024-12-05 12:14:10.768723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.708 qpair failed and we were unable to recover it.
00:30:36.708 [2024-12-05 12:14:10.768921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.708 [2024-12-05 12:14:10.768954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.708 qpair failed and we were unable to recover it.
00:30:36.708 [2024-12-05 12:14:10.769072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.708 [2024-12-05 12:14:10.769105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.708 qpair failed and we were unable to recover it.
00:30:36.708 [2024-12-05 12:14:10.769305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.708 [2024-12-05 12:14:10.769338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.708 qpair failed and we were unable to recover it.
00:30:36.708 [2024-12-05 12:14:10.769609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.708 [2024-12-05 12:14:10.769643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.708 qpair failed and we were unable to recover it.
00:30:36.708 [2024-12-05 12:14:10.769818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.708 [2024-12-05 12:14:10.769849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.708 qpair failed and we were unable to recover it.
00:30:36.708 [2024-12-05 12:14:10.770019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.708 [2024-12-05 12:14:10.770051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.708 qpair failed and we were unable to recover it.
00:30:36.708 [2024-12-05 12:14:10.770239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.708 [2024-12-05 12:14:10.770270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.708 qpair failed and we were unable to recover it.
00:30:36.708 [2024-12-05 12:14:10.770403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.708 [2024-12-05 12:14:10.770436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.708 qpair failed and we were unable to recover it.
00:30:36.708 [2024-12-05 12:14:10.770555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.708 [2024-12-05 12:14:10.770587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.708 qpair failed and we were unable to recover it.
00:30:36.708 [2024-12-05 12:14:10.770867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.708 [2024-12-05 12:14:10.770899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.708 qpair failed and we were unable to recover it.
00:30:36.708 [2024-12-05 12:14:10.771160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.708 [2024-12-05 12:14:10.771192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4cbe0 with addr=10.0.0.2, port=4420
00:30:36.708 qpair failed and we were unable to recover it.
00:30:36.708 [2024-12-05 12:14:10.771384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.708 [2024-12-05 12:14:10.771426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.708 qpair failed and we were unable to recover it.
00:30:36.708 [2024-12-05 12:14:10.771548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.708 [2024-12-05 12:14:10.771580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.708 qpair failed and we were unable to recover it.
00:30:36.708 [2024-12-05 12:14:10.771752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.708 [2024-12-05 12:14:10.771784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.708 qpair failed and we were unable to recover it.
00:30:36.708 [2024-12-05 12:14:10.771963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.708 [2024-12-05 12:14:10.771995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.708 qpair failed and we were unable to recover it.
00:30:36.708 [2024-12-05 12:14:10.772182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.708 [2024-12-05 12:14:10.772214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.708 qpair failed and we were unable to recover it.
00:30:36.708 [2024-12-05 12:14:10.772387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.708 [2024-12-05 12:14:10.772424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.708 qpair failed and we were unable to recover it.
00:30:36.708 [2024-12-05 12:14:10.772608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.708 [2024-12-05 12:14:10.772639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.708 qpair failed and we were unable to recover it.
00:30:36.708 Malloc0
00:30:36.708 [2024-12-05 12:14:10.772824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.708 [2024-12-05 12:14:10.772865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.708 qpair failed and we were unable to recover it.
00:30:36.708 [2024-12-05 12:14:10.773111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.708 [2024-12-05 12:14:10.773143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.708 qpair failed and we were unable to recover it.
00:30:36.708 [2024-12-05 12:14:10.773240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.708 [2024-12-05 12:14:10.773272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.708 qpair failed and we were unable to recover it.
00:30:36.708 [2024-12-05 12:14:10.773402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.708 [2024-12-05 12:14:10.773436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.708 12:14:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:36.708 qpair failed and we were unable to recover it.
00:30:36.709 [2024-12-05 12:14:10.773552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.709 [2024-12-05 12:14:10.773584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.709 qpair failed and we were unable to recover it.
00:30:36.709 [2024-12-05 12:14:10.773776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.709 [2024-12-05 12:14:10.773808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.709 12:14:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:30:36.709 qpair failed and we were unable to recover it.
00:30:36.709 [2024-12-05 12:14:10.774001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.709 [2024-12-05 12:14:10.774034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.709 qpair failed and we were unable to recover it.
00:30:36.709 12:14:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:36.709 [2024-12-05 12:14:10.774143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.709 [2024-12-05 12:14:10.774175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.709 qpair failed and we were unable to recover it.
00:30:36.709 [2024-12-05 12:14:10.774305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.709 [2024-12-05 12:14:10.774337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.709 qpair failed and we were unable to recover it.
00:30:36.709 12:14:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:36.709 [2024-12-05 12:14:10.774469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.709 [2024-12-05 12:14:10.774503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.709 qpair failed and we were unable to recover it.
00:30:36.709 [2024-12-05 12:14:10.774700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.709 [2024-12-05 12:14:10.774732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.709 qpair failed and we were unable to recover it.
00:30:36.709 [2024-12-05 12:14:10.774937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.709 [2024-12-05 12:14:10.774970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.709 qpair failed and we were unable to recover it.
00:30:36.709 [2024-12-05 12:14:10.775154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.709 [2024-12-05 12:14:10.775186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.709 qpair failed and we were unable to recover it.
00:30:36.709 [2024-12-05 12:14:10.775383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.709 [2024-12-05 12:14:10.775417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.709 qpair failed and we were unable to recover it.
00:30:36.709 [2024-12-05 12:14:10.775592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.709 [2024-12-05 12:14:10.775624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.709 qpair failed and we were unable to recover it.
00:30:36.709 [2024-12-05 12:14:10.775794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.709 [2024-12-05 12:14:10.775826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.709 qpair failed and we were unable to recover it.
00:30:36.709 [2024-12-05 12:14:10.776061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.709 [2024-12-05 12:14:10.776093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.709 qpair failed and we were unable to recover it.
00:30:36.709 [2024-12-05 12:14:10.776197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.709 [2024-12-05 12:14:10.776229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.709 qpair failed and we were unable to recover it.
00:30:36.709 [2024-12-05 12:14:10.776411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.709 [2024-12-05 12:14:10.776466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.709 qpair failed and we were unable to recover it.
00:30:36.709 [2024-12-05 12:14:10.776582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.709 [2024-12-05 12:14:10.776614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.709 qpair failed and we were unable to recover it.
00:30:36.709 [2024-12-05 12:14:10.776786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.709 [2024-12-05 12:14:10.776818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.709 qpair failed and we were unable to recover it.
00:30:36.709 [2024-12-05 12:14:10.776998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.709 [2024-12-05 12:14:10.777029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.709 qpair failed and we were unable to recover it.
00:30:36.709 [2024-12-05 12:14:10.777200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.709 [2024-12-05 12:14:10.777231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.709 qpair failed and we were unable to recover it.
00:30:36.709 [2024-12-05 12:14:10.777495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.709 [2024-12-05 12:14:10.777528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.709 qpair failed and we were unable to recover it.
00:30:36.709 [2024-12-05 12:14:10.777709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.709 [2024-12-05 12:14:10.777740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.709 qpair failed and we were unable to recover it.
00:30:36.709 [2024-12-05 12:14:10.777846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.709 [2024-12-05 12:14:10.777877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.709 qpair failed and we were unable to recover it.
00:30:36.709 [2024-12-05 12:14:10.778046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.709 [2024-12-05 12:14:10.778078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.709 qpair failed and we were unable to recover it.
00:30:36.709 [2024-12-05 12:14:10.778200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.709 [2024-12-05 12:14:10.778232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.709 qpair failed and we were unable to recover it.
00:30:36.709 [2024-12-05 12:14:10.778423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.709 [2024-12-05 12:14:10.778455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.709 qpair failed and we were unable to recover it.
00:30:36.709 [2024-12-05 12:14:10.778646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.709 [2024-12-05 12:14:10.778678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.709 qpair failed and we were unable to recover it.
00:30:36.709 [2024-12-05 12:14:10.778885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.709 [2024-12-05 12:14:10.778917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.709 qpair failed and we were unable to recover it.
00:30:36.709 [2024-12-05 12:14:10.779121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.709 [2024-12-05 12:14:10.779159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.709 qpair failed and we were unable to recover it.
00:30:36.709 [2024-12-05 12:14:10.779395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.709 [2024-12-05 12:14:10.779427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.709 qpair failed and we were unable to recover it.
00:30:36.709 [2024-12-05 12:14:10.779608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.709 [2024-12-05 12:14:10.779640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.709 qpair failed and we were unable to recover it.
00:30:36.709 [2024-12-05 12:14:10.779813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.709 [2024-12-05 12:14:10.779844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.709 qpair failed and we were unable to recover it.
00:30:36.709 [2024-12-05 12:14:10.780129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.709 [2024-12-05 12:14:10.780161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.709 qpair failed and we were unable to recover it.
00:30:36.709 [2024-12-05 12:14:10.780424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.709 [2024-12-05 12:14:10.780426] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:36.709 [2024-12-05 12:14:10.780458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.709 qpair failed and we were unable to recover it.
00:30:36.709 [2024-12-05 12:14:10.780644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.709 [2024-12-05 12:14:10.780676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.709 qpair failed and we were unable to recover it.
00:30:36.709 [2024-12-05 12:14:10.780807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.709 [2024-12-05 12:14:10.780839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.709 qpair failed and we were unable to recover it.
00:30:36.709 [2024-12-05 12:14:10.781016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.709 [2024-12-05 12:14:10.781047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.709 qpair failed and we were unable to recover it.
00:30:36.709 [2024-12-05 12:14:10.781328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.710 [2024-12-05 12:14:10.781361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.710 qpair failed and we were unable to recover it.
00:30:36.710 [2024-12-05 12:14:10.781591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.710 [2024-12-05 12:14:10.781623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.710 qpair failed and we were unable to recover it.
00:30:36.710 [2024-12-05 12:14:10.781884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.710 [2024-12-05 12:14:10.781916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.710 qpair failed and we were unable to recover it.
00:30:36.710 [2024-12-05 12:14:10.782096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.710 [2024-12-05 12:14:10.782129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.710 qpair failed and we were unable to recover it.
00:30:36.710 [2024-12-05 12:14:10.782231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.710 [2024-12-05 12:14:10.782268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.710 qpair failed and we were unable to recover it.
00:30:36.710 [2024-12-05 12:14:10.782466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.710 [2024-12-05 12:14:10.782499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.710 qpair failed and we were unable to recover it.
00:30:36.710 [2024-12-05 12:14:10.782681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.710 [2024-12-05 12:14:10.782714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.710 qpair failed and we were unable to recover it.
00:30:36.710 [2024-12-05 12:14:10.782852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.710 [2024-12-05 12:14:10.782883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.710 qpair failed and we were unable to recover it.
00:30:36.710 [2024-12-05 12:14:10.783054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.710 [2024-12-05 12:14:10.783086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.710 qpair failed and we were unable to recover it.
00:30:36.710 [2024-12-05 12:14:10.783338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.710 [2024-12-05 12:14:10.783378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.710 qpair failed and we were unable to recover it.
00:30:36.710 [2024-12-05 12:14:10.783558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.710 [2024-12-05 12:14:10.783590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.710 qpair failed and we were unable to recover it.
00:30:36.710 [2024-12-05 12:14:10.783711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.710 [2024-12-05 12:14:10.783743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.710 qpair failed and we were unable to recover it.
00:30:36.710 [2024-12-05 12:14:10.783874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.710 [2024-12-05 12:14:10.783905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.710 qpair failed and we were unable to recover it.
00:30:36.710 [2024-12-05 12:14:10.784088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.710 [2024-12-05 12:14:10.784120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.710 qpair failed and we were unable to recover it.
00:30:36.710 [2024-12-05 12:14:10.784310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.710 [2024-12-05 12:14:10.784342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.710 qpair failed and we were unable to recover it.
00:30:36.710 [2024-12-05 12:14:10.784542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.710 [2024-12-05 12:14:10.784573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.710 qpair failed and we were unable to recover it.
00:30:36.710 [2024-12-05 12:14:10.784704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.710 [2024-12-05 12:14:10.784737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.710 qpair failed and we were unable to recover it.
00:30:36.710 [2024-12-05 12:14:10.784867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.710 [2024-12-05 12:14:10.784899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.710 qpair failed and we were unable to recover it.
00:30:36.710 [2024-12-05 12:14:10.785112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.710 [2024-12-05 12:14:10.785144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.710 qpair failed and we were unable to recover it.
00:30:36.710 [2024-12-05 12:14:10.785407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.710 [2024-12-05 12:14:10.785441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.710 qpair failed and we were unable to recover it.
00:30:36.710 [2024-12-05 12:14:10.785554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.710 [2024-12-05 12:14:10.785587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.710 qpair failed and we were unable to recover it.
00:30:36.710 [2024-12-05 12:14:10.785771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.710 12:14:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:36.710 [2024-12-05 12:14:10.785804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.710 qpair failed and we were unable to recover it.
00:30:36.710 [2024-12-05 12:14:10.785975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.710 [2024-12-05 12:14:10.786007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.710 qpair failed and we were unable to recover it.
00:30:36.710 12:14:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:36.710 [2024-12-05 12:14:10.786186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.710 [2024-12-05 12:14:10.786218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.710 qpair failed and we were unable to recover it.
00:30:36.710 [2024-12-05 12:14:10.786338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.710 [2024-12-05 12:14:10.786379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.710 qpair failed and we were unable to recover it.
00:30:36.710 12:14:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:36.710 [2024-12-05 12:14:10.786625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.710 [2024-12-05 12:14:10.786658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.710 qpair failed and we were unable to recover it.
00:30:36.710 12:14:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:36.710 [2024-12-05 12:14:10.786835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.710 [2024-12-05 12:14:10.786867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.710 qpair failed and we were unable to recover it.
00:30:36.710 [2024-12-05 12:14:10.787001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.710 [2024-12-05 12:14:10.787033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.710 qpair failed and we were unable to recover it.
00:30:36.710 [2024-12-05 12:14:10.787295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.710 [2024-12-05 12:14:10.787327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.710 qpair failed and we were unable to recover it.
00:30:36.710 [2024-12-05 12:14:10.787516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.710 [2024-12-05 12:14:10.787549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.710 qpair failed and we were unable to recover it.
00:30:36.710 [2024-12-05 12:14:10.787725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.710 [2024-12-05 12:14:10.787756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.710 qpair failed and we were unable to recover it.
00:30:36.710 [2024-12-05 12:14:10.787927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.710 [2024-12-05 12:14:10.787959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.710 qpair failed and we were unable to recover it.
00:30:36.710 [2024-12-05 12:14:10.788136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.710 [2024-12-05 12:14:10.788168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.710 qpair failed and we were unable to recover it.
00:30:36.710 [2024-12-05 12:14:10.788358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.710 [2024-12-05 12:14:10.788400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.710 qpair failed and we were unable to recover it.
00:30:36.710 [2024-12-05 12:14:10.788593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.710 [2024-12-05 12:14:10.788625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.710 qpair failed and we were unable to recover it.
00:30:36.710 [2024-12-05 12:14:10.788862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.710 [2024-12-05 12:14:10.788894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.710 qpair failed and we were unable to recover it.
00:30:36.710 [2024-12-05 12:14:10.789086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.710 [2024-12-05 12:14:10.789117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420
00:30:36.710 qpair failed and we were unable to recover it.
00:30:36.711 [2024-12-05 12:14:10.789242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.711 [2024-12-05 12:14:10.789274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.711 qpair failed and we were unable to recover it. 00:30:36.711 [2024-12-05 12:14:10.789410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.711 [2024-12-05 12:14:10.789443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.711 qpair failed and we were unable to recover it. 00:30:36.711 [2024-12-05 12:14:10.789730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.711 [2024-12-05 12:14:10.789761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.711 qpair failed and we were unable to recover it. 00:30:36.711 [2024-12-05 12:14:10.789967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.711 [2024-12-05 12:14:10.789999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.711 qpair failed and we were unable to recover it. 00:30:36.711 [2024-12-05 12:14:10.790183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.711 [2024-12-05 12:14:10.790215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.711 qpair failed and we were unable to recover it. 
00:30:36.711 [2024-12-05 12:14:10.790336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.711 [2024-12-05 12:14:10.790383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.711 qpair failed and we were unable to recover it. 00:30:36.711 [2024-12-05 12:14:10.790520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.711 [2024-12-05 12:14:10.790552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.711 qpair failed and we were unable to recover it. 00:30:36.711 [2024-12-05 12:14:10.790666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.711 [2024-12-05 12:14:10.790698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.711 qpair failed and we were unable to recover it. 00:30:36.711 [2024-12-05 12:14:10.790800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.711 [2024-12-05 12:14:10.790832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.711 qpair failed and we were unable to recover it. 00:30:36.711 [2024-12-05 12:14:10.791098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.711 [2024-12-05 12:14:10.791130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.711 qpair failed and we were unable to recover it. 
00:30:36.711 [2024-12-05 12:14:10.791391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.711 [2024-12-05 12:14:10.791424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.711 qpair failed and we were unable to recover it. 00:30:36.711 [2024-12-05 12:14:10.791613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.711 [2024-12-05 12:14:10.791645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.711 qpair failed and we were unable to recover it. 00:30:36.711 [2024-12-05 12:14:10.791776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.711 [2024-12-05 12:14:10.791807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.711 qpair failed and we were unable to recover it. 00:30:36.711 [2024-12-05 12:14:10.791909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.711 [2024-12-05 12:14:10.791940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.711 qpair failed and we were unable to recover it. 00:30:36.711 [2024-12-05 12:14:10.792142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.711 [2024-12-05 12:14:10.792174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.711 qpair failed and we were unable to recover it. 
00:30:36.711 [2024-12-05 12:14:10.792281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.711 [2024-12-05 12:14:10.792312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.711 qpair failed and we were unable to recover it. 00:30:36.711 [2024-12-05 12:14:10.792557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.711 [2024-12-05 12:14:10.792590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.711 qpair failed and we were unable to recover it. 00:30:36.711 [2024-12-05 12:14:10.792761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.711 [2024-12-05 12:14:10.792793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.711 qpair failed and we were unable to recover it. 00:30:36.711 [2024-12-05 12:14:10.792979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.711 [2024-12-05 12:14:10.793010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.711 qpair failed and we were unable to recover it. 00:30:36.711 [2024-12-05 12:14:10.793222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.711 [2024-12-05 12:14:10.793255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.711 qpair failed and we were unable to recover it. 
00:30:36.711 [2024-12-05 12:14:10.793445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.711 [2024-12-05 12:14:10.793479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.711 qpair failed and we were unable to recover it. 00:30:36.711 [2024-12-05 12:14:10.793744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.711 [2024-12-05 12:14:10.793776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.711 qpair failed and we were unable to recover it. 00:30:36.711 12:14:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.711 [2024-12-05 12:14:10.794008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.711 [2024-12-05 12:14:10.794041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.711 qpair failed and we were unable to recover it. 00:30:36.711 12:14:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:36.711 [2024-12-05 12:14:10.794217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.711 [2024-12-05 12:14:10.794250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.711 qpair failed and we were unable to recover it. 
00:30:36.711 [2024-12-05 12:14:10.794383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.711 [2024-12-05 12:14:10.794416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.711 qpair failed and we were unable to recover it. 00:30:36.711 12:14:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.711 [2024-12-05 12:14:10.794522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.711 [2024-12-05 12:14:10.794554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.711 qpair failed and we were unable to recover it. 00:30:36.711 12:14:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:36.711 [2024-12-05 12:14:10.794750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.711 [2024-12-05 12:14:10.794782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.711 qpair failed and we were unable to recover it. 00:30:36.711 [2024-12-05 12:14:10.794981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.711 [2024-12-05 12:14:10.795013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.711 qpair failed and we were unable to recover it. 
00:30:36.711 [2024-12-05 12:14:10.795224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.711 [2024-12-05 12:14:10.795256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.711 qpair failed and we were unable to recover it. 00:30:36.711 [2024-12-05 12:14:10.795450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.711 [2024-12-05 12:14:10.795483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.711 qpair failed and we were unable to recover it. 00:30:36.711 [2024-12-05 12:14:10.795747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.711 [2024-12-05 12:14:10.795785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.711 qpair failed and we were unable to recover it. 00:30:36.711 [2024-12-05 12:14:10.796025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.711 [2024-12-05 12:14:10.796057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.711 qpair failed and we were unable to recover it. 00:30:36.711 [2024-12-05 12:14:10.796193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.711 [2024-12-05 12:14:10.796224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.711 qpair failed and we were unable to recover it. 
00:30:36.711 [2024-12-05 12:14:10.796408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.711 [2024-12-05 12:14:10.796441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.711 qpair failed and we were unable to recover it. 00:30:36.711 [2024-12-05 12:14:10.796548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.712 [2024-12-05 12:14:10.796580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.712 qpair failed and we were unable to recover it. 00:30:36.712 [2024-12-05 12:14:10.796866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.712 [2024-12-05 12:14:10.796897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.712 qpair failed and we were unable to recover it. 00:30:36.712 [2024-12-05 12:14:10.797079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.712 [2024-12-05 12:14:10.797110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.712 qpair failed and we were unable to recover it. 00:30:36.712 [2024-12-05 12:14:10.797401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.712 [2024-12-05 12:14:10.797435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.712 qpair failed and we were unable to recover it. 
00:30:36.712 [2024-12-05 12:14:10.797613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.712 [2024-12-05 12:14:10.797645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.712 qpair failed and we were unable to recover it. 00:30:36.712 [2024-12-05 12:14:10.797834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.712 [2024-12-05 12:14:10.797867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.712 qpair failed and we were unable to recover it. 00:30:36.712 [2024-12-05 12:14:10.798095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.712 [2024-12-05 12:14:10.798127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.712 qpair failed and we were unable to recover it. 00:30:36.712 [2024-12-05 12:14:10.798325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.712 [2024-12-05 12:14:10.798357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.712 qpair failed and we were unable to recover it. 00:30:36.712 [2024-12-05 12:14:10.798629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.712 [2024-12-05 12:14:10.798661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.712 qpair failed and we were unable to recover it. 
00:30:36.712 [2024-12-05 12:14:10.798770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.712 [2024-12-05 12:14:10.798802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.712 qpair failed and we were unable to recover it. 00:30:36.712 [2024-12-05 12:14:10.799027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.712 [2024-12-05 12:14:10.799059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.712 qpair failed and we were unable to recover it. 00:30:36.712 [2024-12-05 12:14:10.799244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.712 [2024-12-05 12:14:10.799276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.712 qpair failed and we were unable to recover it. 00:30:36.712 [2024-12-05 12:14:10.799402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.712 [2024-12-05 12:14:10.799435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.712 qpair failed and we were unable to recover it. 00:30:36.712 [2024-12-05 12:14:10.799640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.712 [2024-12-05 12:14:10.799671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.712 qpair failed and we were unable to recover it. 
00:30:36.712 [2024-12-05 12:14:10.799791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.712 [2024-12-05 12:14:10.799823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.712 qpair failed and we were unable to recover it. 00:30:36.712 [2024-12-05 12:14:10.799966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.712 [2024-12-05 12:14:10.799998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.712 qpair failed and we were unable to recover it. 00:30:36.712 [2024-12-05 12:14:10.800259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.712 [2024-12-05 12:14:10.800291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.712 qpair failed and we were unable to recover it. 00:30:36.712 [2024-12-05 12:14:10.800529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.712 [2024-12-05 12:14:10.800562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.712 qpair failed and we were unable to recover it. 00:30:36.712 [2024-12-05 12:14:10.800682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.712 [2024-12-05 12:14:10.800714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.712 qpair failed and we were unable to recover it. 
00:30:36.712 [2024-12-05 12:14:10.800902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.712 [2024-12-05 12:14:10.800933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.712 qpair failed and we were unable to recover it. 00:30:36.712 [2024-12-05 12:14:10.801053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.712 [2024-12-05 12:14:10.801086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.712 qpair failed and we were unable to recover it. 00:30:36.712 [2024-12-05 12:14:10.801296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.712 [2024-12-05 12:14:10.801328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.712 qpair failed and we were unable to recover it. 00:30:36.712 [2024-12-05 12:14:10.801518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.712 [2024-12-05 12:14:10.801551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.712 qpair failed and we were unable to recover it. 00:30:36.712 [2024-12-05 12:14:10.801766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.712 [2024-12-05 12:14:10.801798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.712 12:14:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.712 qpair failed and we were unable to recover it. 
00:30:36.712 [2024-12-05 12:14:10.801994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.712 [2024-12-05 12:14:10.802027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.712 qpair failed and we were unable to recover it. 00:30:36.712 [2024-12-05 12:14:10.802144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.712 12:14:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:36.712 [2024-12-05 12:14:10.802177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.712 qpair failed and we were unable to recover it. 00:30:36.712 [2024-12-05 12:14:10.802439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.712 [2024-12-05 12:14:10.802473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.712 qpair failed and we were unable to recover it. 00:30:36.712 12:14:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.712 [2024-12-05 12:14:10.802656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.712 [2024-12-05 12:14:10.802689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.712 qpair failed and we were unable to recover it. 
00:30:36.712 12:14:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:36.712 [2024-12-05 12:14:10.802877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.712 [2024-12-05 12:14:10.802909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.712 qpair failed and we were unable to recover it. 00:30:36.712 [2024-12-05 12:14:10.803038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.712 [2024-12-05 12:14:10.803070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.712 qpair failed and we were unable to recover it. 00:30:36.712 [2024-12-05 12:14:10.803307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.712 [2024-12-05 12:14:10.803339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc424000b90 with addr=10.0.0.2, port=4420 00:30:36.712 qpair failed and we were unable to recover it. 00:30:36.712 [2024-12-05 12:14:10.803709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.712 [2024-12-05 12:14:10.803780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.712 qpair failed and we were unable to recover it. 00:30:36.712 [2024-12-05 12:14:10.803927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.712 [2024-12-05 12:14:10.803964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.712 qpair failed and we were unable to recover it. 
00:30:36.712 [2024-12-05 12:14:10.804092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.712 [2024-12-05 12:14:10.804124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.712 qpair failed and we were unable to recover it. 00:30:36.713 [2024-12-05 12:14:10.804247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.713 [2024-12-05 12:14:10.804289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.713 qpair failed and we were unable to recover it. 00:30:36.713 [2024-12-05 12:14:10.804510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.713 [2024-12-05 12:14:10.804544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.713 qpair failed and we were unable to recover it. 00:30:36.713 [2024-12-05 12:14:10.804658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.713 [2024-12-05 12:14:10.804691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.713 qpair failed and we were unable to recover it. 00:30:36.713 [2024-12-05 12:14:10.804877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.713 [2024-12-05 12:14:10.804909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.713 qpair failed and we were unable to recover it. 
00:30:36.713 [2024-12-05 12:14:10.805183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.713 [2024-12-05 12:14:10.805214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.713 qpair failed and we were unable to recover it. 00:30:36.713 [2024-12-05 12:14:10.805335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.713 [2024-12-05 12:14:10.805378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc42c000b90 with addr=10.0.0.2, port=4420 00:30:36.713 qpair failed and we were unable to recover it. 00:30:36.713 [2024-12-05 12:14:10.805462] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:36.713 12:14:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.713 12:14:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:36.713 12:14:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.713 12:14:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:36.713 [2024-12-05 12:14:10.811107] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.713 [2024-12-05 12:14:10.811244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.713 [2024-12-05 12:14:10.811288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.713 [2024-12-05 12:14:10.811311] 
nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.713 [2024-12-05 12:14:10.811333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:36.713 [2024-12-05 12:14:10.811395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.713 qpair failed and we were unable to recover it. 00:30:36.713 12:14:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.713 12:14:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 223019 00:30:36.713 [2024-12-05 12:14:10.820997] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.713 [2024-12-05 12:14:10.821074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.713 [2024-12-05 12:14:10.821102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.713 [2024-12-05 12:14:10.821123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.713 [2024-12-05 12:14:10.821137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:36.713 [2024-12-05 12:14:10.821170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.713 qpair failed and we were unable to recover it. 
00:30:36.713 [2024-12-05 12:14:10.831011] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.713 [2024-12-05 12:14:10.831074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.713 [2024-12-05 12:14:10.831094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.713 [2024-12-05 12:14:10.831103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.713 [2024-12-05 12:14:10.831112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:36.713 [2024-12-05 12:14:10.831133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.713 qpair failed and we were unable to recover it. 
00:30:36.713 [2024-12-05 12:14:10.841044] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.713 [2024-12-05 12:14:10.841100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.713 [2024-12-05 12:14:10.841114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.713 [2024-12-05 12:14:10.841121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.713 [2024-12-05 12:14:10.841127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:36.713 [2024-12-05 12:14:10.841141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.713 qpair failed and we were unable to recover it. 
00:30:36.713 [2024-12-05 12:14:10.851001] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.713 [2024-12-05 12:14:10.851057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.713 [2024-12-05 12:14:10.851070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.713 [2024-12-05 12:14:10.851077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.713 [2024-12-05 12:14:10.851083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:36.713 [2024-12-05 12:14:10.851097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:36.713 qpair failed and we were unable to recover it.
00:30:36.973 [2024-12-05 12:14:10.860997] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.973 [2024-12-05 12:14:10.861050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.973 [2024-12-05 12:14:10.861064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.973 [2024-12-05 12:14:10.861071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.973 [2024-12-05 12:14:10.861076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:36.973 [2024-12-05 12:14:10.861094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:36.973 qpair failed and we were unable to recover it.
00:30:36.973 [2024-12-05 12:14:10.871022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.973 [2024-12-05 12:14:10.871078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.973 [2024-12-05 12:14:10.871091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.973 [2024-12-05 12:14:10.871098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.973 [2024-12-05 12:14:10.871104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:36.973 [2024-12-05 12:14:10.871118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:36.973 qpair failed and we were unable to recover it.
00:30:36.973 [2024-12-05 12:14:10.881115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.973 [2024-12-05 12:14:10.881216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.973 [2024-12-05 12:14:10.881230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.973 [2024-12-05 12:14:10.881236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.973 [2024-12-05 12:14:10.881242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:36.973 [2024-12-05 12:14:10.881256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:36.973 qpair failed and we were unable to recover it.
00:30:36.973 [2024-12-05 12:14:10.891106] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.973 [2024-12-05 12:14:10.891159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.973 [2024-12-05 12:14:10.891173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.973 [2024-12-05 12:14:10.891179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.973 [2024-12-05 12:14:10.891185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:36.973 [2024-12-05 12:14:10.891199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:36.973 qpair failed and we were unable to recover it.
00:30:36.973 [2024-12-05 12:14:10.901121] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.973 [2024-12-05 12:14:10.901174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.973 [2024-12-05 12:14:10.901188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.973 [2024-12-05 12:14:10.901194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.973 [2024-12-05 12:14:10.901200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:36.973 [2024-12-05 12:14:10.901214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:36.973 qpair failed and we were unable to recover it.
00:30:36.973 [2024-12-05 12:14:10.911158] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.973 [2024-12-05 12:14:10.911215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.973 [2024-12-05 12:14:10.911228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.973 [2024-12-05 12:14:10.911234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.973 [2024-12-05 12:14:10.911240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:36.973 [2024-12-05 12:14:10.911254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:36.973 qpair failed and we were unable to recover it.
00:30:36.973 [2024-12-05 12:14:10.921182] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.973 [2024-12-05 12:14:10.921259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.973 [2024-12-05 12:14:10.921273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.973 [2024-12-05 12:14:10.921280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.973 [2024-12-05 12:14:10.921286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:36.973 [2024-12-05 12:14:10.921300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:36.973 qpair failed and we were unable to recover it.
00:30:36.973 [2024-12-05 12:14:10.931224] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.973 [2024-12-05 12:14:10.931282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.973 [2024-12-05 12:14:10.931296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.973 [2024-12-05 12:14:10.931303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.973 [2024-12-05 12:14:10.931309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:36.973 [2024-12-05 12:14:10.931323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:36.973 qpair failed and we were unable to recover it.
00:30:36.973 [2024-12-05 12:14:10.941218] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.973 [2024-12-05 12:14:10.941296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.973 [2024-12-05 12:14:10.941312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.973 [2024-12-05 12:14:10.941319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.973 [2024-12-05 12:14:10.941325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:36.973 [2024-12-05 12:14:10.941339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:36.973 qpair failed and we were unable to recover it.
00:30:36.973 [2024-12-05 12:14:10.951243] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.973 [2024-12-05 12:14:10.951312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.973 [2024-12-05 12:14:10.951325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.973 [2024-12-05 12:14:10.951335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.973 [2024-12-05 12:14:10.951341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:36.973 [2024-12-05 12:14:10.951355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:36.973 qpair failed and we were unable to recover it.
00:30:36.973 [2024-12-05 12:14:10.961275] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.973 [2024-12-05 12:14:10.961332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.973 [2024-12-05 12:14:10.961346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.973 [2024-12-05 12:14:10.961354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.973 [2024-12-05 12:14:10.961360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:36.973 [2024-12-05 12:14:10.961379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:36.973 qpair failed and we were unable to recover it.
00:30:36.973 [2024-12-05 12:14:10.971401] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.973 [2024-12-05 12:14:10.971458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.973 [2024-12-05 12:14:10.971470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.973 [2024-12-05 12:14:10.971477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.973 [2024-12-05 12:14:10.971483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:36.973 [2024-12-05 12:14:10.971497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:36.973 qpair failed and we were unable to recover it.
00:30:36.973 [2024-12-05 12:14:10.981325] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.974 [2024-12-05 12:14:10.981401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.974 [2024-12-05 12:14:10.981415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.974 [2024-12-05 12:14:10.981422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.974 [2024-12-05 12:14:10.981428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:36.974 [2024-12-05 12:14:10.981442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:36.974 qpair failed and we were unable to recover it.
00:30:36.974 [2024-12-05 12:14:10.991373] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.974 [2024-12-05 12:14:10.991428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.974 [2024-12-05 12:14:10.991441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.974 [2024-12-05 12:14:10.991448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.974 [2024-12-05 12:14:10.991454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:36.974 [2024-12-05 12:14:10.991473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:36.974 qpair failed and we were unable to recover it.
00:30:36.974 [2024-12-05 12:14:11.001408] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.974 [2024-12-05 12:14:11.001465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.974 [2024-12-05 12:14:11.001478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.974 [2024-12-05 12:14:11.001485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.974 [2024-12-05 12:14:11.001491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:36.974 [2024-12-05 12:14:11.001506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:36.974 qpair failed and we were unable to recover it.
00:30:36.974 [2024-12-05 12:14:11.011426] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.974 [2024-12-05 12:14:11.011500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.974 [2024-12-05 12:14:11.011513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.974 [2024-12-05 12:14:11.011520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.974 [2024-12-05 12:14:11.011526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:36.974 [2024-12-05 12:14:11.011540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:36.974 qpair failed and we were unable to recover it.
00:30:36.974 [2024-12-05 12:14:11.021424] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.974 [2024-12-05 12:14:11.021479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.974 [2024-12-05 12:14:11.021493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.974 [2024-12-05 12:14:11.021499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.974 [2024-12-05 12:14:11.021505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:36.974 [2024-12-05 12:14:11.021520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:36.974 qpair failed and we were unable to recover it.
00:30:36.974 [2024-12-05 12:14:11.031487] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.974 [2024-12-05 12:14:11.031539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.974 [2024-12-05 12:14:11.031551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.974 [2024-12-05 12:14:11.031557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.974 [2024-12-05 12:14:11.031564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:36.974 [2024-12-05 12:14:11.031578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:36.974 qpair failed and we were unable to recover it.
00:30:36.974 [2024-12-05 12:14:11.041512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.974 [2024-12-05 12:14:11.041569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.974 [2024-12-05 12:14:11.041582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.974 [2024-12-05 12:14:11.041588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.974 [2024-12-05 12:14:11.041594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:36.974 [2024-12-05 12:14:11.041608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:36.974 qpair failed and we were unable to recover it.
00:30:36.974 [2024-12-05 12:14:11.051492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.974 [2024-12-05 12:14:11.051547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.974 [2024-12-05 12:14:11.051561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.974 [2024-12-05 12:14:11.051567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.974 [2024-12-05 12:14:11.051572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:36.974 [2024-12-05 12:14:11.051586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:36.974 qpair failed and we were unable to recover it.
00:30:36.974 [2024-12-05 12:14:11.061564] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.974 [2024-12-05 12:14:11.061618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.974 [2024-12-05 12:14:11.061631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.974 [2024-12-05 12:14:11.061637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.974 [2024-12-05 12:14:11.061643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:36.974 [2024-12-05 12:14:11.061657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:36.974 qpair failed and we were unable to recover it.
00:30:36.974 [2024-12-05 12:14:11.071630] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.974 [2024-12-05 12:14:11.071691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.974 [2024-12-05 12:14:11.071704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.974 [2024-12-05 12:14:11.071711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.974 [2024-12-05 12:14:11.071716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:36.974 [2024-12-05 12:14:11.071731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:36.974 qpair failed and we were unable to recover it.
00:30:36.974 [2024-12-05 12:14:11.081675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.974 [2024-12-05 12:14:11.081739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.974 [2024-12-05 12:14:11.081754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.974 [2024-12-05 12:14:11.081761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.974 [2024-12-05 12:14:11.081767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:36.974 [2024-12-05 12:14:11.081781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:36.974 qpair failed and we were unable to recover it.
00:30:36.974 [2024-12-05 12:14:11.091687] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.974 [2024-12-05 12:14:11.091760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.974 [2024-12-05 12:14:11.091774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.974 [2024-12-05 12:14:11.091780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.974 [2024-12-05 12:14:11.091786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:36.974 [2024-12-05 12:14:11.091801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:36.975 qpair failed and we were unable to recover it.
00:30:36.975 [2024-12-05 12:14:11.101736] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.975 [2024-12-05 12:14:11.101842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.975 [2024-12-05 12:14:11.101855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.975 [2024-12-05 12:14:11.101861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.975 [2024-12-05 12:14:11.101867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:36.975 [2024-12-05 12:14:11.101881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:36.975 qpair failed and we were unable to recover it.
00:30:36.975 [2024-12-05 12:14:11.111727] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.975 [2024-12-05 12:14:11.111777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.975 [2024-12-05 12:14:11.111790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.975 [2024-12-05 12:14:11.111797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.975 [2024-12-05 12:14:11.111803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:36.975 [2024-12-05 12:14:11.111817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:36.975 qpair failed and we were unable to recover it.
00:30:36.975 [2024-12-05 12:14:11.121742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.975 [2024-12-05 12:14:11.121823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.975 [2024-12-05 12:14:11.121835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.975 [2024-12-05 12:14:11.121842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.975 [2024-12-05 12:14:11.121851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:36.975 [2024-12-05 12:14:11.121865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:36.975 qpair failed and we were unable to recover it.
00:30:36.975 [2024-12-05 12:14:11.131783] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.975 [2024-12-05 12:14:11.131860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.975 [2024-12-05 12:14:11.131873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.975 [2024-12-05 12:14:11.131880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.975 [2024-12-05 12:14:11.131886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:36.975 [2024-12-05 12:14:11.131900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:36.975 qpair failed and we were unable to recover it.
00:30:36.975 [2024-12-05 12:14:11.141840] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.975 [2024-12-05 12:14:11.141901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.975 [2024-12-05 12:14:11.141914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.975 [2024-12-05 12:14:11.141920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.975 [2024-12-05 12:14:11.141926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:36.975 [2024-12-05 12:14:11.141940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:36.975 qpair failed and we were unable to recover it.
00:30:36.975 [2024-12-05 12:14:11.151830] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.975 [2024-12-05 12:14:11.151883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.975 [2024-12-05 12:14:11.151896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.975 [2024-12-05 12:14:11.151902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.975 [2024-12-05 12:14:11.151908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:36.975 [2024-12-05 12:14:11.151922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:36.975 qpair failed and we were unable to recover it.
00:30:36.975 [2024-12-05 12:14:11.161915] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.975 [2024-12-05 12:14:11.162020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.975 [2024-12-05 12:14:11.162033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.975 [2024-12-05 12:14:11.162039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.975 [2024-12-05 12:14:11.162045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:36.975 [2024-12-05 12:14:11.162060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.975 qpair failed and we were unable to recover it. 
00:30:37.234 [2024-12-05 12:14:11.171903] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.234 [2024-12-05 12:14:11.171953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.234 [2024-12-05 12:14:11.171966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.234 [2024-12-05 12:14:11.171972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.234 [2024-12-05 12:14:11.171978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.234 [2024-12-05 12:14:11.171992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.234 qpair failed and we were unable to recover it. 
00:30:37.234 [2024-12-05 12:14:11.181924] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.234 [2024-12-05 12:14:11.181979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.234 [2024-12-05 12:14:11.181991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.234 [2024-12-05 12:14:11.181997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.234 [2024-12-05 12:14:11.182004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.234 [2024-12-05 12:14:11.182018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.234 qpair failed and we were unable to recover it. 
00:30:37.234 [2024-12-05 12:14:11.191941] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.234 [2024-12-05 12:14:11.191989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.234 [2024-12-05 12:14:11.192002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.234 [2024-12-05 12:14:11.192009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.234 [2024-12-05 12:14:11.192015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.234 [2024-12-05 12:14:11.192029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.234 qpair failed and we were unable to recover it. 
00:30:37.234 [2024-12-05 12:14:11.201976] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.234 [2024-12-05 12:14:11.202031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.234 [2024-12-05 12:14:11.202048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.234 [2024-12-05 12:14:11.202055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.234 [2024-12-05 12:14:11.202061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.234 [2024-12-05 12:14:11.202078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.234 qpair failed and we were unable to recover it. 
00:30:37.234 [2024-12-05 12:14:11.212015] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.234 [2024-12-05 12:14:11.212089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.234 [2024-12-05 12:14:11.212105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.234 [2024-12-05 12:14:11.212112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.234 [2024-12-05 12:14:11.212118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.234 [2024-12-05 12:14:11.212133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.234 qpair failed and we were unable to recover it. 
00:30:37.234 [2024-12-05 12:14:11.222027] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.234 [2024-12-05 12:14:11.222080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.234 [2024-12-05 12:14:11.222094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.234 [2024-12-05 12:14:11.222101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.234 [2024-12-05 12:14:11.222107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.234 [2024-12-05 12:14:11.222121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.234 qpair failed and we were unable to recover it. 
00:30:37.234 [2024-12-05 12:14:11.232033] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.234 [2024-12-05 12:14:11.232122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.234 [2024-12-05 12:14:11.232135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.234 [2024-12-05 12:14:11.232142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.234 [2024-12-05 12:14:11.232147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.235 [2024-12-05 12:14:11.232161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.235 qpair failed and we were unable to recover it. 
00:30:37.235 [2024-12-05 12:14:11.242090] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.235 [2024-12-05 12:14:11.242144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.235 [2024-12-05 12:14:11.242157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.235 [2024-12-05 12:14:11.242164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.235 [2024-12-05 12:14:11.242170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.235 [2024-12-05 12:14:11.242184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.235 qpair failed and we were unable to recover it. 
00:30:37.235 [2024-12-05 12:14:11.252120] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.235 [2024-12-05 12:14:11.252174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.235 [2024-12-05 12:14:11.252187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.235 [2024-12-05 12:14:11.252193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.235 [2024-12-05 12:14:11.252202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.235 [2024-12-05 12:14:11.252216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.235 qpair failed and we were unable to recover it. 
00:30:37.235 [2024-12-05 12:14:11.262058] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.235 [2024-12-05 12:14:11.262159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.235 [2024-12-05 12:14:11.262172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.235 [2024-12-05 12:14:11.262178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.235 [2024-12-05 12:14:11.262184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.235 [2024-12-05 12:14:11.262198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.235 qpair failed and we were unable to recover it. 
00:30:37.235 [2024-12-05 12:14:11.272084] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.235 [2024-12-05 12:14:11.272141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.235 [2024-12-05 12:14:11.272154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.235 [2024-12-05 12:14:11.272161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.235 [2024-12-05 12:14:11.272167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.235 [2024-12-05 12:14:11.272181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.235 qpair failed and we were unable to recover it. 
00:30:37.235 [2024-12-05 12:14:11.282204] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.235 [2024-12-05 12:14:11.282261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.235 [2024-12-05 12:14:11.282273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.235 [2024-12-05 12:14:11.282280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.235 [2024-12-05 12:14:11.282286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.235 [2024-12-05 12:14:11.282301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.235 qpair failed and we were unable to recover it. 
00:30:37.235 [2024-12-05 12:14:11.292218] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.235 [2024-12-05 12:14:11.292272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.235 [2024-12-05 12:14:11.292285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.235 [2024-12-05 12:14:11.292292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.235 [2024-12-05 12:14:11.292298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.235 [2024-12-05 12:14:11.292313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.235 qpair failed and we were unable to recover it. 
00:30:37.235 [2024-12-05 12:14:11.302244] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.235 [2024-12-05 12:14:11.302296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.235 [2024-12-05 12:14:11.302310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.235 [2024-12-05 12:14:11.302317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.235 [2024-12-05 12:14:11.302323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.235 [2024-12-05 12:14:11.302339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.235 qpair failed and we were unable to recover it. 
00:30:37.235 [2024-12-05 12:14:11.312283] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.235 [2024-12-05 12:14:11.312338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.235 [2024-12-05 12:14:11.312351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.235 [2024-12-05 12:14:11.312358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.235 [2024-12-05 12:14:11.312365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.235 [2024-12-05 12:14:11.312384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.235 qpair failed and we were unable to recover it. 
00:30:37.235 [2024-12-05 12:14:11.322291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.235 [2024-12-05 12:14:11.322349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.235 [2024-12-05 12:14:11.322362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.235 [2024-12-05 12:14:11.322374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.235 [2024-12-05 12:14:11.322380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.235 [2024-12-05 12:14:11.322395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.235 qpair failed and we were unable to recover it. 
00:30:37.235 [2024-12-05 12:14:11.332343] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.235 [2024-12-05 12:14:11.332404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.235 [2024-12-05 12:14:11.332417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.235 [2024-12-05 12:14:11.332424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.235 [2024-12-05 12:14:11.332429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.235 [2024-12-05 12:14:11.332444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.235 qpair failed and we were unable to recover it. 
00:30:37.235 [2024-12-05 12:14:11.342416] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.235 [2024-12-05 12:14:11.342473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.235 [2024-12-05 12:14:11.342488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.235 [2024-12-05 12:14:11.342495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.235 [2024-12-05 12:14:11.342501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.235 [2024-12-05 12:14:11.342515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.235 qpair failed and we were unable to recover it. 
00:30:37.235 [2024-12-05 12:14:11.352305] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.235 [2024-12-05 12:14:11.352359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.235 [2024-12-05 12:14:11.352376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.235 [2024-12-05 12:14:11.352383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.235 [2024-12-05 12:14:11.352389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.235 [2024-12-05 12:14:11.352403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.235 qpair failed and we were unable to recover it. 
00:30:37.235 [2024-12-05 12:14:11.362417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.235 [2024-12-05 12:14:11.362472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.235 [2024-12-05 12:14:11.362484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.235 [2024-12-05 12:14:11.362491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.235 [2024-12-05 12:14:11.362497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.235 [2024-12-05 12:14:11.362512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.236 qpair failed and we were unable to recover it. 
00:30:37.236 [2024-12-05 12:14:11.372491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.236 [2024-12-05 12:14:11.372588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.236 [2024-12-05 12:14:11.372601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.236 [2024-12-05 12:14:11.372608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.236 [2024-12-05 12:14:11.372613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.236 [2024-12-05 12:14:11.372628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.236 qpair failed and we were unable to recover it. 
00:30:37.236 [2024-12-05 12:14:11.382476] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.236 [2024-12-05 12:14:11.382529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.236 [2024-12-05 12:14:11.382542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.236 [2024-12-05 12:14:11.382551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.236 [2024-12-05 12:14:11.382558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.236 [2024-12-05 12:14:11.382572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.236 qpair failed and we were unable to recover it. 
00:30:37.236 [2024-12-05 12:14:11.392503] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.236 [2024-12-05 12:14:11.392556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.236 [2024-12-05 12:14:11.392568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.236 [2024-12-05 12:14:11.392574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.236 [2024-12-05 12:14:11.392580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.236 [2024-12-05 12:14:11.392594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.236 qpair failed and we were unable to recover it. 
00:30:37.236 [2024-12-05 12:14:11.402483] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.236 [2024-12-05 12:14:11.402547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.236 [2024-12-05 12:14:11.402560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.236 [2024-12-05 12:14:11.402567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.236 [2024-12-05 12:14:11.402573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.236 [2024-12-05 12:14:11.402587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.236 qpair failed and we were unable to recover it. 
00:30:37.236 [2024-12-05 12:14:11.412591] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.236 [2024-12-05 12:14:11.412651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.236 [2024-12-05 12:14:11.412665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.236 [2024-12-05 12:14:11.412671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.236 [2024-12-05 12:14:11.412677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.236 [2024-12-05 12:14:11.412692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.236 qpair failed and we were unable to recover it. 
00:30:37.236 [2024-12-05 12:14:11.422590] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.236 [2024-12-05 12:14:11.422643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.236 [2024-12-05 12:14:11.422655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.236 [2024-12-05 12:14:11.422662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.236 [2024-12-05 12:14:11.422668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.236 [2024-12-05 12:14:11.422685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.236 qpair failed and we were unable to recover it. 
00:30:37.495 [2024-12-05 12:14:11.432619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.495 [2024-12-05 12:14:11.432674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.495 [2024-12-05 12:14:11.432686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.495 [2024-12-05 12:14:11.432693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.495 [2024-12-05 12:14:11.432699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.495 [2024-12-05 12:14:11.432712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.495 qpair failed and we were unable to recover it. 
00:30:37.495 [2024-12-05 12:14:11.442583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.495 [2024-12-05 12:14:11.442639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.495 [2024-12-05 12:14:11.442652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.495 [2024-12-05 12:14:11.442659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.495 [2024-12-05 12:14:11.442664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.495 [2024-12-05 12:14:11.442679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.495 qpair failed and we were unable to recover it. 
00:30:37.495 [2024-12-05 12:14:11.452699] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.495 [2024-12-05 12:14:11.452771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.495 [2024-12-05 12:14:11.452784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.495 [2024-12-05 12:14:11.452790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.495 [2024-12-05 12:14:11.452796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.495 [2024-12-05 12:14:11.452810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.495 qpair failed and we were unable to recover it. 
00:30:37.495 [2024-12-05 12:14:11.462706] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.495 [2024-12-05 12:14:11.462761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.495 [2024-12-05 12:14:11.462774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.495 [2024-12-05 12:14:11.462781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.495 [2024-12-05 12:14:11.462787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.495 [2024-12-05 12:14:11.462801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.495 qpair failed and we were unable to recover it. 
00:30:37.495 [2024-12-05 12:14:11.472779] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.495 [2024-12-05 12:14:11.472837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.495 [2024-12-05 12:14:11.472850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.495 [2024-12-05 12:14:11.472857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.495 [2024-12-05 12:14:11.472863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.495 [2024-12-05 12:14:11.472876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.495 qpair failed and we were unable to recover it. 
00:30:37.495 [2024-12-05 12:14:11.482848] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.495 [2024-12-05 12:14:11.482938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.495 [2024-12-05 12:14:11.482953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.495 [2024-12-05 12:14:11.482959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.495 [2024-12-05 12:14:11.482965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.495 [2024-12-05 12:14:11.482980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.495 qpair failed and we were unable to recover it. 
00:30:37.495 [2024-12-05 12:14:11.492821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.495 [2024-12-05 12:14:11.492874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.495 [2024-12-05 12:14:11.492886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.495 [2024-12-05 12:14:11.492893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.495 [2024-12-05 12:14:11.492898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.496 [2024-12-05 12:14:11.492912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.496 qpair failed and we were unable to recover it. 
00:30:37.496 [2024-12-05 12:14:11.502847] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.496 [2024-12-05 12:14:11.502906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.496 [2024-12-05 12:14:11.502919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.496 [2024-12-05 12:14:11.502926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.496 [2024-12-05 12:14:11.502932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.496 [2024-12-05 12:14:11.502947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.496 qpair failed and we were unable to recover it. 
00:30:37.496 [2024-12-05 12:14:11.512846] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.496 [2024-12-05 12:14:11.512900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.496 [2024-12-05 12:14:11.512913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.496 [2024-12-05 12:14:11.512924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.496 [2024-12-05 12:14:11.512930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.496 [2024-12-05 12:14:11.512944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.496 qpair failed and we were unable to recover it. 
00:30:37.496 [2024-12-05 12:14:11.522827] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.496 [2024-12-05 12:14:11.522881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.496 [2024-12-05 12:14:11.522894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.496 [2024-12-05 12:14:11.522901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.496 [2024-12-05 12:14:11.522908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.496 [2024-12-05 12:14:11.522922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.496 qpair failed and we were unable to recover it. 
00:30:37.496 [2024-12-05 12:14:11.532914] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.496 [2024-12-05 12:14:11.532969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.496 [2024-12-05 12:14:11.532983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.496 [2024-12-05 12:14:11.532989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.496 [2024-12-05 12:14:11.532995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.496 [2024-12-05 12:14:11.533009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.496 qpair failed and we were unable to recover it. 
00:30:37.496 [2024-12-05 12:14:11.542935] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.496 [2024-12-05 12:14:11.542986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.496 [2024-12-05 12:14:11.542999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.496 [2024-12-05 12:14:11.543005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.496 [2024-12-05 12:14:11.543012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.496 [2024-12-05 12:14:11.543027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.496 qpair failed and we were unable to recover it. 
00:30:37.496 [2024-12-05 12:14:11.552893] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.496 [2024-12-05 12:14:11.552947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.496 [2024-12-05 12:14:11.552960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.496 [2024-12-05 12:14:11.552966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.496 [2024-12-05 12:14:11.552973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.496 [2024-12-05 12:14:11.552990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.496 qpair failed and we were unable to recover it. 
00:30:37.496 [2024-12-05 12:14:11.562918] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.496 [2024-12-05 12:14:11.562976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.496 [2024-12-05 12:14:11.562989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.496 [2024-12-05 12:14:11.562996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.496 [2024-12-05 12:14:11.563002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.496 [2024-12-05 12:14:11.563016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.496 qpair failed and we were unable to recover it. 
00:30:37.496 [2024-12-05 12:14:11.572957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.496 [2024-12-05 12:14:11.573009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.496 [2024-12-05 12:14:11.573021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.496 [2024-12-05 12:14:11.573028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.496 [2024-12-05 12:14:11.573034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.496 [2024-12-05 12:14:11.573048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.496 qpair failed and we were unable to recover it. 
00:30:37.496 [2024-12-05 12:14:11.582970] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.496 [2024-12-05 12:14:11.583023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.496 [2024-12-05 12:14:11.583036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.496 [2024-12-05 12:14:11.583042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.496 [2024-12-05 12:14:11.583048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.496 [2024-12-05 12:14:11.583062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.496 qpair failed and we were unable to recover it. 
00:30:37.496 [2024-12-05 12:14:11.593055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.496 [2024-12-05 12:14:11.593111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.496 [2024-12-05 12:14:11.593123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.496 [2024-12-05 12:14:11.593129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.496 [2024-12-05 12:14:11.593135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.496 [2024-12-05 12:14:11.593149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.496 qpair failed and we were unable to recover it. 
00:30:37.496 [2024-12-05 12:14:11.603176] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.496 [2024-12-05 12:14:11.603240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.496 [2024-12-05 12:14:11.603252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.496 [2024-12-05 12:14:11.603259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.496 [2024-12-05 12:14:11.603265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.496 [2024-12-05 12:14:11.603279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.496 qpair failed and we were unable to recover it. 
00:30:37.496 [2024-12-05 12:14:11.613132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.496 [2024-12-05 12:14:11.613187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.496 [2024-12-05 12:14:11.613201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.496 [2024-12-05 12:14:11.613207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.496 [2024-12-05 12:14:11.613213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.496 [2024-12-05 12:14:11.613228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.496 qpair failed and we were unable to recover it. 
00:30:37.496 [2024-12-05 12:14:11.623201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.496 [2024-12-05 12:14:11.623251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.496 [2024-12-05 12:14:11.623265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.496 [2024-12-05 12:14:11.623272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.496 [2024-12-05 12:14:11.623278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.496 [2024-12-05 12:14:11.623292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.496 qpair failed and we were unable to recover it. 
00:30:37.496 [2024-12-05 12:14:11.633210] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.496 [2024-12-05 12:14:11.633264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.496 [2024-12-05 12:14:11.633278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.496 [2024-12-05 12:14:11.633285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.496 [2024-12-05 12:14:11.633291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.496 [2024-12-05 12:14:11.633306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.496 qpair failed and we were unable to recover it. 
00:30:37.496 [2024-12-05 12:14:11.643255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.496 [2024-12-05 12:14:11.643310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.496 [2024-12-05 12:14:11.643326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.496 [2024-12-05 12:14:11.643333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.496 [2024-12-05 12:14:11.643339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.496 [2024-12-05 12:14:11.643354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.496 qpair failed and we were unable to recover it. 
00:30:37.496 [2024-12-05 12:14:11.653178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.496 [2024-12-05 12:14:11.653259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.496 [2024-12-05 12:14:11.653272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.496 [2024-12-05 12:14:11.653279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.496 [2024-12-05 12:14:11.653285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.496 [2024-12-05 12:14:11.653299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.496 qpair failed and we were unable to recover it. 
00:30:37.496 [2024-12-05 12:14:11.663216] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.496 [2024-12-05 12:14:11.663299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.497 [2024-12-05 12:14:11.663312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.497 [2024-12-05 12:14:11.663318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.497 [2024-12-05 12:14:11.663324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.497 [2024-12-05 12:14:11.663339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.497 qpair failed and we were unable to recover it. 
00:30:37.497 [2024-12-05 12:14:11.673289] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.497 [2024-12-05 12:14:11.673342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.497 [2024-12-05 12:14:11.673355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.497 [2024-12-05 12:14:11.673361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.497 [2024-12-05 12:14:11.673371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.497 [2024-12-05 12:14:11.673387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.497 qpair failed and we were unable to recover it. 
00:30:37.497 [2024-12-05 12:14:11.683285] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.497 [2024-12-05 12:14:11.683340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.497 [2024-12-05 12:14:11.683353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.497 [2024-12-05 12:14:11.683359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.497 [2024-12-05 12:14:11.683372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.497 [2024-12-05 12:14:11.683387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.497 qpair failed and we were unable to recover it. 
00:30:37.756 [2024-12-05 12:14:11.693415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.756 [2024-12-05 12:14:11.693475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.756 [2024-12-05 12:14:11.693488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.756 [2024-12-05 12:14:11.693494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.756 [2024-12-05 12:14:11.693500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.756 [2024-12-05 12:14:11.693515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.756 qpair failed and we were unable to recover it. 
00:30:37.756 [2024-12-05 12:14:11.703388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.756 [2024-12-05 12:14:11.703472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.756 [2024-12-05 12:14:11.703485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.756 [2024-12-05 12:14:11.703491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.756 [2024-12-05 12:14:11.703497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.756 [2024-12-05 12:14:11.703511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.756 qpair failed and we were unable to recover it. 
00:30:37.756 [2024-12-05 12:14:11.713391] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.756 [2024-12-05 12:14:11.713448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.756 [2024-12-05 12:14:11.713462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.756 [2024-12-05 12:14:11.713469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.756 [2024-12-05 12:14:11.713475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.756 [2024-12-05 12:14:11.713490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.756 qpair failed and we were unable to recover it. 
00:30:37.756 [2024-12-05 12:14:11.723415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.756 [2024-12-05 12:14:11.723502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.756 [2024-12-05 12:14:11.723515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.756 [2024-12-05 12:14:11.723522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.756 [2024-12-05 12:14:11.723528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.756 [2024-12-05 12:14:11.723542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.756 qpair failed and we were unable to recover it. 
00:30:37.756 [2024-12-05 12:14:11.733483] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.756 [2024-12-05 12:14:11.733539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.756 [2024-12-05 12:14:11.733552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.756 [2024-12-05 12:14:11.733560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.756 [2024-12-05 12:14:11.733566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.756 [2024-12-05 12:14:11.733581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.756 qpair failed and we were unable to recover it. 
00:30:37.756 [2024-12-05 12:14:11.743529] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.756 [2024-12-05 12:14:11.743584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.756 [2024-12-05 12:14:11.743597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.756 [2024-12-05 12:14:11.743604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.756 [2024-12-05 12:14:11.743610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.756 [2024-12-05 12:14:11.743624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.756 qpair failed and we were unable to recover it. 
00:30:37.756 [2024-12-05 12:14:11.753589] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.756 [2024-12-05 12:14:11.753640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.756 [2024-12-05 12:14:11.753653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.756 [2024-12-05 12:14:11.753659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.756 [2024-12-05 12:14:11.753665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:37.756 [2024-12-05 12:14:11.753679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:37.756 qpair failed and we were unable to recover it. 
00:30:37.756 [2024-12-05 12:14:11.763504] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.756 [2024-12-05 12:14:11.763557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.756 [2024-12-05 12:14:11.763570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.756 [2024-12-05 12:14:11.763577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.756 [2024-12-05 12:14:11.763582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:37.756 [2024-12-05 12:14:11.763597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:37.756 qpair failed and we were unable to recover it.
00:30:37.756 [2024-12-05 12:14:11.773585] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.756 [2024-12-05 12:14:11.773658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.756 [2024-12-05 12:14:11.773699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.756 [2024-12-05 12:14:11.773707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.756 [2024-12-05 12:14:11.773712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:37.756 [2024-12-05 12:14:11.773738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:37.756 qpair failed and we were unable to recover it.
00:30:37.756 [2024-12-05 12:14:11.783637] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.756 [2024-12-05 12:14:11.783732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.756 [2024-12-05 12:14:11.783747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.756 [2024-12-05 12:14:11.783754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.756 [2024-12-05 12:14:11.783760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:37.756 [2024-12-05 12:14:11.783775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:37.756 qpair failed and we were unable to recover it.
00:30:37.756 [2024-12-05 12:14:11.793647] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.756 [2024-12-05 12:14:11.793699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.757 [2024-12-05 12:14:11.793712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.757 [2024-12-05 12:14:11.793718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.757 [2024-12-05 12:14:11.793724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:37.757 [2024-12-05 12:14:11.793739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:37.757 qpair failed and we were unable to recover it.
00:30:37.757 [2024-12-05 12:14:11.803688] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.757 [2024-12-05 12:14:11.803742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.757 [2024-12-05 12:14:11.803756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.757 [2024-12-05 12:14:11.803762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.757 [2024-12-05 12:14:11.803769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:37.757 [2024-12-05 12:14:11.803783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:37.757 qpair failed and we were unable to recover it.
00:30:37.757 [2024-12-05 12:14:11.813696] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.757 [2024-12-05 12:14:11.813753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.757 [2024-12-05 12:14:11.813766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.757 [2024-12-05 12:14:11.813773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.757 [2024-12-05 12:14:11.813781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:37.757 [2024-12-05 12:14:11.813795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:37.757 qpair failed and we were unable to recover it.
00:30:37.757 [2024-12-05 12:14:11.823743] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.757 [2024-12-05 12:14:11.823794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.757 [2024-12-05 12:14:11.823807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.757 [2024-12-05 12:14:11.823814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.757 [2024-12-05 12:14:11.823820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:37.757 [2024-12-05 12:14:11.823835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:37.757 qpair failed and we were unable to recover it.
00:30:37.757 [2024-12-05 12:14:11.833761] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.757 [2024-12-05 12:14:11.833812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.757 [2024-12-05 12:14:11.833825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.757 [2024-12-05 12:14:11.833831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.757 [2024-12-05 12:14:11.833837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:37.757 [2024-12-05 12:14:11.833852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:37.757 qpair failed and we were unable to recover it.
00:30:37.757 [2024-12-05 12:14:11.843797] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.757 [2024-12-05 12:14:11.843852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.757 [2024-12-05 12:14:11.843865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.757 [2024-12-05 12:14:11.843871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.757 [2024-12-05 12:14:11.843877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:37.757 [2024-12-05 12:14:11.843892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:37.757 qpair failed and we were unable to recover it.
00:30:37.757 [2024-12-05 12:14:11.853822] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.757 [2024-12-05 12:14:11.853903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.757 [2024-12-05 12:14:11.853916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.757 [2024-12-05 12:14:11.853922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.757 [2024-12-05 12:14:11.853928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:37.757 [2024-12-05 12:14:11.853942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:37.757 qpair failed and we were unable to recover it.
00:30:37.757 [2024-12-05 12:14:11.863848] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.757 [2024-12-05 12:14:11.863898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.757 [2024-12-05 12:14:11.863911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.757 [2024-12-05 12:14:11.863918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.757 [2024-12-05 12:14:11.863923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:37.757 [2024-12-05 12:14:11.863937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:37.757 qpair failed and we were unable to recover it.
00:30:37.757 [2024-12-05 12:14:11.873874] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.757 [2024-12-05 12:14:11.873927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.757 [2024-12-05 12:14:11.873940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.757 [2024-12-05 12:14:11.873946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.757 [2024-12-05 12:14:11.873952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:37.757 [2024-12-05 12:14:11.873967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:37.757 qpair failed and we were unable to recover it.
00:30:37.757 [2024-12-05 12:14:11.883912] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.757 [2024-12-05 12:14:11.883979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.757 [2024-12-05 12:14:11.883991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.757 [2024-12-05 12:14:11.883997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.757 [2024-12-05 12:14:11.884003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:37.757 [2024-12-05 12:14:11.884018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:37.757 qpair failed and we were unable to recover it.
00:30:37.757 [2024-12-05 12:14:11.893870] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.757 [2024-12-05 12:14:11.893919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.757 [2024-12-05 12:14:11.893933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.757 [2024-12-05 12:14:11.893939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.757 [2024-12-05 12:14:11.893945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:37.757 [2024-12-05 12:14:11.893959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:37.757 qpair failed and we were unable to recover it.
00:30:37.757 [2024-12-05 12:14:11.903952] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.757 [2024-12-05 12:14:11.904006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.757 [2024-12-05 12:14:11.904022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.757 [2024-12-05 12:14:11.904029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.757 [2024-12-05 12:14:11.904035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:37.757 [2024-12-05 12:14:11.904048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:37.757 qpair failed and we were unable to recover it.
00:30:37.757 [2024-12-05 12:14:11.913981] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.757 [2024-12-05 12:14:11.914031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.757 [2024-12-05 12:14:11.914044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.757 [2024-12-05 12:14:11.914050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.757 [2024-12-05 12:14:11.914056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:37.757 [2024-12-05 12:14:11.914071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:37.757 qpair failed and we were unable to recover it.
00:30:37.757 [2024-12-05 12:14:11.924014] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.757 [2024-12-05 12:14:11.924069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.757 [2024-12-05 12:14:11.924083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.758 [2024-12-05 12:14:11.924089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.758 [2024-12-05 12:14:11.924095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:37.758 [2024-12-05 12:14:11.924109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:37.758 qpair failed and we were unable to recover it.
00:30:37.758 [2024-12-05 12:14:11.934049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.758 [2024-12-05 12:14:11.934102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.758 [2024-12-05 12:14:11.934115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.758 [2024-12-05 12:14:11.934122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.758 [2024-12-05 12:14:11.934128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:37.758 [2024-12-05 12:14:11.934142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:37.758 qpair failed and we were unable to recover it.
00:30:37.758 [2024-12-05 12:14:11.944061] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.758 [2024-12-05 12:14:11.944110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.758 [2024-12-05 12:14:11.944123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.758 [2024-12-05 12:14:11.944132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.758 [2024-12-05 12:14:11.944138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:37.758 [2024-12-05 12:14:11.944152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:37.758 qpair failed and we were unable to recover it.
00:30:38.016 [2024-12-05 12:14:11.954122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.016 [2024-12-05 12:14:11.954174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.016 [2024-12-05 12:14:11.954187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.016 [2024-12-05 12:14:11.954194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.016 [2024-12-05 12:14:11.954199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.016 [2024-12-05 12:14:11.954213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.016 qpair failed and we were unable to recover it.
00:30:38.016 [2024-12-05 12:14:11.964204] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.016 [2024-12-05 12:14:11.964261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.016 [2024-12-05 12:14:11.964274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.016 [2024-12-05 12:14:11.964280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.016 [2024-12-05 12:14:11.964286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.016 [2024-12-05 12:14:11.964300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.016 qpair failed and we were unable to recover it.
00:30:38.016 [2024-12-05 12:14:11.974152] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.016 [2024-12-05 12:14:11.974209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.016 [2024-12-05 12:14:11.974222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.016 [2024-12-05 12:14:11.974229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.016 [2024-12-05 12:14:11.974235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.016 [2024-12-05 12:14:11.974249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.016 qpair failed and we were unable to recover it.
00:30:38.016 [2024-12-05 12:14:11.984180] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.016 [2024-12-05 12:14:11.984235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.016 [2024-12-05 12:14:11.984248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.017 [2024-12-05 12:14:11.984255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.017 [2024-12-05 12:14:11.984261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.017 [2024-12-05 12:14:11.984278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.017 qpair failed and we were unable to recover it.
00:30:38.017 [2024-12-05 12:14:11.994203] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.017 [2024-12-05 12:14:11.994255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.017 [2024-12-05 12:14:11.994267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.017 [2024-12-05 12:14:11.994274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.017 [2024-12-05 12:14:11.994280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.017 [2024-12-05 12:14:11.994294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.017 qpair failed and we were unable to recover it.
00:30:38.017 [2024-12-05 12:14:12.004232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.017 [2024-12-05 12:14:12.004288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.017 [2024-12-05 12:14:12.004302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.017 [2024-12-05 12:14:12.004308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.017 [2024-12-05 12:14:12.004314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.017 [2024-12-05 12:14:12.004328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.017 qpair failed and we were unable to recover it.
00:30:38.017 [2024-12-05 12:14:12.014268] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.017 [2024-12-05 12:14:12.014324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.017 [2024-12-05 12:14:12.014337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.017 [2024-12-05 12:14:12.014344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.017 [2024-12-05 12:14:12.014350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.017 [2024-12-05 12:14:12.014364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.017 qpair failed and we were unable to recover it.
00:30:38.017 [2024-12-05 12:14:12.024290] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.017 [2024-12-05 12:14:12.024344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.017 [2024-12-05 12:14:12.024357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.017 [2024-12-05 12:14:12.024363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.017 [2024-12-05 12:14:12.024372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.017 [2024-12-05 12:14:12.024390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.017 qpair failed and we were unable to recover it.
00:30:38.017 [2024-12-05 12:14:12.034321] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.017 [2024-12-05 12:14:12.034378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.017 [2024-12-05 12:14:12.034391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.017 [2024-12-05 12:14:12.034398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.017 [2024-12-05 12:14:12.034404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.017 [2024-12-05 12:14:12.034419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.017 qpair failed and we were unable to recover it.
00:30:38.017 [2024-12-05 12:14:12.044352] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.017 [2024-12-05 12:14:12.044414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.017 [2024-12-05 12:14:12.044427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.017 [2024-12-05 12:14:12.044434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.017 [2024-12-05 12:14:12.044440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.017 [2024-12-05 12:14:12.044454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.017 qpair failed and we were unable to recover it.
00:30:38.017 [2024-12-05 12:14:12.054416] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.017 [2024-12-05 12:14:12.054470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.017 [2024-12-05 12:14:12.054483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.017 [2024-12-05 12:14:12.054489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.017 [2024-12-05 12:14:12.054495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.017 [2024-12-05 12:14:12.054509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.017 qpair failed and we were unable to recover it.
00:30:38.017 [2024-12-05 12:14:12.064440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.017 [2024-12-05 12:14:12.064504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.017 [2024-12-05 12:14:12.064517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.017 [2024-12-05 12:14:12.064523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.017 [2024-12-05 12:14:12.064529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.017 [2024-12-05 12:14:12.064543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.017 qpair failed and we were unable to recover it.
00:30:38.017 [2024-12-05 12:14:12.074434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.017 [2024-12-05 12:14:12.074489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.017 [2024-12-05 12:14:12.074502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.017 [2024-12-05 12:14:12.074514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.017 [2024-12-05 12:14:12.074520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.017 [2024-12-05 12:14:12.074534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.017 qpair failed and we were unable to recover it.
00:30:38.017 [2024-12-05 12:14:12.084473] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.017 [2024-12-05 12:14:12.084529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.017 [2024-12-05 12:14:12.084542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.017 [2024-12-05 12:14:12.084548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.017 [2024-12-05 12:14:12.084554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.017 [2024-12-05 12:14:12.084568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.017 qpair failed and we were unable to recover it.
00:30:38.017 [2024-12-05 12:14:12.094536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.017 [2024-12-05 12:14:12.094596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.017 [2024-12-05 12:14:12.094608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.017 [2024-12-05 12:14:12.094615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.017 [2024-12-05 12:14:12.094621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.017 [2024-12-05 12:14:12.094635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.017 qpair failed and we were unable to recover it.
00:30:38.017 [2024-12-05 12:14:12.104539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.017 [2024-12-05 12:14:12.104589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.017 [2024-12-05 12:14:12.104601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.017 [2024-12-05 12:14:12.104607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.017 [2024-12-05 12:14:12.104613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.017 [2024-12-05 12:14:12.104627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.017 qpair failed and we were unable to recover it.
00:30:38.017 [2024-12-05 12:14:12.114551] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.017 [2024-12-05 12:14:12.114604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.017 [2024-12-05 12:14:12.114617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.017 [2024-12-05 12:14:12.114624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.018 [2024-12-05 12:14:12.114630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.018 [2024-12-05 12:14:12.114647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.018 qpair failed and we were unable to recover it. 
00:30:38.018 [2024-12-05 12:14:12.124631] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.018 [2024-12-05 12:14:12.124687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.018 [2024-12-05 12:14:12.124699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.018 [2024-12-05 12:14:12.124705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.018 [2024-12-05 12:14:12.124712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.018 [2024-12-05 12:14:12.124726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.018 qpair failed and we were unable to recover it. 
00:30:38.018 [2024-12-05 12:14:12.134538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.018 [2024-12-05 12:14:12.134592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.018 [2024-12-05 12:14:12.134606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.018 [2024-12-05 12:14:12.134613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.018 [2024-12-05 12:14:12.134618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.018 [2024-12-05 12:14:12.134633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.018 qpair failed and we were unable to recover it. 
00:30:38.018 [2024-12-05 12:14:12.144655] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.018 [2024-12-05 12:14:12.144707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.018 [2024-12-05 12:14:12.144721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.018 [2024-12-05 12:14:12.144727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.018 [2024-12-05 12:14:12.144733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.018 [2024-12-05 12:14:12.144747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.018 qpair failed and we were unable to recover it. 
00:30:38.018 [2024-12-05 12:14:12.154670] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.018 [2024-12-05 12:14:12.154723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.018 [2024-12-05 12:14:12.154736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.018 [2024-12-05 12:14:12.154742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.018 [2024-12-05 12:14:12.154748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.018 [2024-12-05 12:14:12.154763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.018 qpair failed and we were unable to recover it. 
00:30:38.018 [2024-12-05 12:14:12.164692] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.018 [2024-12-05 12:14:12.164750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.018 [2024-12-05 12:14:12.164762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.018 [2024-12-05 12:14:12.164769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.018 [2024-12-05 12:14:12.164775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.018 [2024-12-05 12:14:12.164789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.018 qpair failed and we were unable to recover it. 
00:30:38.018 [2024-12-05 12:14:12.174745] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.018 [2024-12-05 12:14:12.174804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.018 [2024-12-05 12:14:12.174819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.018 [2024-12-05 12:14:12.174825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.018 [2024-12-05 12:14:12.174832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.018 [2024-12-05 12:14:12.174847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.018 qpair failed and we were unable to recover it. 
00:30:38.018 [2024-12-05 12:14:12.184751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.018 [2024-12-05 12:14:12.184806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.018 [2024-12-05 12:14:12.184819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.018 [2024-12-05 12:14:12.184825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.018 [2024-12-05 12:14:12.184831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.018 [2024-12-05 12:14:12.184845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.018 qpair failed and we were unable to recover it. 
00:30:38.018 [2024-12-05 12:14:12.194767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.018 [2024-12-05 12:14:12.194822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.018 [2024-12-05 12:14:12.194835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.018 [2024-12-05 12:14:12.194841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.018 [2024-12-05 12:14:12.194847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.018 [2024-12-05 12:14:12.194861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.018 qpair failed and we were unable to recover it. 
00:30:38.018 [2024-12-05 12:14:12.204817] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.018 [2024-12-05 12:14:12.204909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.018 [2024-12-05 12:14:12.204924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.018 [2024-12-05 12:14:12.204931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.018 [2024-12-05 12:14:12.204937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.018 [2024-12-05 12:14:12.204951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.018 qpair failed and we were unable to recover it. 
00:30:38.277 [2024-12-05 12:14:12.214844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.277 [2024-12-05 12:14:12.214899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.277 [2024-12-05 12:14:12.214912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.277 [2024-12-05 12:14:12.214918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.277 [2024-12-05 12:14:12.214924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.277 [2024-12-05 12:14:12.214938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.277 qpair failed and we were unable to recover it. 
00:30:38.277 [2024-12-05 12:14:12.224898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.277 [2024-12-05 12:14:12.224960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.277 [2024-12-05 12:14:12.224973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.277 [2024-12-05 12:14:12.224980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.277 [2024-12-05 12:14:12.224986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.277 [2024-12-05 12:14:12.225000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.277 qpair failed and we were unable to recover it. 
00:30:38.277 [2024-12-05 12:14:12.234896] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.277 [2024-12-05 12:14:12.234954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.277 [2024-12-05 12:14:12.234967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.277 [2024-12-05 12:14:12.234974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.277 [2024-12-05 12:14:12.234980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.277 [2024-12-05 12:14:12.234995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.277 qpair failed and we were unable to recover it. 
00:30:38.277 [2024-12-05 12:14:12.244921] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.277 [2024-12-05 12:14:12.244976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.277 [2024-12-05 12:14:12.244989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.277 [2024-12-05 12:14:12.244996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.277 [2024-12-05 12:14:12.245005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.277 [2024-12-05 12:14:12.245019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.277 qpair failed and we were unable to recover it. 
00:30:38.277 [2024-12-05 12:14:12.254954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.277 [2024-12-05 12:14:12.255011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.277 [2024-12-05 12:14:12.255024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.277 [2024-12-05 12:14:12.255031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.277 [2024-12-05 12:14:12.255037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.277 [2024-12-05 12:14:12.255051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.277 qpair failed and we were unable to recover it. 
00:30:38.277 [2024-12-05 12:14:12.264976] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.277 [2024-12-05 12:14:12.265029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.277 [2024-12-05 12:14:12.265041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.277 [2024-12-05 12:14:12.265047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.277 [2024-12-05 12:14:12.265054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.277 [2024-12-05 12:14:12.265068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.277 qpair failed and we were unable to recover it. 
00:30:38.277 [2024-12-05 12:14:12.275006] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.277 [2024-12-05 12:14:12.275058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.277 [2024-12-05 12:14:12.275071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.278 [2024-12-05 12:14:12.275077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.278 [2024-12-05 12:14:12.275083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.278 [2024-12-05 12:14:12.275097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.278 qpair failed and we were unable to recover it. 
00:30:38.278 [2024-12-05 12:14:12.285082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.278 [2024-12-05 12:14:12.285144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.278 [2024-12-05 12:14:12.285158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.278 [2024-12-05 12:14:12.285164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.278 [2024-12-05 12:14:12.285170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.278 [2024-12-05 12:14:12.285184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.278 qpair failed and we were unable to recover it. 
00:30:38.278 [2024-12-05 12:14:12.295057] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.278 [2024-12-05 12:14:12.295109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.278 [2024-12-05 12:14:12.295122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.278 [2024-12-05 12:14:12.295129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.278 [2024-12-05 12:14:12.295135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.278 [2024-12-05 12:14:12.295149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.278 qpair failed and we were unable to recover it. 
00:30:38.278 [2024-12-05 12:14:12.305102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.278 [2024-12-05 12:14:12.305157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.278 [2024-12-05 12:14:12.305169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.278 [2024-12-05 12:14:12.305175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.278 [2024-12-05 12:14:12.305182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.278 [2024-12-05 12:14:12.305195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.278 qpair failed and we were unable to recover it. 
00:30:38.278 [2024-12-05 12:14:12.315108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.278 [2024-12-05 12:14:12.315159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.278 [2024-12-05 12:14:12.315172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.278 [2024-12-05 12:14:12.315179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.278 [2024-12-05 12:14:12.315185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.278 [2024-12-05 12:14:12.315199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.278 qpair failed and we were unable to recover it. 
00:30:38.278 [2024-12-05 12:14:12.325177] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.278 [2024-12-05 12:14:12.325231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.278 [2024-12-05 12:14:12.325245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.278 [2024-12-05 12:14:12.325252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.278 [2024-12-05 12:14:12.325257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.278 [2024-12-05 12:14:12.325272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.278 qpair failed and we were unable to recover it. 
00:30:38.278 [2024-12-05 12:14:12.335168] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.278 [2024-12-05 12:14:12.335218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.278 [2024-12-05 12:14:12.335234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.278 [2024-12-05 12:14:12.335240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.278 [2024-12-05 12:14:12.335246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.278 [2024-12-05 12:14:12.335261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.278 qpair failed and we were unable to recover it. 
00:30:38.278 [2024-12-05 12:14:12.345198] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.278 [2024-12-05 12:14:12.345253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.278 [2024-12-05 12:14:12.345267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.278 [2024-12-05 12:14:12.345273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.278 [2024-12-05 12:14:12.345279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.278 [2024-12-05 12:14:12.345293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.278 qpair failed and we were unable to recover it. 
00:30:38.278 [2024-12-05 12:14:12.355255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.278 [2024-12-05 12:14:12.355309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.278 [2024-12-05 12:14:12.355322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.278 [2024-12-05 12:14:12.355329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.278 [2024-12-05 12:14:12.355335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.278 [2024-12-05 12:14:12.355350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.278 qpair failed and we were unable to recover it. 
00:30:38.278 [2024-12-05 12:14:12.365261] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.278 [2024-12-05 12:14:12.365325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.278 [2024-12-05 12:14:12.365338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.278 [2024-12-05 12:14:12.365345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.278 [2024-12-05 12:14:12.365351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.278 [2024-12-05 12:14:12.365365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.278 qpair failed and we were unable to recover it. 
00:30:38.278 [2024-12-05 12:14:12.375276] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.278 [2024-12-05 12:14:12.375328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.278 [2024-12-05 12:14:12.375341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.278 [2024-12-05 12:14:12.375348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.278 [2024-12-05 12:14:12.375360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.278 [2024-12-05 12:14:12.375378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.278 qpair failed and we were unable to recover it. 
00:30:38.278 [2024-12-05 12:14:12.385294] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.278 [2024-12-05 12:14:12.385353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.278 [2024-12-05 12:14:12.385370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.278 [2024-12-05 12:14:12.385377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.278 [2024-12-05 12:14:12.385384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.278 [2024-12-05 12:14:12.385398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.278 qpair failed and we were unable to recover it. 
00:30:38.278 [2024-12-05 12:14:12.395330] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.278 [2024-12-05 12:14:12.395404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.278 [2024-12-05 12:14:12.395418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.278 [2024-12-05 12:14:12.395424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.278 [2024-12-05 12:14:12.395430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.278 [2024-12-05 12:14:12.395445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.278 qpair failed and we were unable to recover it. 
00:30:38.278 [2024-12-05 12:14:12.405417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.278 [2024-12-05 12:14:12.405491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.278 [2024-12-05 12:14:12.405504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.278 [2024-12-05 12:14:12.405510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.279 [2024-12-05 12:14:12.405516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.279 [2024-12-05 12:14:12.405530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.279 qpair failed and we were unable to recover it. 
00:30:38.279 [2024-12-05 12:14:12.415389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.279 [2024-12-05 12:14:12.415443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.279 [2024-12-05 12:14:12.415456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.279 [2024-12-05 12:14:12.415462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.279 [2024-12-05 12:14:12.415468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.279 [2024-12-05 12:14:12.415483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.279 qpair failed and we were unable to recover it. 
00:30:38.279 [2024-12-05 12:14:12.425434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.279 [2024-12-05 12:14:12.425493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.279 [2024-12-05 12:14:12.425507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.279 [2024-12-05 12:14:12.425514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.279 [2024-12-05 12:14:12.425520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.279 [2024-12-05 12:14:12.425535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.279 qpair failed and we were unable to recover it. 
00:30:38.279 [2024-12-05 12:14:12.435442] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.279 [2024-12-05 12:14:12.435500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.279 [2024-12-05 12:14:12.435513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.279 [2024-12-05 12:14:12.435520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.279 [2024-12-05 12:14:12.435526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.279 [2024-12-05 12:14:12.435540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.279 qpair failed and we were unable to recover it. 
00:30:38.279 [2024-12-05 12:14:12.445476] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.279 [2024-12-05 12:14:12.445539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.279 [2024-12-05 12:14:12.445553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.279 [2024-12-05 12:14:12.445560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.279 [2024-12-05 12:14:12.445565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.279 [2024-12-05 12:14:12.445581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.279 qpair failed and we were unable to recover it. 
00:30:38.279 [2024-12-05 12:14:12.455520] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.279 [2024-12-05 12:14:12.455585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.279 [2024-12-05 12:14:12.455598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.279 [2024-12-05 12:14:12.455605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.279 [2024-12-05 12:14:12.455611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.279 [2024-12-05 12:14:12.455625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.279 qpair failed and we were unable to recover it. 
00:30:38.279 [2024-12-05 12:14:12.465571] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.279 [2024-12-05 12:14:12.465679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.279 [2024-12-05 12:14:12.465692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.279 [2024-12-05 12:14:12.465698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.279 [2024-12-05 12:14:12.465705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.279 [2024-12-05 12:14:12.465718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.279 qpair failed and we were unable to recover it. 
00:30:38.538 [2024-12-05 12:14:12.475552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.538 [2024-12-05 12:14:12.475609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.538 [2024-12-05 12:14:12.475621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.538 [2024-12-05 12:14:12.475628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.538 [2024-12-05 12:14:12.475634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.538 [2024-12-05 12:14:12.475648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.538 qpair failed and we were unable to recover it. 
00:30:38.538 [2024-12-05 12:14:12.485576] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.538 [2024-12-05 12:14:12.485629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.538 [2024-12-05 12:14:12.485641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.538 [2024-12-05 12:14:12.485648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.538 [2024-12-05 12:14:12.485654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.538 [2024-12-05 12:14:12.485668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.538 qpair failed and we were unable to recover it. 
00:30:38.538 [2024-12-05 12:14:12.495612] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.538 [2024-12-05 12:14:12.495681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.538 [2024-12-05 12:14:12.495694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.538 [2024-12-05 12:14:12.495700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.538 [2024-12-05 12:14:12.495706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.538 [2024-12-05 12:14:12.495720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.538 qpair failed and we were unable to recover it. 
00:30:38.538 [2024-12-05 12:14:12.505632] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.538 [2024-12-05 12:14:12.505684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.538 [2024-12-05 12:14:12.505697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.538 [2024-12-05 12:14:12.505706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.538 [2024-12-05 12:14:12.505712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.538 [2024-12-05 12:14:12.505726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.538 qpair failed and we were unable to recover it. 
00:30:38.538 [2024-12-05 12:14:12.515651] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.538 [2024-12-05 12:14:12.515700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.538 [2024-12-05 12:14:12.515713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.538 [2024-12-05 12:14:12.515719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.538 [2024-12-05 12:14:12.515725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.538 [2024-12-05 12:14:12.515739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.538 qpair failed and we were unable to recover it. 
00:30:38.538 [2024-12-05 12:14:12.525731] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.538 [2024-12-05 12:14:12.525785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.538 [2024-12-05 12:14:12.525798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.538 [2024-12-05 12:14:12.525804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.538 [2024-12-05 12:14:12.525811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.538 [2024-12-05 12:14:12.525825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.538 qpair failed and we were unable to recover it. 
00:30:38.538 [2024-12-05 12:14:12.535748] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.538 [2024-12-05 12:14:12.535804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.538 [2024-12-05 12:14:12.535816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.538 [2024-12-05 12:14:12.535823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.538 [2024-12-05 12:14:12.535829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.538 [2024-12-05 12:14:12.535843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.538 qpair failed and we were unable to recover it. 
00:30:38.538 [2024-12-05 12:14:12.545751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.538 [2024-12-05 12:14:12.545805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.538 [2024-12-05 12:14:12.545819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.538 [2024-12-05 12:14:12.545825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.538 [2024-12-05 12:14:12.545831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.538 [2024-12-05 12:14:12.545850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.538 qpair failed and we were unable to recover it. 
00:30:38.538 [2024-12-05 12:14:12.555779] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.538 [2024-12-05 12:14:12.555832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.538 [2024-12-05 12:14:12.555845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.538 [2024-12-05 12:14:12.555851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.538 [2024-12-05 12:14:12.555857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.538 [2024-12-05 12:14:12.555871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.538 qpair failed and we were unable to recover it. 
00:30:38.538 [2024-12-05 12:14:12.565740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.538 [2024-12-05 12:14:12.565827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.538 [2024-12-05 12:14:12.565839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.538 [2024-12-05 12:14:12.565846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.538 [2024-12-05 12:14:12.565852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.538 [2024-12-05 12:14:12.565866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.538 qpair failed and we were unable to recover it. 
00:30:38.538 [2024-12-05 12:14:12.575831] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.538 [2024-12-05 12:14:12.575889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.538 [2024-12-05 12:14:12.575902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.538 [2024-12-05 12:14:12.575908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.538 [2024-12-05 12:14:12.575914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.538 [2024-12-05 12:14:12.575929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.538 qpair failed and we were unable to recover it. 
00:30:38.538 [2024-12-05 12:14:12.585875] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.538 [2024-12-05 12:14:12.585924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.538 [2024-12-05 12:14:12.585937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.539 [2024-12-05 12:14:12.585944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.539 [2024-12-05 12:14:12.585950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.539 [2024-12-05 12:14:12.585964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.539 qpair failed and we were unable to recover it. 
00:30:38.539 [2024-12-05 12:14:12.595888] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.539 [2024-12-05 12:14:12.595941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.539 [2024-12-05 12:14:12.595953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.539 [2024-12-05 12:14:12.595959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.539 [2024-12-05 12:14:12.595965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.539 [2024-12-05 12:14:12.595980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.539 qpair failed and we were unable to recover it. 
00:30:38.539 [2024-12-05 12:14:12.605975] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.539 [2024-12-05 12:14:12.606030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.539 [2024-12-05 12:14:12.606042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.539 [2024-12-05 12:14:12.606048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.539 [2024-12-05 12:14:12.606054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.539 [2024-12-05 12:14:12.606069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.539 qpair failed and we were unable to recover it. 
00:30:38.539 [2024-12-05 12:14:12.615970] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.539 [2024-12-05 12:14:12.616024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.539 [2024-12-05 12:14:12.616038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.539 [2024-12-05 12:14:12.616044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.539 [2024-12-05 12:14:12.616051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.539 [2024-12-05 12:14:12.616065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.539 qpair failed and we were unable to recover it. 
00:30:38.539 [2024-12-05 12:14:12.626008] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.539 [2024-12-05 12:14:12.626067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.539 [2024-12-05 12:14:12.626081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.539 [2024-12-05 12:14:12.626088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.539 [2024-12-05 12:14:12.626094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.539 [2024-12-05 12:14:12.626109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.539 qpair failed and we were unable to recover it. 
00:30:38.539 [2024-12-05 12:14:12.636067] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.539 [2024-12-05 12:14:12.636121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.539 [2024-12-05 12:14:12.636134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.539 [2024-12-05 12:14:12.636144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.539 [2024-12-05 12:14:12.636149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.539 [2024-12-05 12:14:12.636164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.539 qpair failed and we were unable to recover it. 
00:30:38.539 [2024-12-05 12:14:12.646040] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.539 [2024-12-05 12:14:12.646096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.539 [2024-12-05 12:14:12.646109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.539 [2024-12-05 12:14:12.646116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.539 [2024-12-05 12:14:12.646122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.539 [2024-12-05 12:14:12.646136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.539 qpair failed and we were unable to recover it. 
00:30:38.539 [2024-12-05 12:14:12.656068] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.539 [2024-12-05 12:14:12.656123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.539 [2024-12-05 12:14:12.656137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.539 [2024-12-05 12:14:12.656144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.539 [2024-12-05 12:14:12.656150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.539 [2024-12-05 12:14:12.656165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.539 qpair failed and we were unable to recover it. 
00:30:38.539 [2024-12-05 12:14:12.666025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.539 [2024-12-05 12:14:12.666078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.539 [2024-12-05 12:14:12.666091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.539 [2024-12-05 12:14:12.666097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.539 [2024-12-05 12:14:12.666103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.539 [2024-12-05 12:14:12.666117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.539 qpair failed and we were unable to recover it. 
00:30:38.539 [2024-12-05 12:14:12.676150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.539 [2024-12-05 12:14:12.676211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.539 [2024-12-05 12:14:12.676225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.539 [2024-12-05 12:14:12.676231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.539 [2024-12-05 12:14:12.676237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.539 [2024-12-05 12:14:12.676256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.539 qpair failed and we were unable to recover it. 
00:30:38.539 [2024-12-05 12:14:12.686167] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.539 [2024-12-05 12:14:12.686223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.539 [2024-12-05 12:14:12.686237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.539 [2024-12-05 12:14:12.686243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.539 [2024-12-05 12:14:12.686249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:38.539 [2024-12-05 12:14:12.686264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.539 qpair failed and we were unable to recover it. 
00:30:38.539 [2024-12-05 12:14:12.696189] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.539 [2024-12-05 12:14:12.696241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.539 [2024-12-05 12:14:12.696253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.539 [2024-12-05 12:14:12.696260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.539 [2024-12-05 12:14:12.696266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.539 [2024-12-05 12:14:12.696280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.539 qpair failed and we were unable to recover it.
00:30:38.539 [2024-12-05 12:14:12.706206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.539 [2024-12-05 12:14:12.706261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.539 [2024-12-05 12:14:12.706274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.539 [2024-12-05 12:14:12.706280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.539 [2024-12-05 12:14:12.706286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.539 [2024-12-05 12:14:12.706301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.539 qpair failed and we were unable to recover it.
00:30:38.539 [2024-12-05 12:14:12.716236] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.539 [2024-12-05 12:14:12.716282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.539 [2024-12-05 12:14:12.716295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.540 [2024-12-05 12:14:12.716302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.540 [2024-12-05 12:14:12.716307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.540 [2024-12-05 12:14:12.716322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.540 qpair failed and we were unable to recover it.
00:30:38.540 [2024-12-05 12:14:12.726277] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.540 [2024-12-05 12:14:12.726380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.540 [2024-12-05 12:14:12.726394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.540 [2024-12-05 12:14:12.726400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.540 [2024-12-05 12:14:12.726406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.540 [2024-12-05 12:14:12.726421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.540 qpair failed and we were unable to recover it.
00:30:38.848 [2024-12-05 12:14:12.736302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.848 [2024-12-05 12:14:12.736359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.848 [2024-12-05 12:14:12.736375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.848 [2024-12-05 12:14:12.736381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.848 [2024-12-05 12:14:12.736387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.848 [2024-12-05 12:14:12.736401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.848 qpair failed and we were unable to recover it.
00:30:38.848 [2024-12-05 12:14:12.746326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.848 [2024-12-05 12:14:12.746382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.848 [2024-12-05 12:14:12.746395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.848 [2024-12-05 12:14:12.746402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.848 [2024-12-05 12:14:12.746408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.848 [2024-12-05 12:14:12.746422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.848 qpair failed and we were unable to recover it.
00:30:38.848 [2024-12-05 12:14:12.756379] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.848 [2024-12-05 12:14:12.756435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.848 [2024-12-05 12:14:12.756449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.848 [2024-12-05 12:14:12.756455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.848 [2024-12-05 12:14:12.756461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.848 [2024-12-05 12:14:12.756476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.848 qpair failed and we were unable to recover it.
00:30:38.848 [2024-12-05 12:14:12.766381] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.848 [2024-12-05 12:14:12.766436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.848 [2024-12-05 12:14:12.766451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.848 [2024-12-05 12:14:12.766458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.848 [2024-12-05 12:14:12.766465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.848 [2024-12-05 12:14:12.766480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.848 qpair failed and we were unable to recover it.
00:30:38.848 [2024-12-05 12:14:12.776345] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.848 [2024-12-05 12:14:12.776406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.848 [2024-12-05 12:14:12.776419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.848 [2024-12-05 12:14:12.776425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.848 [2024-12-05 12:14:12.776431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.848 [2024-12-05 12:14:12.776445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.848 qpair failed and we were unable to recover it.
00:30:38.848 [2024-12-05 12:14:12.786457] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.849 [2024-12-05 12:14:12.786510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.849 [2024-12-05 12:14:12.786523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.849 [2024-12-05 12:14:12.786530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.849 [2024-12-05 12:14:12.786535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.849 [2024-12-05 12:14:12.786550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.849 qpair failed and we were unable to recover it.
00:30:38.849 [2024-12-05 12:14:12.796500] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.849 [2024-12-05 12:14:12.796555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.849 [2024-12-05 12:14:12.796568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.849 [2024-12-05 12:14:12.796576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.849 [2024-12-05 12:14:12.796582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.849 [2024-12-05 12:14:12.796597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.849 qpair failed and we were unable to recover it.
00:30:38.849 [2024-12-05 12:14:12.806484] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.849 [2024-12-05 12:14:12.806542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.849 [2024-12-05 12:14:12.806555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.849 [2024-12-05 12:14:12.806561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.849 [2024-12-05 12:14:12.806572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.849 [2024-12-05 12:14:12.806588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.849 qpair failed and we were unable to recover it.
00:30:38.849 [2024-12-05 12:14:12.816513] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.849 [2024-12-05 12:14:12.816567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.849 [2024-12-05 12:14:12.816579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.849 [2024-12-05 12:14:12.816586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.849 [2024-12-05 12:14:12.816592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.849 [2024-12-05 12:14:12.816607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.849 qpair failed and we were unable to recover it.
00:30:38.849 [2024-12-05 12:14:12.826574] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.849 [2024-12-05 12:14:12.826626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.849 [2024-12-05 12:14:12.826639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.849 [2024-12-05 12:14:12.826645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.849 [2024-12-05 12:14:12.826651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.849 [2024-12-05 12:14:12.826666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.849 qpair failed and we were unable to recover it.
00:30:38.849 [2024-12-05 12:14:12.836604] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.849 [2024-12-05 12:14:12.836656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.849 [2024-12-05 12:14:12.836669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.849 [2024-12-05 12:14:12.836675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.849 [2024-12-05 12:14:12.836681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.849 [2024-12-05 12:14:12.836695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.849 qpair failed and we were unable to recover it.
00:30:38.849 [2024-12-05 12:14:12.846602] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.849 [2024-12-05 12:14:12.846656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.849 [2024-12-05 12:14:12.846669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.849 [2024-12-05 12:14:12.846675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.849 [2024-12-05 12:14:12.846681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.849 [2024-12-05 12:14:12.846696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.849 qpair failed and we were unable to recover it.
00:30:38.849 [2024-12-05 12:14:12.856671] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.849 [2024-12-05 12:14:12.856745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.849 [2024-12-05 12:14:12.856759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.849 [2024-12-05 12:14:12.856765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.849 [2024-12-05 12:14:12.856771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.849 [2024-12-05 12:14:12.856786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.849 qpair failed and we were unable to recover it.
00:30:38.849 [2024-12-05 12:14:12.866660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.849 [2024-12-05 12:14:12.866710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.849 [2024-12-05 12:14:12.866723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.849 [2024-12-05 12:14:12.866729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.849 [2024-12-05 12:14:12.866736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.849 [2024-12-05 12:14:12.866750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.849 qpair failed and we were unable to recover it.
00:30:38.849 [2024-12-05 12:14:12.876681] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.849 [2024-12-05 12:14:12.876776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.849 [2024-12-05 12:14:12.876788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.849 [2024-12-05 12:14:12.876795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.849 [2024-12-05 12:14:12.876801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.849 [2024-12-05 12:14:12.876815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.849 qpair failed and we were unable to recover it.
00:30:38.849 [2024-12-05 12:14:12.886638] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.849 [2024-12-05 12:14:12.886703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.849 [2024-12-05 12:14:12.886715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.849 [2024-12-05 12:14:12.886722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.849 [2024-12-05 12:14:12.886728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.849 [2024-12-05 12:14:12.886742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.849 qpair failed and we were unable to recover it.
00:30:38.849 [2024-12-05 12:14:12.896675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.849 [2024-12-05 12:14:12.896733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.849 [2024-12-05 12:14:12.896749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.849 [2024-12-05 12:14:12.896755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.849 [2024-12-05 12:14:12.896761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.849 [2024-12-05 12:14:12.896775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.849 qpair failed and we were unable to recover it.
00:30:38.849 [2024-12-05 12:14:12.906690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.849 [2024-12-05 12:14:12.906749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.849 [2024-12-05 12:14:12.906763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.849 [2024-12-05 12:14:12.906771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.849 [2024-12-05 12:14:12.906777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.849 [2024-12-05 12:14:12.906793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.849 qpair failed and we were unable to recover it.
00:30:38.849 [2024-12-05 12:14:12.916726] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.850 [2024-12-05 12:14:12.916780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.850 [2024-12-05 12:14:12.916792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.850 [2024-12-05 12:14:12.916799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.850 [2024-12-05 12:14:12.916805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.850 [2024-12-05 12:14:12.916819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.850 qpair failed and we were unable to recover it.
00:30:38.850 [2024-12-05 12:14:12.926850] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.850 [2024-12-05 12:14:12.926910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.850 [2024-12-05 12:14:12.926925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.850 [2024-12-05 12:14:12.926932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.850 [2024-12-05 12:14:12.926938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.850 [2024-12-05 12:14:12.926952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.850 qpair failed and we were unable to recover it.
00:30:38.850 [2024-12-05 12:14:12.936783] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.850 [2024-12-05 12:14:12.936839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.850 [2024-12-05 12:14:12.936853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.850 [2024-12-05 12:14:12.936859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.850 [2024-12-05 12:14:12.936868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.850 [2024-12-05 12:14:12.936883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.850 qpair failed and we were unable to recover it.
00:30:38.850 [2024-12-05 12:14:12.946801] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.850 [2024-12-05 12:14:12.946853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.850 [2024-12-05 12:14:12.946866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.850 [2024-12-05 12:14:12.946873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.850 [2024-12-05 12:14:12.946879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.850 [2024-12-05 12:14:12.946894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.850 qpair failed and we were unable to recover it.
00:30:38.850 [2024-12-05 12:14:12.956832] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.850 [2024-12-05 12:14:12.956884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.850 [2024-12-05 12:14:12.956897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.850 [2024-12-05 12:14:12.956904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.850 [2024-12-05 12:14:12.956910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.850 [2024-12-05 12:14:12.956925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.850 qpair failed and we were unable to recover it.
00:30:38.850 [2024-12-05 12:14:12.966948] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.850 [2024-12-05 12:14:12.967003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.850 [2024-12-05 12:14:12.967016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.850 [2024-12-05 12:14:12.967022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.850 [2024-12-05 12:14:12.967028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.850 [2024-12-05 12:14:12.967042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.850 qpair failed and we were unable to recover it.
00:30:38.850 [2024-12-05 12:14:12.976974] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.850 [2024-12-05 12:14:12.977029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.850 [2024-12-05 12:14:12.977042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.850 [2024-12-05 12:14:12.977049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.850 [2024-12-05 12:14:12.977055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.850 [2024-12-05 12:14:12.977070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.850 qpair failed and we were unable to recover it.
00:30:38.850 [2024-12-05 12:14:12.987025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.850 [2024-12-05 12:14:12.987077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.850 [2024-12-05 12:14:12.987090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.850 [2024-12-05 12:14:12.987096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.850 [2024-12-05 12:14:12.987102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.850 [2024-12-05 12:14:12.987116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.850 qpair failed and we were unable to recover it.
00:30:38.850 [2024-12-05 12:14:12.997021] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.850 [2024-12-05 12:14:12.997072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.850 [2024-12-05 12:14:12.997085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.850 [2024-12-05 12:14:12.997091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.850 [2024-12-05 12:14:12.997097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.850 [2024-12-05 12:14:12.997111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.850 qpair failed and we were unable to recover it.
00:30:38.850 [2024-12-05 12:14:13.006985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.850 [2024-12-05 12:14:13.007042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.850 [2024-12-05 12:14:13.007054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.850 [2024-12-05 12:14:13.007061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.850 [2024-12-05 12:14:13.007066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.850 [2024-12-05 12:14:13.007081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.850 qpair failed and we were unable to recover it.
00:30:38.850 [2024-12-05 12:14:13.017087] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.850 [2024-12-05 12:14:13.017136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.850 [2024-12-05 12:14:13.017148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.850 [2024-12-05 12:14:13.017155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.850 [2024-12-05 12:14:13.017161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.850 [2024-12-05 12:14:13.017175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.850 qpair failed and we were unable to recover it.
00:30:38.850 [2024-12-05 12:14:13.027113] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.850 [2024-12-05 12:14:13.027171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.850 [2024-12-05 12:14:13.027184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.850 [2024-12-05 12:14:13.027190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.850 [2024-12-05 12:14:13.027196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.850 [2024-12-05 12:14:13.027210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.850 qpair failed and we were unable to recover it.
00:30:38.850 [2024-12-05 12:14:13.037061] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:38.850 [2024-12-05 12:14:13.037113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:38.850 [2024-12-05 12:14:13.037126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:38.850 [2024-12-05 12:14:13.037132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:38.850 [2024-12-05 12:14:13.037139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:38.850 [2024-12-05 12:14:13.037153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:38.850 qpair failed and we were unable to recover it.
00:30:39.110 [2024-12-05 12:14:13.047166] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.110 [2024-12-05 12:14:13.047263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.110 [2024-12-05 12:14:13.047277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.110 [2024-12-05 12:14:13.047283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.110 [2024-12-05 12:14:13.047288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.110 [2024-12-05 12:14:13.047303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.110 qpair failed and we were unable to recover it. 
00:30:39.110 [2024-12-05 12:14:13.057119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.110 [2024-12-05 12:14:13.057175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.110 [2024-12-05 12:14:13.057188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.110 [2024-12-05 12:14:13.057194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.110 [2024-12-05 12:14:13.057200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.110 [2024-12-05 12:14:13.057214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.110 qpair failed and we were unable to recover it. 
00:30:39.110 [2024-12-05 12:14:13.067206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.110 [2024-12-05 12:14:13.067259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.110 [2024-12-05 12:14:13.067273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.110 [2024-12-05 12:14:13.067284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.110 [2024-12-05 12:14:13.067290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.110 [2024-12-05 12:14:13.067304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.110 qpair failed and we were unable to recover it. 
00:30:39.110 [2024-12-05 12:14:13.077215] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.110 [2024-12-05 12:14:13.077269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.110 [2024-12-05 12:14:13.077283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.110 [2024-12-05 12:14:13.077289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.110 [2024-12-05 12:14:13.077295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.110 [2024-12-05 12:14:13.077310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.110 qpair failed and we were unable to recover it. 
00:30:39.110 [2024-12-05 12:14:13.087271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.110 [2024-12-05 12:14:13.087324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.110 [2024-12-05 12:14:13.087337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.110 [2024-12-05 12:14:13.087344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.110 [2024-12-05 12:14:13.087350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.110 [2024-12-05 12:14:13.087365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.110 qpair failed and we were unable to recover it. 
00:30:39.110 [2024-12-05 12:14:13.097291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.110 [2024-12-05 12:14:13.097349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.110 [2024-12-05 12:14:13.097362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.110 [2024-12-05 12:14:13.097372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.110 [2024-12-05 12:14:13.097379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.110 [2024-12-05 12:14:13.097393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.110 qpair failed and we were unable to recover it. 
00:30:39.110 [2024-12-05 12:14:13.107244] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.110 [2024-12-05 12:14:13.107297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.110 [2024-12-05 12:14:13.107311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.110 [2024-12-05 12:14:13.107318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.110 [2024-12-05 12:14:13.107325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.110 [2024-12-05 12:14:13.107344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.110 qpair failed and we were unable to recover it. 
00:30:39.110 [2024-12-05 12:14:13.117275] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.110 [2024-12-05 12:14:13.117323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.110 [2024-12-05 12:14:13.117336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.110 [2024-12-05 12:14:13.117342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.110 [2024-12-05 12:14:13.117348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.110 [2024-12-05 12:14:13.117363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.110 qpair failed and we were unable to recover it. 
00:30:39.111 [2024-12-05 12:14:13.127330] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.111 [2024-12-05 12:14:13.127388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.111 [2024-12-05 12:14:13.127401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.111 [2024-12-05 12:14:13.127408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.111 [2024-12-05 12:14:13.127414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.111 [2024-12-05 12:14:13.127428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.111 qpair failed and we were unable to recover it. 
00:30:39.111 [2024-12-05 12:14:13.137331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.111 [2024-12-05 12:14:13.137389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.111 [2024-12-05 12:14:13.137402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.111 [2024-12-05 12:14:13.137410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.111 [2024-12-05 12:14:13.137415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.111 [2024-12-05 12:14:13.137431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.111 qpair failed and we were unable to recover it. 
00:30:39.111 [2024-12-05 12:14:13.147350] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.111 [2024-12-05 12:14:13.147409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.111 [2024-12-05 12:14:13.147422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.111 [2024-12-05 12:14:13.147429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.111 [2024-12-05 12:14:13.147435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.111 [2024-12-05 12:14:13.147449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.111 qpair failed and we were unable to recover it. 
00:30:39.111 [2024-12-05 12:14:13.157393] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.111 [2024-12-05 12:14:13.157453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.111 [2024-12-05 12:14:13.157465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.111 [2024-12-05 12:14:13.157472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.111 [2024-12-05 12:14:13.157478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.111 [2024-12-05 12:14:13.157492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.111 qpair failed and we were unable to recover it. 
00:30:39.111 [2024-12-05 12:14:13.167493] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.111 [2024-12-05 12:14:13.167549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.111 [2024-12-05 12:14:13.167562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.111 [2024-12-05 12:14:13.167569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.111 [2024-12-05 12:14:13.167575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.111 [2024-12-05 12:14:13.167589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.111 qpair failed and we were unable to recover it. 
00:30:39.111 [2024-12-05 12:14:13.177521] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.111 [2024-12-05 12:14:13.177576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.111 [2024-12-05 12:14:13.177591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.111 [2024-12-05 12:14:13.177598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.111 [2024-12-05 12:14:13.177604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.111 [2024-12-05 12:14:13.177620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.111 qpair failed and we were unable to recover it. 
00:30:39.111 [2024-12-05 12:14:13.187517] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.111 [2024-12-05 12:14:13.187582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.111 [2024-12-05 12:14:13.187596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.111 [2024-12-05 12:14:13.187602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.111 [2024-12-05 12:14:13.187608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.111 [2024-12-05 12:14:13.187623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.111 qpair failed and we were unable to recover it. 
00:30:39.111 [2024-12-05 12:14:13.197569] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.111 [2024-12-05 12:14:13.197622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.111 [2024-12-05 12:14:13.197637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.111 [2024-12-05 12:14:13.197644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.111 [2024-12-05 12:14:13.197650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.111 [2024-12-05 12:14:13.197664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.111 qpair failed and we were unable to recover it. 
00:30:39.111 [2024-12-05 12:14:13.207538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.111 [2024-12-05 12:14:13.207599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.111 [2024-12-05 12:14:13.207612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.111 [2024-12-05 12:14:13.207619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.111 [2024-12-05 12:14:13.207625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.111 [2024-12-05 12:14:13.207639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.111 qpair failed and we were unable to recover it. 
00:30:39.111 [2024-12-05 12:14:13.217608] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.111 [2024-12-05 12:14:13.217692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.111 [2024-12-05 12:14:13.217705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.111 [2024-12-05 12:14:13.217712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.111 [2024-12-05 12:14:13.217718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.111 [2024-12-05 12:14:13.217732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.111 qpair failed and we were unable to recover it. 
00:30:39.111 [2024-12-05 12:14:13.227627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.111 [2024-12-05 12:14:13.227692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.111 [2024-12-05 12:14:13.227705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.111 [2024-12-05 12:14:13.227712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.112 [2024-12-05 12:14:13.227718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.112 [2024-12-05 12:14:13.227733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.112 qpair failed and we were unable to recover it. 
00:30:39.112 [2024-12-05 12:14:13.237720] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.112 [2024-12-05 12:14:13.237772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.112 [2024-12-05 12:14:13.237784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.112 [2024-12-05 12:14:13.237791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.112 [2024-12-05 12:14:13.237797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.112 [2024-12-05 12:14:13.237814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.112 qpair failed and we were unable to recover it. 
00:30:39.112 [2024-12-05 12:14:13.247671] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.112 [2024-12-05 12:14:13.247728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.112 [2024-12-05 12:14:13.247741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.112 [2024-12-05 12:14:13.247747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.112 [2024-12-05 12:14:13.247753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.112 [2024-12-05 12:14:13.247768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.112 qpair failed and we were unable to recover it. 
00:30:39.112 [2024-12-05 12:14:13.257672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.112 [2024-12-05 12:14:13.257726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.112 [2024-12-05 12:14:13.257739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.112 [2024-12-05 12:14:13.257746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.112 [2024-12-05 12:14:13.257751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.112 [2024-12-05 12:14:13.257766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.112 qpair failed and we were unable to recover it. 
00:30:39.112 [2024-12-05 12:14:13.267683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.112 [2024-12-05 12:14:13.267731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.112 [2024-12-05 12:14:13.267743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.112 [2024-12-05 12:14:13.267750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.112 [2024-12-05 12:14:13.267756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.112 [2024-12-05 12:14:13.267770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.112 qpair failed and we were unable to recover it. 
00:30:39.112 [2024-12-05 12:14:13.277779] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.112 [2024-12-05 12:14:13.277829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.112 [2024-12-05 12:14:13.277846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.112 [2024-12-05 12:14:13.277853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.112 [2024-12-05 12:14:13.277859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.112 [2024-12-05 12:14:13.277875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.112 qpair failed and we were unable to recover it. 
00:30:39.112 [2024-12-05 12:14:13.287755] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.112 [2024-12-05 12:14:13.287812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.112 [2024-12-05 12:14:13.287824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.112 [2024-12-05 12:14:13.287831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.112 [2024-12-05 12:14:13.287837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.112 [2024-12-05 12:14:13.287852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.112 qpair failed and we were unable to recover it. 
00:30:39.112 [2024-12-05 12:14:13.297871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.112 [2024-12-05 12:14:13.297942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.112 [2024-12-05 12:14:13.297955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.112 [2024-12-05 12:14:13.297961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.112 [2024-12-05 12:14:13.297967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.112 [2024-12-05 12:14:13.297981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.112 qpair failed and we were unable to recover it. 
00:30:39.372 [2024-12-05 12:14:13.307877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.372 [2024-12-05 12:14:13.307928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.372 [2024-12-05 12:14:13.307942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.372 [2024-12-05 12:14:13.307948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.372 [2024-12-05 12:14:13.307954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.372 [2024-12-05 12:14:13.307968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.372 qpair failed and we were unable to recover it. 
00:30:39.372 [2024-12-05 12:14:13.317933] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.372 [2024-12-05 12:14:13.317986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.372 [2024-12-05 12:14:13.317999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.372 [2024-12-05 12:14:13.318006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.372 [2024-12-05 12:14:13.318012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.372 [2024-12-05 12:14:13.318026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.372 qpair failed and we were unable to recover it. 
00:30:39.372 [2024-12-05 12:14:13.327953] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.372 [2024-12-05 12:14:13.328007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.372 [2024-12-05 12:14:13.328022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.372 [2024-12-05 12:14:13.328029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.372 [2024-12-05 12:14:13.328035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.372 [2024-12-05 12:14:13.328049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.372 qpair failed and we were unable to recover it. 
00:30:39.372 [2024-12-05 12:14:13.337977] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.372 [2024-12-05 12:14:13.338039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.372 [2024-12-05 12:14:13.338051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.372 [2024-12-05 12:14:13.338058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.372 [2024-12-05 12:14:13.338063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.372 [2024-12-05 12:14:13.338078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.372 qpair failed and we were unable to recover it. 
00:30:39.372 [2024-12-05 12:14:13.348008] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.372 [2024-12-05 12:14:13.348059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.372 [2024-12-05 12:14:13.348072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.372 [2024-12-05 12:14:13.348078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.372 [2024-12-05 12:14:13.348084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.372 [2024-12-05 12:14:13.348098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.372 qpair failed and we were unable to recover it. 
00:30:39.372 [2024-12-05 12:14:13.358018] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.372 [2024-12-05 12:14:13.358068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.372 [2024-12-05 12:14:13.358081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.372 [2024-12-05 12:14:13.358087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.372 [2024-12-05 12:14:13.358093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.372 [2024-12-05 12:14:13.358108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.372 qpair failed and we were unable to recover it. 
00:30:39.372 [2024-12-05 12:14:13.368057] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.372 [2024-12-05 12:14:13.368110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.372 [2024-12-05 12:14:13.368123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.372 [2024-12-05 12:14:13.368129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.372 [2024-12-05 12:14:13.368137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.372 [2024-12-05 12:14:13.368152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.372 qpair failed and we were unable to recover it. 
00:30:39.372 [2024-12-05 12:14:13.378082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.372 [2024-12-05 12:14:13.378155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.372 [2024-12-05 12:14:13.378169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.372 [2024-12-05 12:14:13.378176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.372 [2024-12-05 12:14:13.378182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.372 [2024-12-05 12:14:13.378196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.372 qpair failed and we were unable to recover it. 
00:30:39.372 [2024-12-05 12:14:13.388100] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.372 [2024-12-05 12:14:13.388147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.372 [2024-12-05 12:14:13.388161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.372 [2024-12-05 12:14:13.388167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.372 [2024-12-05 12:14:13.388173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.372 [2024-12-05 12:14:13.388187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.372 qpair failed and we were unable to recover it. 
00:30:39.372 [2024-12-05 12:14:13.398129] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.372 [2024-12-05 12:14:13.398180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.372 [2024-12-05 12:14:13.398193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.372 [2024-12-05 12:14:13.398200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.372 [2024-12-05 12:14:13.398206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.372 [2024-12-05 12:14:13.398220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.372 qpair failed and we were unable to recover it. 
00:30:39.372 [2024-12-05 12:14:13.408163] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.372 [2024-12-05 12:14:13.408217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.372 [2024-12-05 12:14:13.408230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.372 [2024-12-05 12:14:13.408236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.372 [2024-12-05 12:14:13.408242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.372 [2024-12-05 12:14:13.408256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.372 qpair failed and we were unable to recover it. 
00:30:39.372 [2024-12-05 12:14:13.418190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.373 [2024-12-05 12:14:13.418243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.373 [2024-12-05 12:14:13.418256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.373 [2024-12-05 12:14:13.418263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.373 [2024-12-05 12:14:13.418269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.373 [2024-12-05 12:14:13.418283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.373 qpair failed and we were unable to recover it. 
00:30:39.373 [2024-12-05 12:14:13.428228] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.373 [2024-12-05 12:14:13.428288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.373 [2024-12-05 12:14:13.428302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.373 [2024-12-05 12:14:13.428309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.373 [2024-12-05 12:14:13.428315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.373 [2024-12-05 12:14:13.428330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.373 qpair failed and we were unable to recover it. 
00:30:39.373 [2024-12-05 12:14:13.438272] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.373 [2024-12-05 12:14:13.438328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.373 [2024-12-05 12:14:13.438341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.373 [2024-12-05 12:14:13.438348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.373 [2024-12-05 12:14:13.438354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.373 [2024-12-05 12:14:13.438373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.373 qpair failed and we were unable to recover it. 
00:30:39.373 [2024-12-05 12:14:13.448319] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.373 [2024-12-05 12:14:13.448381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.373 [2024-12-05 12:14:13.448394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.373 [2024-12-05 12:14:13.448401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.373 [2024-12-05 12:14:13.448408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.373 [2024-12-05 12:14:13.448423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.373 qpair failed and we were unable to recover it. 
00:30:39.373 [2024-12-05 12:14:13.458296] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.373 [2024-12-05 12:14:13.458351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.373 [2024-12-05 12:14:13.458371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.373 [2024-12-05 12:14:13.458378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.373 [2024-12-05 12:14:13.458385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.373 [2024-12-05 12:14:13.458400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.373 qpair failed and we were unable to recover it. 
00:30:39.373 [2024-12-05 12:14:13.468324] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.373 [2024-12-05 12:14:13.468381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.373 [2024-12-05 12:14:13.468394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.373 [2024-12-05 12:14:13.468400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.373 [2024-12-05 12:14:13.468406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.373 [2024-12-05 12:14:13.468421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.373 qpair failed and we were unable to recover it. 
00:30:39.373 [2024-12-05 12:14:13.478357] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.373 [2024-12-05 12:14:13.478414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.373 [2024-12-05 12:14:13.478427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.373 [2024-12-05 12:14:13.478433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.373 [2024-12-05 12:14:13.478439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.373 [2024-12-05 12:14:13.478454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.373 qpair failed and we were unable to recover it. 
00:30:39.373 [2024-12-05 12:14:13.488336] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.373 [2024-12-05 12:14:13.488418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.373 [2024-12-05 12:14:13.488435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.373 [2024-12-05 12:14:13.488444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.373 [2024-12-05 12:14:13.488450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.373 [2024-12-05 12:14:13.488467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.373 qpair failed and we were unable to recover it. 
00:30:39.373 [2024-12-05 12:14:13.498425] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.373 [2024-12-05 12:14:13.498480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.373 [2024-12-05 12:14:13.498494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.373 [2024-12-05 12:14:13.498504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.373 [2024-12-05 12:14:13.498510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.373 [2024-12-05 12:14:13.498525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.373 qpair failed and we were unable to recover it. 
00:30:39.373 [2024-12-05 12:14:13.508467] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.373 [2024-12-05 12:14:13.508521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.373 [2024-12-05 12:14:13.508534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.373 [2024-12-05 12:14:13.508541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.373 [2024-12-05 12:14:13.508547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.373 [2024-12-05 12:14:13.508562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.373 qpair failed and we were unable to recover it. 
00:30:39.373 [2024-12-05 12:14:13.518486] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.373 [2024-12-05 12:14:13.518549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.373 [2024-12-05 12:14:13.518562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.373 [2024-12-05 12:14:13.518569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.373 [2024-12-05 12:14:13.518575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.373 [2024-12-05 12:14:13.518588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.373 qpair failed and we were unable to recover it. 
00:30:39.373 [2024-12-05 12:14:13.528484] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.373 [2024-12-05 12:14:13.528541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.373 [2024-12-05 12:14:13.528554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.373 [2024-12-05 12:14:13.528561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.373 [2024-12-05 12:14:13.528567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.373 [2024-12-05 12:14:13.528581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.373 qpair failed and we were unable to recover it. 
00:30:39.373 [2024-12-05 12:14:13.538545] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.373 [2024-12-05 12:14:13.538596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.373 [2024-12-05 12:14:13.538609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.373 [2024-12-05 12:14:13.538616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.373 [2024-12-05 12:14:13.538622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.373 [2024-12-05 12:14:13.538637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.373 qpair failed and we were unable to recover it. 
00:30:39.373 [2024-12-05 12:14:13.548566] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.373 [2024-12-05 12:14:13.548619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.373 [2024-12-05 12:14:13.548632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.374 [2024-12-05 12:14:13.548638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.374 [2024-12-05 12:14:13.548644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.374 [2024-12-05 12:14:13.548658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.374 qpair failed and we were unable to recover it. 
00:30:39.374 [2024-12-05 12:14:13.558536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.374 [2024-12-05 12:14:13.558588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.374 [2024-12-05 12:14:13.558601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.374 [2024-12-05 12:14:13.558607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.374 [2024-12-05 12:14:13.558613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.374 [2024-12-05 12:14:13.558627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.374 qpair failed and we were unable to recover it. 
00:30:39.634 [2024-12-05 12:14:13.568678] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.634 [2024-12-05 12:14:13.568733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.634 [2024-12-05 12:14:13.568746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.634 [2024-12-05 12:14:13.568752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.634 [2024-12-05 12:14:13.568758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.634 [2024-12-05 12:14:13.568772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.634 qpair failed and we were unable to recover it. 
00:30:39.634 [2024-12-05 12:14:13.578597] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.634 [2024-12-05 12:14:13.578659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.634 [2024-12-05 12:14:13.578672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.634 [2024-12-05 12:14:13.578678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.634 [2024-12-05 12:14:13.578684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.634 [2024-12-05 12:14:13.578699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.634 qpair failed and we were unable to recover it. 
00:30:39.634 [2024-12-05 12:14:13.588698] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.634 [2024-12-05 12:14:13.588806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.634 [2024-12-05 12:14:13.588820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.634 [2024-12-05 12:14:13.588826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.634 [2024-12-05 12:14:13.588832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.634 [2024-12-05 12:14:13.588846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.634 qpair failed and we were unable to recover it. 
00:30:39.634 [2024-12-05 12:14:13.598746] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.634 [2024-12-05 12:14:13.598797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.634 [2024-12-05 12:14:13.598810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.634 [2024-12-05 12:14:13.598817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.634 [2024-12-05 12:14:13.598823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.634 [2024-12-05 12:14:13.598837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.634 qpair failed and we were unable to recover it. 
00:30:39.634 [2024-12-05 12:14:13.608788] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.634 [2024-12-05 12:14:13.608842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.634 [2024-12-05 12:14:13.608855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.634 [2024-12-05 12:14:13.608861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.634 [2024-12-05 12:14:13.608867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.634 [2024-12-05 12:14:13.608882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.634 qpair failed and we were unable to recover it. 
00:30:39.634 [2024-12-05 12:14:13.618774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.634 [2024-12-05 12:14:13.618831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.634 [2024-12-05 12:14:13.618845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.634 [2024-12-05 12:14:13.618852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.634 [2024-12-05 12:14:13.618857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.634 [2024-12-05 12:14:13.618872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.634 qpair failed and we were unable to recover it. 
00:30:39.634 [2024-12-05 12:14:13.628791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.634 [2024-12-05 12:14:13.628842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.634 [2024-12-05 12:14:13.628855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.634 [2024-12-05 12:14:13.628865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.634 [2024-12-05 12:14:13.628870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.634 [2024-12-05 12:14:13.628885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.634 qpair failed and we were unable to recover it. 
00:30:39.634 [2024-12-05 12:14:13.638817] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.634 [2024-12-05 12:14:13.638873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.634 [2024-12-05 12:14:13.638886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.634 [2024-12-05 12:14:13.638892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.634 [2024-12-05 12:14:13.638898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.634 [2024-12-05 12:14:13.638912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.634 qpair failed and we were unable to recover it. 
00:30:39.634 [2024-12-05 12:14:13.648857] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.634 [2024-12-05 12:14:13.648910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.634 [2024-12-05 12:14:13.648923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.634 [2024-12-05 12:14:13.648930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.634 [2024-12-05 12:14:13.648936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.634 [2024-12-05 12:14:13.648950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.634 qpair failed and we were unable to recover it. 
00:30:39.634 [2024-12-05 12:14:13.658886] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.634 [2024-12-05 12:14:13.658940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.634 [2024-12-05 12:14:13.658952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.634 [2024-12-05 12:14:13.658959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.634 [2024-12-05 12:14:13.658965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.634 [2024-12-05 12:14:13.658979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.634 qpair failed and we were unable to recover it. 
00:30:39.634 [2024-12-05 12:14:13.668908] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.634 [2024-12-05 12:14:13.669009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.634 [2024-12-05 12:14:13.669022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.634 [2024-12-05 12:14:13.669028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.634 [2024-12-05 12:14:13.669034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.634 [2024-12-05 12:14:13.669051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.634 qpair failed and we were unable to recover it. 
00:30:39.634 [2024-12-05 12:14:13.679006] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.634 [2024-12-05 12:14:13.679066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.634 [2024-12-05 12:14:13.679080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.634 [2024-12-05 12:14:13.679087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.634 [2024-12-05 12:14:13.679093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.634 [2024-12-05 12:14:13.679108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.634 qpair failed and we were unable to recover it. 
00:30:39.634 [2024-12-05 12:14:13.689014] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.634 [2024-12-05 12:14:13.689072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.634 [2024-12-05 12:14:13.689085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.634 [2024-12-05 12:14:13.689092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.635 [2024-12-05 12:14:13.689098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.635 [2024-12-05 12:14:13.689112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.635 qpair failed and we were unable to recover it. 
00:30:39.635 [2024-12-05 12:14:13.699007] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.635 [2024-12-05 12:14:13.699060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.635 [2024-12-05 12:14:13.699073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.635 [2024-12-05 12:14:13.699080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.635 [2024-12-05 12:14:13.699086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.635 [2024-12-05 12:14:13.699100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.635 qpair failed and we were unable to recover it. 
00:30:39.635 [2024-12-05 12:14:13.709057] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.635 [2024-12-05 12:14:13.709109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.635 [2024-12-05 12:14:13.709121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.635 [2024-12-05 12:14:13.709128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.635 [2024-12-05 12:14:13.709134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.635 [2024-12-05 12:14:13.709148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.635 qpair failed and we were unable to recover it. 
00:30:39.635 [2024-12-05 12:14:13.719046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.635 [2024-12-05 12:14:13.719100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.635 [2024-12-05 12:14:13.719113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.635 [2024-12-05 12:14:13.719120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.635 [2024-12-05 12:14:13.719126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.635 [2024-12-05 12:14:13.719141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.635 qpair failed and we were unable to recover it. 
00:30:39.635 [2024-12-05 12:14:13.729088] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.635 [2024-12-05 12:14:13.729140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.635 [2024-12-05 12:14:13.729153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.635 [2024-12-05 12:14:13.729160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.635 [2024-12-05 12:14:13.729166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.635 [2024-12-05 12:14:13.729180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.635 qpair failed and we were unable to recover it. 
00:30:39.635 [2024-12-05 12:14:13.739108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.635 [2024-12-05 12:14:13.739166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.635 [2024-12-05 12:14:13.739178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.635 [2024-12-05 12:14:13.739185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.635 [2024-12-05 12:14:13.739191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.635 [2024-12-05 12:14:13.739205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.635 qpair failed and we were unable to recover it. 
00:30:39.635 [2024-12-05 12:14:13.749139] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.635 [2024-12-05 12:14:13.749203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.635 [2024-12-05 12:14:13.749217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.635 [2024-12-05 12:14:13.749223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.635 [2024-12-05 12:14:13.749229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.635 [2024-12-05 12:14:13.749244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.635 qpair failed and we were unable to recover it. 
00:30:39.635 [2024-12-05 12:14:13.759169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.635 [2024-12-05 12:14:13.759239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.635 [2024-12-05 12:14:13.759255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.635 [2024-12-05 12:14:13.759261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.635 [2024-12-05 12:14:13.759267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.635 [2024-12-05 12:14:13.759281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.635 qpair failed and we were unable to recover it. 
00:30:39.635 [2024-12-05 12:14:13.769205] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.635 [2024-12-05 12:14:13.769260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.635 [2024-12-05 12:14:13.769273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.635 [2024-12-05 12:14:13.769279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.635 [2024-12-05 12:14:13.769285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.635 [2024-12-05 12:14:13.769299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.635 qpair failed and we were unable to recover it. 
00:30:39.635 [2024-12-05 12:14:13.779226] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.635 [2024-12-05 12:14:13.779327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.635 [2024-12-05 12:14:13.779341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.635 [2024-12-05 12:14:13.779347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.635 [2024-12-05 12:14:13.779353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.635 [2024-12-05 12:14:13.779372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.635 qpair failed and we were unable to recover it. 
00:30:39.635 [2024-12-05 12:14:13.789222] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.635 [2024-12-05 12:14:13.789275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.635 [2024-12-05 12:14:13.789289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.635 [2024-12-05 12:14:13.789295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.635 [2024-12-05 12:14:13.789301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.635 [2024-12-05 12:14:13.789316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.635 qpair failed and we were unable to recover it. 
00:30:39.635 [2024-12-05 12:14:13.799282] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.635 [2024-12-05 12:14:13.799338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.635 [2024-12-05 12:14:13.799351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.635 [2024-12-05 12:14:13.799357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.635 [2024-12-05 12:14:13.799363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.635 [2024-12-05 12:14:13.799385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.635 qpair failed and we were unable to recover it. 
00:30:39.635 [2024-12-05 12:14:13.809309] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.635 [2024-12-05 12:14:13.809458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.635 [2024-12-05 12:14:13.809482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.635 [2024-12-05 12:14:13.809492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.635 [2024-12-05 12:14:13.809498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.635 [2024-12-05 12:14:13.809526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.635 qpair failed and we were unable to recover it. 
00:30:39.635 [2024-12-05 12:14:13.819400] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.635 [2024-12-05 12:14:13.819497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.635 [2024-12-05 12:14:13.819510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.635 [2024-12-05 12:14:13.819517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.635 [2024-12-05 12:14:13.819523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.636 [2024-12-05 12:14:13.819538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.636 qpair failed and we were unable to recover it. 
00:30:39.636 [2024-12-05 12:14:13.829373] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.636 [2024-12-05 12:14:13.829429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.636 [2024-12-05 12:14:13.829444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.636 [2024-12-05 12:14:13.829451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.636 [2024-12-05 12:14:13.829458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.636 [2024-12-05 12:14:13.829473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.636 qpair failed and we were unable to recover it. 
00:30:39.895 [2024-12-05 12:14:13.839391] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.895 [2024-12-05 12:14:13.839446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.895 [2024-12-05 12:14:13.839459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.895 [2024-12-05 12:14:13.839466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.895 [2024-12-05 12:14:13.839471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.895 [2024-12-05 12:14:13.839486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.895 qpair failed and we were unable to recover it. 
00:30:39.895 [2024-12-05 12:14:13.849436] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.895 [2024-12-05 12:14:13.849492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.895 [2024-12-05 12:14:13.849505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.895 [2024-12-05 12:14:13.849512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.895 [2024-12-05 12:14:13.849518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.895 [2024-12-05 12:14:13.849533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.895 qpair failed and we were unable to recover it. 
00:30:39.895 [2024-12-05 12:14:13.859454] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.895 [2024-12-05 12:14:13.859512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.895 [2024-12-05 12:14:13.859526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.895 [2024-12-05 12:14:13.859532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.895 [2024-12-05 12:14:13.859538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.895 [2024-12-05 12:14:13.859553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.895 qpair failed and we were unable to recover it. 
00:30:39.895 [2024-12-05 12:14:13.869484] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.895 [2024-12-05 12:14:13.869537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.895 [2024-12-05 12:14:13.869549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.895 [2024-12-05 12:14:13.869556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.895 [2024-12-05 12:14:13.869562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.895 [2024-12-05 12:14:13.869577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.895 qpair failed and we were unable to recover it. 
00:30:39.895 [2024-12-05 12:14:13.879480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.895 [2024-12-05 12:14:13.879532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.896 [2024-12-05 12:14:13.879545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.896 [2024-12-05 12:14:13.879552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.896 [2024-12-05 12:14:13.879558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.896 [2024-12-05 12:14:13.879573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.896 qpair failed and we were unable to recover it. 
00:30:39.896 [2024-12-05 12:14:13.889547] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.896 [2024-12-05 12:14:13.889617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.896 [2024-12-05 12:14:13.889633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.896 [2024-12-05 12:14:13.889640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.896 [2024-12-05 12:14:13.889645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.896 [2024-12-05 12:14:13.889660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.896 qpair failed and we were unable to recover it. 
00:30:39.896 [2024-12-05 12:14:13.899588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.896 [2024-12-05 12:14:13.899641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.896 [2024-12-05 12:14:13.899653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.896 [2024-12-05 12:14:13.899660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.896 [2024-12-05 12:14:13.899666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.896 [2024-12-05 12:14:13.899680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.896 qpair failed and we were unable to recover it. 
00:30:39.896 [2024-12-05 12:14:13.909604] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.896 [2024-12-05 12:14:13.909657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.896 [2024-12-05 12:14:13.909669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.896 [2024-12-05 12:14:13.909675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.896 [2024-12-05 12:14:13.909681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.896 [2024-12-05 12:14:13.909696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.896 qpair failed and we were unable to recover it. 
00:30:39.896 [2024-12-05 12:14:13.919622] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.896 [2024-12-05 12:14:13.919676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.896 [2024-12-05 12:14:13.919689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.896 [2024-12-05 12:14:13.919695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.896 [2024-12-05 12:14:13.919701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.896 [2024-12-05 12:14:13.919715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.896 qpair failed and we were unable to recover it. 
00:30:39.896 [2024-12-05 12:14:13.929677] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.896 [2024-12-05 12:14:13.929736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.896 [2024-12-05 12:14:13.929750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.896 [2024-12-05 12:14:13.929757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.896 [2024-12-05 12:14:13.929767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:39.896 [2024-12-05 12:14:13.929782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.896 qpair failed and we were unable to recover it. 
00:30:39.896 [2024-12-05 12:14:13.939689] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.896 [2024-12-05 12:14:13.939741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.896 [2024-12-05 12:14:13.939754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.896 [2024-12-05 12:14:13.939761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.896 [2024-12-05 12:14:13.939767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:39.896 [2024-12-05 12:14:13.939782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:39.896 qpair failed and we were unable to recover it.
00:30:39.896 [2024-12-05 12:14:13.949707] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.896 [2024-12-05 12:14:13.949761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.896 [2024-12-05 12:14:13.949774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.896 [2024-12-05 12:14:13.949781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.896 [2024-12-05 12:14:13.949787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:39.896 [2024-12-05 12:14:13.949802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:39.896 qpair failed and we were unable to recover it.
00:30:39.896 [2024-12-05 12:14:13.959734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.896 [2024-12-05 12:14:13.959781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.896 [2024-12-05 12:14:13.959794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.896 [2024-12-05 12:14:13.959800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.896 [2024-12-05 12:14:13.959806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:39.896 [2024-12-05 12:14:13.959821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:39.896 qpair failed and we were unable to recover it.
00:30:39.896 [2024-12-05 12:14:13.969766] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.896 [2024-12-05 12:14:13.969823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.896 [2024-12-05 12:14:13.969835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.896 [2024-12-05 12:14:13.969842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.896 [2024-12-05 12:14:13.969848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:39.896 [2024-12-05 12:14:13.969862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:39.896 qpair failed and we were unable to recover it.
00:30:39.896 [2024-12-05 12:14:13.979794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.896 [2024-12-05 12:14:13.979850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.896 [2024-12-05 12:14:13.979863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.896 [2024-12-05 12:14:13.979869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.896 [2024-12-05 12:14:13.979876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:39.896 [2024-12-05 12:14:13.979890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:39.896 qpair failed and we were unable to recover it.
00:30:39.896 [2024-12-05 12:14:13.989815] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.896 [2024-12-05 12:14:13.989872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.896 [2024-12-05 12:14:13.989886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.896 [2024-12-05 12:14:13.989892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.896 [2024-12-05 12:14:13.989898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:39.896 [2024-12-05 12:14:13.989912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:39.896 qpair failed and we were unable to recover it.
00:30:39.896 [2024-12-05 12:14:13.999842] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.896 [2024-12-05 12:14:13.999889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.896 [2024-12-05 12:14:13.999903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.896 [2024-12-05 12:14:13.999909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.896 [2024-12-05 12:14:13.999915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:39.896 [2024-12-05 12:14:13.999929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:39.896 qpair failed and we were unable to recover it.
00:30:39.896 [2024-12-05 12:14:14.009853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.897 [2024-12-05 12:14:14.009908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.897 [2024-12-05 12:14:14.009921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.897 [2024-12-05 12:14:14.009927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.897 [2024-12-05 12:14:14.009933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:39.897 [2024-12-05 12:14:14.009947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:39.897 qpair failed and we were unable to recover it.
00:30:39.897 [2024-12-05 12:14:14.019967] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.897 [2024-12-05 12:14:14.020026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.897 [2024-12-05 12:14:14.020042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.897 [2024-12-05 12:14:14.020049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.897 [2024-12-05 12:14:14.020054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:39.897 [2024-12-05 12:14:14.020069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:39.897 qpair failed and we were unable to recover it.
00:30:39.897 [2024-12-05 12:14:14.029924] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.897 [2024-12-05 12:14:14.029975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.897 [2024-12-05 12:14:14.029988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.897 [2024-12-05 12:14:14.029994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.897 [2024-12-05 12:14:14.030000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:39.897 [2024-12-05 12:14:14.030014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:39.897 qpair failed and we were unable to recover it.
00:30:39.897 [2024-12-05 12:14:14.039976] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.897 [2024-12-05 12:14:14.040027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.897 [2024-12-05 12:14:14.040040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.897 [2024-12-05 12:14:14.040046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.897 [2024-12-05 12:14:14.040052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:39.897 [2024-12-05 12:14:14.040067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:39.897 qpair failed and we were unable to recover it.
00:30:39.897 [2024-12-05 12:14:14.049995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.897 [2024-12-05 12:14:14.050049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.897 [2024-12-05 12:14:14.050062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.897 [2024-12-05 12:14:14.050068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.897 [2024-12-05 12:14:14.050075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:39.897 [2024-12-05 12:14:14.050089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:39.897 qpair failed and we were unable to recover it.
00:30:39.897 [2024-12-05 12:14:14.060020] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.897 [2024-12-05 12:14:14.060077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.897 [2024-12-05 12:14:14.060089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.897 [2024-12-05 12:14:14.060098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.897 [2024-12-05 12:14:14.060104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:39.897 [2024-12-05 12:14:14.060118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:39.897 qpair failed and we were unable to recover it.
00:30:39.897 [2024-12-05 12:14:14.070030] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.897 [2024-12-05 12:14:14.070085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.897 [2024-12-05 12:14:14.070098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.897 [2024-12-05 12:14:14.070104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.897 [2024-12-05 12:14:14.070111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:39.897 [2024-12-05 12:14:14.070124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:39.897 qpair failed and we were unable to recover it.
00:30:39.897 [2024-12-05 12:14:14.080064] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.897 [2024-12-05 12:14:14.080113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.897 [2024-12-05 12:14:14.080126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.897 [2024-12-05 12:14:14.080132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.897 [2024-12-05 12:14:14.080137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:39.897 [2024-12-05 12:14:14.080152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:39.897 qpair failed and we were unable to recover it.
00:30:39.897 [2024-12-05 12:14:14.090096] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.897 [2024-12-05 12:14:14.090149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.897 [2024-12-05 12:14:14.090162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.897 [2024-12-05 12:14:14.090168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.897 [2024-12-05 12:14:14.090174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:39.897 [2024-12-05 12:14:14.090187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:39.897 qpair failed and we were unable to recover it.
00:30:40.155 [2024-12-05 12:14:14.100184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.155 [2024-12-05 12:14:14.100237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.155 [2024-12-05 12:14:14.100250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.155 [2024-12-05 12:14:14.100256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.155 [2024-12-05 12:14:14.100262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.155 [2024-12-05 12:14:14.100276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.155 qpair failed and we were unable to recover it.
00:30:40.155 [2024-12-05 12:14:14.110144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.155 [2024-12-05 12:14:14.110195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.155 [2024-12-05 12:14:14.110208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.155 [2024-12-05 12:14:14.110215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.155 [2024-12-05 12:14:14.110221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.155 [2024-12-05 12:14:14.110235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.155 qpair failed and we were unable to recover it.
00:30:40.155 [2024-12-05 12:14:14.120178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.155 [2024-12-05 12:14:14.120230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.155 [2024-12-05 12:14:14.120243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.155 [2024-12-05 12:14:14.120250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.155 [2024-12-05 12:14:14.120256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.155 [2024-12-05 12:14:14.120270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.155 qpair failed and we were unable to recover it.
00:30:40.155 [2024-12-05 12:14:14.130226] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.155 [2024-12-05 12:14:14.130297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.155 [2024-12-05 12:14:14.130310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.155 [2024-12-05 12:14:14.130316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.155 [2024-12-05 12:14:14.130323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.155 [2024-12-05 12:14:14.130337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.155 qpair failed and we were unable to recover it.
00:30:40.155 [2024-12-05 12:14:14.140277] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.155 [2024-12-05 12:14:14.140334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.155 [2024-12-05 12:14:14.140347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.155 [2024-12-05 12:14:14.140354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.155 [2024-12-05 12:14:14.140360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.155 [2024-12-05 12:14:14.140384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.155 qpair failed and we were unable to recover it.
00:30:40.155 [2024-12-05 12:14:14.150264] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.155 [2024-12-05 12:14:14.150321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.155 [2024-12-05 12:14:14.150333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.156 [2024-12-05 12:14:14.150340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.156 [2024-12-05 12:14:14.150346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.156 [2024-12-05 12:14:14.150360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.156 qpair failed and we were unable to recover it.
00:30:40.156 [2024-12-05 12:14:14.160286] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.156 [2024-12-05 12:14:14.160337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.156 [2024-12-05 12:14:14.160349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.156 [2024-12-05 12:14:14.160356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.156 [2024-12-05 12:14:14.160362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.156 [2024-12-05 12:14:14.160381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.156 qpair failed and we were unable to recover it.
00:30:40.156 [2024-12-05 12:14:14.170319] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.156 [2024-12-05 12:14:14.170379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.156 [2024-12-05 12:14:14.170392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.156 [2024-12-05 12:14:14.170398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.156 [2024-12-05 12:14:14.170404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.156 [2024-12-05 12:14:14.170419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.156 qpair failed and we were unable to recover it.
00:30:40.156 [2024-12-05 12:14:14.180354] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.156 [2024-12-05 12:14:14.180418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.156 [2024-12-05 12:14:14.180432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.156 [2024-12-05 12:14:14.180439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.156 [2024-12-05 12:14:14.180445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.156 [2024-12-05 12:14:14.180460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.156 qpair failed and we were unable to recover it.
00:30:40.156 [2024-12-05 12:14:14.190385] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.156 [2024-12-05 12:14:14.190466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.156 [2024-12-05 12:14:14.190479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.156 [2024-12-05 12:14:14.190488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.156 [2024-12-05 12:14:14.190494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.156 [2024-12-05 12:14:14.190508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.156 qpair failed and we were unable to recover it.
00:30:40.156 [2024-12-05 12:14:14.200402] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.156 [2024-12-05 12:14:14.200457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.156 [2024-12-05 12:14:14.200470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.156 [2024-12-05 12:14:14.200477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.156 [2024-12-05 12:14:14.200483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.156 [2024-12-05 12:14:14.200498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.156 qpair failed and we were unable to recover it.
00:30:40.156 [2024-12-05 12:14:14.210400] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.156 [2024-12-05 12:14:14.210466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.156 [2024-12-05 12:14:14.210480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.156 [2024-12-05 12:14:14.210486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.156 [2024-12-05 12:14:14.210492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.156 [2024-12-05 12:14:14.210507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.156 qpair failed and we were unable to recover it.
00:30:40.156 [2024-12-05 12:14:14.220494] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.156 [2024-12-05 12:14:14.220551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.156 [2024-12-05 12:14:14.220563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.156 [2024-12-05 12:14:14.220569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.156 [2024-12-05 12:14:14.220575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.156 [2024-12-05 12:14:14.220590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.156 qpair failed and we were unable to recover it.
00:30:40.156 [2024-12-05 12:14:14.230485] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.156 [2024-12-05 12:14:14.230537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.156 [2024-12-05 12:14:14.230550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.156 [2024-12-05 12:14:14.230557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.156 [2024-12-05 12:14:14.230563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.156 [2024-12-05 12:14:14.230580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.156 qpair failed and we were unable to recover it.
00:30:40.156 [2024-12-05 12:14:14.240518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.156 [2024-12-05 12:14:14.240575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.156 [2024-12-05 12:14:14.240589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.156 [2024-12-05 12:14:14.240595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.156 [2024-12-05 12:14:14.240602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.156 [2024-12-05 12:14:14.240616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.156 qpair failed and we were unable to recover it.
00:30:40.156 [2024-12-05 12:14:14.250564] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.156 [2024-12-05 12:14:14.250619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.156 [2024-12-05 12:14:14.250632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.156 [2024-12-05 12:14:14.250638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.156 [2024-12-05 12:14:14.250644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.156 [2024-12-05 12:14:14.250658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.156 qpair failed and we were unable to recover it.
00:30:40.156 [2024-12-05 12:14:14.260596] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.157 [2024-12-05 12:14:14.260651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.157 [2024-12-05 12:14:14.260664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.157 [2024-12-05 12:14:14.260670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.157 [2024-12-05 12:14:14.260676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.157 [2024-12-05 12:14:14.260690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.157 qpair failed and we were unable to recover it.
00:30:40.157 [2024-12-05 12:14:14.270622] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.157 [2024-12-05 12:14:14.270673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.157 [2024-12-05 12:14:14.270685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.157 [2024-12-05 12:14:14.270692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.157 [2024-12-05 12:14:14.270698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.157 [2024-12-05 12:14:14.270712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.157 qpair failed and we were unable to recover it.
00:30:40.157 [2024-12-05 12:14:14.280644] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.157 [2024-12-05 12:14:14.280696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.157 [2024-12-05 12:14:14.280709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.157 [2024-12-05 12:14:14.280715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.157 [2024-12-05 12:14:14.280721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.157 [2024-12-05 12:14:14.280736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.157 qpair failed and we were unable to recover it.
00:30:40.157 [2024-12-05 12:14:14.290686] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.157 [2024-12-05 12:14:14.290743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.157 [2024-12-05 12:14:14.290755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.157 [2024-12-05 12:14:14.290762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.157 [2024-12-05 12:14:14.290768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.157 [2024-12-05 12:14:14.290781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.157 qpair failed and we were unable to recover it. 
00:30:40.157 [2024-12-05 12:14:14.300752] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.157 [2024-12-05 12:14:14.300811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.157 [2024-12-05 12:14:14.300824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.157 [2024-12-05 12:14:14.300830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.157 [2024-12-05 12:14:14.300836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.157 [2024-12-05 12:14:14.300850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.157 qpair failed and we were unable to recover it. 
00:30:40.157 [2024-12-05 12:14:14.310748] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.157 [2024-12-05 12:14:14.310797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.157 [2024-12-05 12:14:14.310810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.157 [2024-12-05 12:14:14.310817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.157 [2024-12-05 12:14:14.310823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.157 [2024-12-05 12:14:14.310838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.157 qpair failed and we were unable to recover it. 
00:30:40.157 [2024-12-05 12:14:14.320801] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.157 [2024-12-05 12:14:14.320856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.157 [2024-12-05 12:14:14.320874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.157 [2024-12-05 12:14:14.320881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.157 [2024-12-05 12:14:14.320887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.157 [2024-12-05 12:14:14.320901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.157 qpair failed and we were unable to recover it. 
00:30:40.157 [2024-12-05 12:14:14.330814] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.157 [2024-12-05 12:14:14.330871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.157 [2024-12-05 12:14:14.330883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.157 [2024-12-05 12:14:14.330890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.157 [2024-12-05 12:14:14.330896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.157 [2024-12-05 12:14:14.330910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.157 qpair failed and we were unable to recover it. 
00:30:40.157 [2024-12-05 12:14:14.340774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.157 [2024-12-05 12:14:14.340829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.157 [2024-12-05 12:14:14.340842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.157 [2024-12-05 12:14:14.340849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.157 [2024-12-05 12:14:14.340854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.157 [2024-12-05 12:14:14.340869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.157 qpair failed and we were unable to recover it. 
00:30:40.157 [2024-12-05 12:14:14.350874] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.157 [2024-12-05 12:14:14.350923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.157 [2024-12-05 12:14:14.350935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.157 [2024-12-05 12:14:14.350942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.157 [2024-12-05 12:14:14.350947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.157 [2024-12-05 12:14:14.350962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.157 qpair failed and we were unable to recover it. 
00:30:40.416 [2024-12-05 12:14:14.360828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.416 [2024-12-05 12:14:14.360886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.416 [2024-12-05 12:14:14.360900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.416 [2024-12-05 12:14:14.360906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.416 [2024-12-05 12:14:14.360915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.416 [2024-12-05 12:14:14.360929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.416 qpair failed and we were unable to recover it. 
00:30:40.416 [2024-12-05 12:14:14.370928] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.416 [2024-12-05 12:14:14.370981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.416 [2024-12-05 12:14:14.370994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.416 [2024-12-05 12:14:14.371001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.416 [2024-12-05 12:14:14.371007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.416 [2024-12-05 12:14:14.371021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.416 qpair failed and we were unable to recover it. 
00:30:40.416 [2024-12-05 12:14:14.380929] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.416 [2024-12-05 12:14:14.380980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.416 [2024-12-05 12:14:14.380992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.416 [2024-12-05 12:14:14.380999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.416 [2024-12-05 12:14:14.381004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.416 [2024-12-05 12:14:14.381019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.416 qpair failed and we were unable to recover it. 
00:30:40.416 [2024-12-05 12:14:14.390959] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.416 [2024-12-05 12:14:14.391010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.416 [2024-12-05 12:14:14.391023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.416 [2024-12-05 12:14:14.391030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.416 [2024-12-05 12:14:14.391036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.416 [2024-12-05 12:14:14.391049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.416 qpair failed and we were unable to recover it. 
00:30:40.416 [2024-12-05 12:14:14.401032] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.416 [2024-12-05 12:14:14.401086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.416 [2024-12-05 12:14:14.401099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.416 [2024-12-05 12:14:14.401106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.416 [2024-12-05 12:14:14.401112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.416 [2024-12-05 12:14:14.401126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.416 qpair failed and we were unable to recover it. 
00:30:40.416 [2024-12-05 12:14:14.410963] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.416 [2024-12-05 12:14:14.411018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.416 [2024-12-05 12:14:14.411031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.416 [2024-12-05 12:14:14.411037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.416 [2024-12-05 12:14:14.411044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.416 [2024-12-05 12:14:14.411058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.416 qpair failed and we were unable to recover it. 
00:30:40.416 [2024-12-05 12:14:14.421039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.416 [2024-12-05 12:14:14.421100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.416 [2024-12-05 12:14:14.421112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.416 [2024-12-05 12:14:14.421119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.416 [2024-12-05 12:14:14.421125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.416 [2024-12-05 12:14:14.421139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.416 qpair failed and we were unable to recover it. 
00:30:40.416 [2024-12-05 12:14:14.431136] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.416 [2024-12-05 12:14:14.431196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.416 [2024-12-05 12:14:14.431209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.416 [2024-12-05 12:14:14.431216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.416 [2024-12-05 12:14:14.431222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.416 [2024-12-05 12:14:14.431236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.416 qpair failed and we were unable to recover it. 
00:30:40.417 [2024-12-05 12:14:14.441129] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.417 [2024-12-05 12:14:14.441194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.417 [2024-12-05 12:14:14.441207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.417 [2024-12-05 12:14:14.441214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.417 [2024-12-05 12:14:14.441220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.417 [2024-12-05 12:14:14.441234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.417 qpair failed and we were unable to recover it. 
00:30:40.417 [2024-12-05 12:14:14.451190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.417 [2024-12-05 12:14:14.451281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.417 [2024-12-05 12:14:14.451298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.417 [2024-12-05 12:14:14.451304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.417 [2024-12-05 12:14:14.451310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.417 [2024-12-05 12:14:14.451325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.417 qpair failed and we were unable to recover it. 
00:30:40.417 [2024-12-05 12:14:14.461173] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.417 [2024-12-05 12:14:14.461226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.417 [2024-12-05 12:14:14.461239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.417 [2024-12-05 12:14:14.461246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.417 [2024-12-05 12:14:14.461251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.417 [2024-12-05 12:14:14.461266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.417 qpair failed and we were unable to recover it. 
00:30:40.417 [2024-12-05 12:14:14.471234] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.417 [2024-12-05 12:14:14.471289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.417 [2024-12-05 12:14:14.471302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.417 [2024-12-05 12:14:14.471308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.417 [2024-12-05 12:14:14.471314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.417 [2024-12-05 12:14:14.471328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.417 qpair failed and we were unable to recover it. 
00:30:40.417 [2024-12-05 12:14:14.481218] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.417 [2024-12-05 12:14:14.481271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.417 [2024-12-05 12:14:14.481284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.417 [2024-12-05 12:14:14.481290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.417 [2024-12-05 12:14:14.481296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.417 [2024-12-05 12:14:14.481310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.417 qpair failed and we were unable to recover it. 
00:30:40.417 [2024-12-05 12:14:14.491175] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.417 [2024-12-05 12:14:14.491235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.417 [2024-12-05 12:14:14.491248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.417 [2024-12-05 12:14:14.491255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.417 [2024-12-05 12:14:14.491263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.417 [2024-12-05 12:14:14.491278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.417 qpair failed and we were unable to recover it. 
00:30:40.417 [2024-12-05 12:14:14.501269] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.417 [2024-12-05 12:14:14.501318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.417 [2024-12-05 12:14:14.501331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.417 [2024-12-05 12:14:14.501337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.417 [2024-12-05 12:14:14.501343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.417 [2024-12-05 12:14:14.501357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.417 qpair failed and we were unable to recover it. 
00:30:40.417 [2024-12-05 12:14:14.511245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.417 [2024-12-05 12:14:14.511300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.417 [2024-12-05 12:14:14.511313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.417 [2024-12-05 12:14:14.511319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.417 [2024-12-05 12:14:14.511325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.417 [2024-12-05 12:14:14.511340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.417 qpair failed and we were unable to recover it. 
00:30:40.417 [2024-12-05 12:14:14.521318] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.417 [2024-12-05 12:14:14.521376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.417 [2024-12-05 12:14:14.521388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.417 [2024-12-05 12:14:14.521395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.417 [2024-12-05 12:14:14.521403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.417 [2024-12-05 12:14:14.521418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.417 qpair failed and we were unable to recover it. 
00:30:40.417 [2024-12-05 12:14:14.531383] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.417 [2024-12-05 12:14:14.531440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.417 [2024-12-05 12:14:14.531454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.417 [2024-12-05 12:14:14.531460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.417 [2024-12-05 12:14:14.531466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.417 [2024-12-05 12:14:14.531481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.418 qpair failed and we were unable to recover it. 
00:30:40.418 [2024-12-05 12:14:14.541387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.418 [2024-12-05 12:14:14.541443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.418 [2024-12-05 12:14:14.541457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.418 [2024-12-05 12:14:14.541464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.418 [2024-12-05 12:14:14.541471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.418 [2024-12-05 12:14:14.541485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.418 qpair failed and we were unable to recover it. 
00:30:40.418 [2024-12-05 12:14:14.551341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.418 [2024-12-05 12:14:14.551399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.418 [2024-12-05 12:14:14.551412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.418 [2024-12-05 12:14:14.551419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.418 [2024-12-05 12:14:14.551425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.418 [2024-12-05 12:14:14.551440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.418 qpair failed and we were unable to recover it. 
00:30:40.418 [2024-12-05 12:14:14.561421] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.418 [2024-12-05 12:14:14.561472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.418 [2024-12-05 12:14:14.561485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.418 [2024-12-05 12:14:14.561491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.418 [2024-12-05 12:14:14.561497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.418 [2024-12-05 12:14:14.561511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.418 qpair failed and we were unable to recover it.
00:30:40.418 [2024-12-05 12:14:14.571453] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.418 [2024-12-05 12:14:14.571506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.418 [2024-12-05 12:14:14.571519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.418 [2024-12-05 12:14:14.571526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.418 [2024-12-05 12:14:14.571532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.418 [2024-12-05 12:14:14.571546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.418 qpair failed and we were unable to recover it.
00:30:40.418 [2024-12-05 12:14:14.581505] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.418 [2024-12-05 12:14:14.581558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.418 [2024-12-05 12:14:14.581574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.418 [2024-12-05 12:14:14.581581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.418 [2024-12-05 12:14:14.581587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.418 [2024-12-05 12:14:14.581602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.418 qpair failed and we were unable to recover it.
00:30:40.418 [2024-12-05 12:14:14.591511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.418 [2024-12-05 12:14:14.591561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.418 [2024-12-05 12:14:14.591574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.418 [2024-12-05 12:14:14.591580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.418 [2024-12-05 12:14:14.591586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.418 [2024-12-05 12:14:14.591600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.418 qpair failed and we were unable to recover it.
00:30:40.418 [2024-12-05 12:14:14.601489] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.418 [2024-12-05 12:14:14.601544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.418 [2024-12-05 12:14:14.601556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.418 [2024-12-05 12:14:14.601563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.418 [2024-12-05 12:14:14.601569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.418 [2024-12-05 12:14:14.601583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.418 qpair failed and we were unable to recover it.
00:30:40.418 [2024-12-05 12:14:14.611617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.418 [2024-12-05 12:14:14.611704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.418 [2024-12-05 12:14:14.611717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.418 [2024-12-05 12:14:14.611723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.418 [2024-12-05 12:14:14.611729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.418 [2024-12-05 12:14:14.611743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.418 qpair failed and we were unable to recover it.
00:30:40.678 [2024-12-05 12:14:14.621654] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.678 [2024-12-05 12:14:14.621704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.678 [2024-12-05 12:14:14.621718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.678 [2024-12-05 12:14:14.621728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.678 [2024-12-05 12:14:14.621734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.678 [2024-12-05 12:14:14.621748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.678 qpair failed and we were unable to recover it.
00:30:40.678 [2024-12-05 12:14:14.631666] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.678 [2024-12-05 12:14:14.631715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.678 [2024-12-05 12:14:14.631728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.678 [2024-12-05 12:14:14.631734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.678 [2024-12-05 12:14:14.631740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.678 [2024-12-05 12:14:14.631755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.678 qpair failed and we were unable to recover it.
00:30:40.678 [2024-12-05 12:14:14.641701] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.678 [2024-12-05 12:14:14.641757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.678 [2024-12-05 12:14:14.641770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.678 [2024-12-05 12:14:14.641777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.678 [2024-12-05 12:14:14.641783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.678 [2024-12-05 12:14:14.641797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.678 qpair failed and we were unable to recover it.
00:30:40.678 [2024-12-05 12:14:14.651706] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.678 [2024-12-05 12:14:14.651764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.678 [2024-12-05 12:14:14.651777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.678 [2024-12-05 12:14:14.651783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.678 [2024-12-05 12:14:14.651789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.678 [2024-12-05 12:14:14.651803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.678 qpair failed and we were unable to recover it.
00:30:40.678 [2024-12-05 12:14:14.661674] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.678 [2024-12-05 12:14:14.661732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.678 [2024-12-05 12:14:14.661745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.678 [2024-12-05 12:14:14.661752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.678 [2024-12-05 12:14:14.661758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.678 [2024-12-05 12:14:14.661772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.678 qpair failed and we were unable to recover it.
00:30:40.678 [2024-12-05 12:14:14.671688] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.678 [2024-12-05 12:14:14.671743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.678 [2024-12-05 12:14:14.671755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.678 [2024-12-05 12:14:14.671762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.678 [2024-12-05 12:14:14.671768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.678 [2024-12-05 12:14:14.671783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.678 qpair failed and we were unable to recover it.
00:30:40.678 [2024-12-05 12:14:14.681780] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.678 [2024-12-05 12:14:14.681842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.678 [2024-12-05 12:14:14.681855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.678 [2024-12-05 12:14:14.681861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.678 [2024-12-05 12:14:14.681867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.678 [2024-12-05 12:14:14.681882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.678 qpair failed and we were unable to recover it.
00:30:40.678 [2024-12-05 12:14:14.691760] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.678 [2024-12-05 12:14:14.691819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.678 [2024-12-05 12:14:14.691832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.678 [2024-12-05 12:14:14.691838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.678 [2024-12-05 12:14:14.691845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.678 [2024-12-05 12:14:14.691859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.678 qpair failed and we were unable to recover it.
00:30:40.678 [2024-12-05 12:14:14.701793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.679 [2024-12-05 12:14:14.701845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.679 [2024-12-05 12:14:14.701858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.679 [2024-12-05 12:14:14.701864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.679 [2024-12-05 12:14:14.701871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.679 [2024-12-05 12:14:14.701886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.679 qpair failed and we were unable to recover it.
00:30:40.679 [2024-12-05 12:14:14.711816] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.679 [2024-12-05 12:14:14.711870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.679 [2024-12-05 12:14:14.711883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.679 [2024-12-05 12:14:14.711890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.679 [2024-12-05 12:14:14.711896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.679 [2024-12-05 12:14:14.711910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.679 qpair failed and we were unable to recover it.
00:30:40.679 [2024-12-05 12:14:14.721897] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.679 [2024-12-05 12:14:14.721951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.679 [2024-12-05 12:14:14.721964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.679 [2024-12-05 12:14:14.721970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.679 [2024-12-05 12:14:14.721976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.679 [2024-12-05 12:14:14.721990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.679 qpair failed and we were unable to recover it.
00:30:40.679 [2024-12-05 12:14:14.731927] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.679 [2024-12-05 12:14:14.731986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.679 [2024-12-05 12:14:14.732002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.679 [2024-12-05 12:14:14.732010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.679 [2024-12-05 12:14:14.732017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.679 [2024-12-05 12:14:14.732034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.679 qpair failed and we were unable to recover it.
00:30:40.679 [2024-12-05 12:14:14.741979] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.679 [2024-12-05 12:14:14.742033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.679 [2024-12-05 12:14:14.742045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.679 [2024-12-05 12:14:14.742052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.679 [2024-12-05 12:14:14.742058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.679 [2024-12-05 12:14:14.742072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.679 qpair failed and we were unable to recover it.
00:30:40.679 [2024-12-05 12:14:14.751983] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.679 [2024-12-05 12:14:14.752037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.679 [2024-12-05 12:14:14.752050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.679 [2024-12-05 12:14:14.752059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.679 [2024-12-05 12:14:14.752065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.679 [2024-12-05 12:14:14.752079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.679 qpair failed and we were unable to recover it.
00:30:40.679 [2024-12-05 12:14:14.762049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.679 [2024-12-05 12:14:14.762103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.679 [2024-12-05 12:14:14.762115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.679 [2024-12-05 12:14:14.762122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.679 [2024-12-05 12:14:14.762128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.679 [2024-12-05 12:14:14.762142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.679 qpair failed and we were unable to recover it.
00:30:40.679 [2024-12-05 12:14:14.772056] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.679 [2024-12-05 12:14:14.772110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.679 [2024-12-05 12:14:14.772123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.679 [2024-12-05 12:14:14.772130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.679 [2024-12-05 12:14:14.772135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.679 [2024-12-05 12:14:14.772150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.679 qpair failed and we were unable to recover it.
00:30:40.679 [2024-12-05 12:14:14.782072] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.679 [2024-12-05 12:14:14.782126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.679 [2024-12-05 12:14:14.782139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.679 [2024-12-05 12:14:14.782145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.679 [2024-12-05 12:14:14.782151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.679 [2024-12-05 12:14:14.782165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.679 qpair failed and we were unable to recover it.
00:30:40.679 [2024-12-05 12:14:14.792190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.679 [2024-12-05 12:14:14.792271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.679 [2024-12-05 12:14:14.792284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.679 [2024-12-05 12:14:14.792290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.679 [2024-12-05 12:14:14.792296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.679 [2024-12-05 12:14:14.792313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.679 qpair failed and we were unable to recover it.
00:30:40.679 [2024-12-05 12:14:14.802130] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.679 [2024-12-05 12:14:14.802182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.679 [2024-12-05 12:14:14.802195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.679 [2024-12-05 12:14:14.802201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.679 [2024-12-05 12:14:14.802207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.679 [2024-12-05 12:14:14.802221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.679 qpair failed and we were unable to recover it.
00:30:40.679 [2024-12-05 12:14:14.812167] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.679 [2024-12-05 12:14:14.812221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.679 [2024-12-05 12:14:14.812233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.679 [2024-12-05 12:14:14.812240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.679 [2024-12-05 12:14:14.812246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.679 [2024-12-05 12:14:14.812259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.679 qpair failed and we were unable to recover it.
00:30:40.679 [2024-12-05 12:14:14.822184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.679 [2024-12-05 12:14:14.822242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.679 [2024-12-05 12:14:14.822254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.679 [2024-12-05 12:14:14.822261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.679 [2024-12-05 12:14:14.822267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.679 [2024-12-05 12:14:14.822281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.679 qpair failed and we were unable to recover it.
00:30:40.679 [2024-12-05 12:14:14.832175] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.680 [2024-12-05 12:14:14.832269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.680 [2024-12-05 12:14:14.832281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.680 [2024-12-05 12:14:14.832288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.680 [2024-12-05 12:14:14.832294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.680 [2024-12-05 12:14:14.832308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.680 qpair failed and we were unable to recover it.
00:30:40.680 [2024-12-05 12:14:14.842287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.680 [2024-12-05 12:14:14.842340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.680 [2024-12-05 12:14:14.842353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.680 [2024-12-05 12:14:14.842359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.680 [2024-12-05 12:14:14.842365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.680 [2024-12-05 12:14:14.842385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.680 qpair failed and we were unable to recover it.
00:30:40.680 [2024-12-05 12:14:14.852285] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.680 [2024-12-05 12:14:14.852345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.680 [2024-12-05 12:14:14.852358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.680 [2024-12-05 12:14:14.852365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.680 [2024-12-05 12:14:14.852375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.680 [2024-12-05 12:14:14.852389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.680 qpair failed and we were unable to recover it.
00:30:40.680 [2024-12-05 12:14:14.862304] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.680 [2024-12-05 12:14:14.862357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.680 [2024-12-05 12:14:14.862376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.680 [2024-12-05 12:14:14.862382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.680 [2024-12-05 12:14:14.862389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.680 [2024-12-05 12:14:14.862405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.680 qpair failed and we were unable to recover it.
00:30:40.680 [2024-12-05 12:14:14.872330] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.680 [2024-12-05 12:14:14.872384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.680 [2024-12-05 12:14:14.872396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.680 [2024-12-05 12:14:14.872403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.680 [2024-12-05 12:14:14.872409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.680 [2024-12-05 12:14:14.872423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.680 qpair failed and we were unable to recover it.
00:30:40.940 [2024-12-05 12:14:14.882346] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.940 [2024-12-05 12:14:14.882395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.940 [2024-12-05 12:14:14.882411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.940 [2024-12-05 12:14:14.882417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.940 [2024-12-05 12:14:14.882423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.940 [2024-12-05 12:14:14.882437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.940 qpair failed and we were unable to recover it.
00:30:40.940 [2024-12-05 12:14:14.892404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.940 [2024-12-05 12:14:14.892461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.940 [2024-12-05 12:14:14.892473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.940 [2024-12-05 12:14:14.892480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.940 [2024-12-05 12:14:14.892486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.940 [2024-12-05 12:14:14.892500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.940 qpair failed and we were unable to recover it.
00:30:40.940 [2024-12-05 12:14:14.902420] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:40.940 [2024-12-05 12:14:14.902475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:40.940 [2024-12-05 12:14:14.902488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:40.940 [2024-12-05 12:14:14.902495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:40.940 [2024-12-05 12:14:14.902501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:40.940 [2024-12-05 12:14:14.902515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:40.940 qpair failed and we were unable to recover it.
00:30:40.940 [2024-12-05 12:14:14.912434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.940 [2024-12-05 12:14:14.912489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.940 [2024-12-05 12:14:14.912501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.940 [2024-12-05 12:14:14.912508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.940 [2024-12-05 12:14:14.912514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.940 [2024-12-05 12:14:14.912528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.940 qpair failed and we were unable to recover it. 
00:30:40.940 [2024-12-05 12:14:14.922470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.940 [2024-12-05 12:14:14.922521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.940 [2024-12-05 12:14:14.922533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.940 [2024-12-05 12:14:14.922540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.940 [2024-12-05 12:14:14.922549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.940 [2024-12-05 12:14:14.922563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.940 qpair failed and we were unable to recover it. 
00:30:40.940 [2024-12-05 12:14:14.932545] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.940 [2024-12-05 12:14:14.932618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.940 [2024-12-05 12:14:14.932631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.940 [2024-12-05 12:14:14.932638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.940 [2024-12-05 12:14:14.932644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.940 [2024-12-05 12:14:14.932658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.940 qpair failed and we were unable to recover it. 
00:30:40.940 [2024-12-05 12:14:14.942547] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.940 [2024-12-05 12:14:14.942602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.940 [2024-12-05 12:14:14.942615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.940 [2024-12-05 12:14:14.942621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.940 [2024-12-05 12:14:14.942627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.940 [2024-12-05 12:14:14.942641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.940 qpair failed and we were unable to recover it. 
00:30:40.940 [2024-12-05 12:14:14.952560] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.940 [2024-12-05 12:14:14.952613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.940 [2024-12-05 12:14:14.952626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.940 [2024-12-05 12:14:14.952632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.940 [2024-12-05 12:14:14.952638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.940 [2024-12-05 12:14:14.952653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.940 qpair failed and we were unable to recover it. 
00:30:40.940 [2024-12-05 12:14:14.962611] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.940 [2024-12-05 12:14:14.962660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.940 [2024-12-05 12:14:14.962673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.940 [2024-12-05 12:14:14.962679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.940 [2024-12-05 12:14:14.962686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.940 [2024-12-05 12:14:14.962700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.940 qpair failed and we were unable to recover it. 
00:30:40.940 [2024-12-05 12:14:14.972624] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.940 [2024-12-05 12:14:14.972677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.940 [2024-12-05 12:14:14.972690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.940 [2024-12-05 12:14:14.972697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.940 [2024-12-05 12:14:14.972703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.940 [2024-12-05 12:14:14.972717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.940 qpair failed and we were unable to recover it. 
00:30:40.940 [2024-12-05 12:14:14.982648] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.940 [2024-12-05 12:14:14.982700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.940 [2024-12-05 12:14:14.982713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.940 [2024-12-05 12:14:14.982719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.940 [2024-12-05 12:14:14.982725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.940 [2024-12-05 12:14:14.982739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.940 qpair failed and we were unable to recover it. 
00:30:40.940 [2024-12-05 12:14:14.992671] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.940 [2024-12-05 12:14:14.992717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.940 [2024-12-05 12:14:14.992730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.940 [2024-12-05 12:14:14.992736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.940 [2024-12-05 12:14:14.992742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.940 [2024-12-05 12:14:14.992757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.940 qpair failed and we were unable to recover it. 
00:30:40.940 [2024-12-05 12:14:15.002705] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.940 [2024-12-05 12:14:15.002757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.940 [2024-12-05 12:14:15.002770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.940 [2024-12-05 12:14:15.002776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.941 [2024-12-05 12:14:15.002782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.941 [2024-12-05 12:14:15.002795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.941 qpair failed and we were unable to recover it. 
00:30:40.941 [2024-12-05 12:14:15.012737] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.941 [2024-12-05 12:14:15.012789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.941 [2024-12-05 12:14:15.012807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.941 [2024-12-05 12:14:15.012813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.941 [2024-12-05 12:14:15.012819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.941 [2024-12-05 12:14:15.012833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.941 qpair failed and we were unable to recover it. 
00:30:40.941 [2024-12-05 12:14:15.022693] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.941 [2024-12-05 12:14:15.022746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.941 [2024-12-05 12:14:15.022759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.941 [2024-12-05 12:14:15.022765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.941 [2024-12-05 12:14:15.022772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.941 [2024-12-05 12:14:15.022786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.941 qpair failed and we were unable to recover it. 
00:30:40.941 [2024-12-05 12:14:15.032765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.941 [2024-12-05 12:14:15.032820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.941 [2024-12-05 12:14:15.032832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.941 [2024-12-05 12:14:15.032839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.941 [2024-12-05 12:14:15.032844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.941 [2024-12-05 12:14:15.032858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.941 qpair failed and we were unable to recover it. 
00:30:40.941 [2024-12-05 12:14:15.042763] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.941 [2024-12-05 12:14:15.042820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.941 [2024-12-05 12:14:15.042832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.941 [2024-12-05 12:14:15.042839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.941 [2024-12-05 12:14:15.042845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.941 [2024-12-05 12:14:15.042859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.941 qpair failed and we were unable to recover it. 
00:30:40.941 [2024-12-05 12:14:15.052850] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.941 [2024-12-05 12:14:15.052903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.941 [2024-12-05 12:14:15.052916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.941 [2024-12-05 12:14:15.052922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.941 [2024-12-05 12:14:15.052931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.941 [2024-12-05 12:14:15.052945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.941 qpair failed and we were unable to recover it. 
00:30:40.941 [2024-12-05 12:14:15.062906] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.941 [2024-12-05 12:14:15.062957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.941 [2024-12-05 12:14:15.062969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.941 [2024-12-05 12:14:15.062975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.941 [2024-12-05 12:14:15.062981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.941 [2024-12-05 12:14:15.062996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.941 qpair failed and we were unable to recover it. 
00:30:40.941 [2024-12-05 12:14:15.072903] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.941 [2024-12-05 12:14:15.072955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.941 [2024-12-05 12:14:15.072967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.941 [2024-12-05 12:14:15.072974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.941 [2024-12-05 12:14:15.072980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.941 [2024-12-05 12:14:15.072994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.941 qpair failed and we were unable to recover it. 
00:30:40.941 [2024-12-05 12:14:15.082923] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.941 [2024-12-05 12:14:15.082977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.941 [2024-12-05 12:14:15.082990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.941 [2024-12-05 12:14:15.082996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.941 [2024-12-05 12:14:15.083002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.941 [2024-12-05 12:14:15.083016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.941 qpair failed and we were unable to recover it. 
00:30:40.941 [2024-12-05 12:14:15.092965] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.941 [2024-12-05 12:14:15.093020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.941 [2024-12-05 12:14:15.093033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.941 [2024-12-05 12:14:15.093040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.941 [2024-12-05 12:14:15.093046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.941 [2024-12-05 12:14:15.093060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.941 qpair failed and we were unable to recover it. 
00:30:40.941 [2024-12-05 12:14:15.102992] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.941 [2024-12-05 12:14:15.103048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.941 [2024-12-05 12:14:15.103060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.941 [2024-12-05 12:14:15.103067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.941 [2024-12-05 12:14:15.103073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.941 [2024-12-05 12:14:15.103087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.941 qpair failed and we were unable to recover it. 
00:30:40.941 [2024-12-05 12:14:15.113043] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.941 [2024-12-05 12:14:15.113096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.941 [2024-12-05 12:14:15.113108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.941 [2024-12-05 12:14:15.113114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.941 [2024-12-05 12:14:15.113120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.941 [2024-12-05 12:14:15.113134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.941 qpair failed and we were unable to recover it. 
00:30:40.941 [2024-12-05 12:14:15.123041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.941 [2024-12-05 12:14:15.123095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.941 [2024-12-05 12:14:15.123108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.941 [2024-12-05 12:14:15.123114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.941 [2024-12-05 12:14:15.123120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.941 [2024-12-05 12:14:15.123134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.941 qpair failed and we were unable to recover it. 
00:30:40.941 [2024-12-05 12:14:15.133133] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.941 [2024-12-05 12:14:15.133234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.941 [2024-12-05 12:14:15.133247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.941 [2024-12-05 12:14:15.133253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.942 [2024-12-05 12:14:15.133259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:40.942 [2024-12-05 12:14:15.133274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:40.942 qpair failed and we were unable to recover it. 
00:30:41.200 [2024-12-05 12:14:15.143106] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.200 [2024-12-05 12:14:15.143162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.200 [2024-12-05 12:14:15.143178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.200 [2024-12-05 12:14:15.143185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.200 [2024-12-05 12:14:15.143191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.200 [2024-12-05 12:14:15.143205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.200 qpair failed and we were unable to recover it. 
00:30:41.200 [2024-12-05 12:14:15.153122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.200 [2024-12-05 12:14:15.153186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.200 [2024-12-05 12:14:15.153199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.200 [2024-12-05 12:14:15.153206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.200 [2024-12-05 12:14:15.153212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.200 [2024-12-05 12:14:15.153226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.200 qpair failed and we were unable to recover it. 
00:30:41.200 [2024-12-05 12:14:15.163158] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.200 [2024-12-05 12:14:15.163210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.200 [2024-12-05 12:14:15.163223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.200 [2024-12-05 12:14:15.163229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.200 [2024-12-05 12:14:15.163235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.200 [2024-12-05 12:14:15.163250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.200 qpair failed and we were unable to recover it. 
00:30:41.200 [2024-12-05 12:14:15.173186] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.200 [2024-12-05 12:14:15.173240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.200 [2024-12-05 12:14:15.173253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.200 [2024-12-05 12:14:15.173260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.200 [2024-12-05 12:14:15.173265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.200 [2024-12-05 12:14:15.173279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.200 qpair failed and we were unable to recover it. 
00:30:41.200 [2024-12-05 12:14:15.183235] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.200 [2024-12-05 12:14:15.183291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.200 [2024-12-05 12:14:15.183305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.200 [2024-12-05 12:14:15.183314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.200 [2024-12-05 12:14:15.183320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.200 [2024-12-05 12:14:15.183334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.200 qpair failed and we were unable to recover it. 
00:30:41.200 [2024-12-05 12:14:15.193284] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.200 [2024-12-05 12:14:15.193342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.200 [2024-12-05 12:14:15.193355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.200 [2024-12-05 12:14:15.193362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.200 [2024-12-05 12:14:15.193372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.200 [2024-12-05 12:14:15.193388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.200 qpair failed and we were unable to recover it. 
00:30:41.200 [2024-12-05 12:14:15.203221] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.200 [2024-12-05 12:14:15.203272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.200 [2024-12-05 12:14:15.203285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.200 [2024-12-05 12:14:15.203291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.200 [2024-12-05 12:14:15.203297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.200 [2024-12-05 12:14:15.203311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.200 qpair failed and we were unable to recover it. 
00:30:41.200 [2024-12-05 12:14:15.213288] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.200 [2024-12-05 12:14:15.213343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.200 [2024-12-05 12:14:15.213356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.200 [2024-12-05 12:14:15.213362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.200 [2024-12-05 12:14:15.213372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.200 [2024-12-05 12:14:15.213386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.200 qpair failed and we were unable to recover it. 
00:30:41.200 [2024-12-05 12:14:15.223340] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.200 [2024-12-05 12:14:15.223434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.200 [2024-12-05 12:14:15.223447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.200 [2024-12-05 12:14:15.223454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.200 [2024-12-05 12:14:15.223459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.200 [2024-12-05 12:14:15.223474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.200 qpair failed and we were unable to recover it. 
00:30:41.200 [2024-12-05 12:14:15.233385] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.200 [2024-12-05 12:14:15.233453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.200 [2024-12-05 12:14:15.233465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.200 [2024-12-05 12:14:15.233471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.200 [2024-12-05 12:14:15.233477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.200 [2024-12-05 12:14:15.233492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.200 qpair failed and we were unable to recover it. 
00:30:41.200 [2024-12-05 12:14:15.243391] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.200 [2024-12-05 12:14:15.243443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.200 [2024-12-05 12:14:15.243456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.200 [2024-12-05 12:14:15.243462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.200 [2024-12-05 12:14:15.243468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.200 [2024-12-05 12:14:15.243483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.200 qpair failed and we were unable to recover it. 
00:30:41.200 [2024-12-05 12:14:15.253430] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.200 [2024-12-05 12:14:15.253485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.200 [2024-12-05 12:14:15.253497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.200 [2024-12-05 12:14:15.253503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.200 [2024-12-05 12:14:15.253509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.201 [2024-12-05 12:14:15.253524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.201 qpair failed and we were unable to recover it. 
00:30:41.201 [2024-12-05 12:14:15.263460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.201 [2024-12-05 12:14:15.263520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.201 [2024-12-05 12:14:15.263532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.201 [2024-12-05 12:14:15.263539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.201 [2024-12-05 12:14:15.263545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.201 [2024-12-05 12:14:15.263559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.201 qpair failed and we were unable to recover it. 
00:30:41.201 [2024-12-05 12:14:15.273475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.201 [2024-12-05 12:14:15.273530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.201 [2024-12-05 12:14:15.273542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.201 [2024-12-05 12:14:15.273549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.201 [2024-12-05 12:14:15.273555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.201 [2024-12-05 12:14:15.273569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.201 qpair failed and we were unable to recover it. 
00:30:41.201 [2024-12-05 12:14:15.283515] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.201 [2024-12-05 12:14:15.283568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.201 [2024-12-05 12:14:15.283581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.201 [2024-12-05 12:14:15.283587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.201 [2024-12-05 12:14:15.283593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.201 [2024-12-05 12:14:15.283608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.201 qpair failed and we were unable to recover it. 
00:30:41.201 [2024-12-05 12:14:15.293552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.201 [2024-12-05 12:14:15.293603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.201 [2024-12-05 12:14:15.293616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.201 [2024-12-05 12:14:15.293622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.201 [2024-12-05 12:14:15.293628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.201 [2024-12-05 12:14:15.293642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.201 qpair failed and we were unable to recover it. 
00:30:41.201 [2024-12-05 12:14:15.303603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.201 [2024-12-05 12:14:15.303653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.201 [2024-12-05 12:14:15.303666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.201 [2024-12-05 12:14:15.303672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.201 [2024-12-05 12:14:15.303678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.201 [2024-12-05 12:14:15.303692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.201 qpair failed and we were unable to recover it. 
00:30:41.201 [2024-12-05 12:14:15.313588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.201 [2024-12-05 12:14:15.313637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.201 [2024-12-05 12:14:15.313650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.201 [2024-12-05 12:14:15.313659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.201 [2024-12-05 12:14:15.313665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.201 [2024-12-05 12:14:15.313680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.201 qpair failed and we were unable to recover it. 
00:30:41.201 [2024-12-05 12:14:15.323618] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.201 [2024-12-05 12:14:15.323670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.201 [2024-12-05 12:14:15.323683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.201 [2024-12-05 12:14:15.323689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.201 [2024-12-05 12:14:15.323695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.201 [2024-12-05 12:14:15.323709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.201 qpair failed and we were unable to recover it. 
00:30:41.201 [2024-12-05 12:14:15.333663] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.201 [2024-12-05 12:14:15.333734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.201 [2024-12-05 12:14:15.333747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.201 [2024-12-05 12:14:15.333754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.201 [2024-12-05 12:14:15.333759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.201 [2024-12-05 12:14:15.333774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.201 qpair failed and we were unable to recover it. 
00:30:41.201 [2024-12-05 12:14:15.343680] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.201 [2024-12-05 12:14:15.343732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.201 [2024-12-05 12:14:15.343744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.201 [2024-12-05 12:14:15.343751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.201 [2024-12-05 12:14:15.343756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.201 [2024-12-05 12:14:15.343771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.201 qpair failed and we were unable to recover it. 
00:30:41.201 [2024-12-05 12:14:15.353720] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.201 [2024-12-05 12:14:15.353774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.201 [2024-12-05 12:14:15.353787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.201 [2024-12-05 12:14:15.353793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.201 [2024-12-05 12:14:15.353799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.201 [2024-12-05 12:14:15.353816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.201 qpair failed and we were unable to recover it. 
00:30:41.201 [2024-12-05 12:14:15.363740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.201 [2024-12-05 12:14:15.363793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.201 [2024-12-05 12:14:15.363805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.201 [2024-12-05 12:14:15.363812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.201 [2024-12-05 12:14:15.363818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.201 [2024-12-05 12:14:15.363831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.201 qpair failed and we were unable to recover it. 
00:30:41.201 [2024-12-05 12:14:15.373775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.201 [2024-12-05 12:14:15.373864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.201 [2024-12-05 12:14:15.373876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.201 [2024-12-05 12:14:15.373882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.201 [2024-12-05 12:14:15.373888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.201 [2024-12-05 12:14:15.373902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.201 qpair failed and we were unable to recover it. 
00:30:41.201 [2024-12-05 12:14:15.383798] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.201 [2024-12-05 12:14:15.383852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.201 [2024-12-05 12:14:15.383864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.201 [2024-12-05 12:14:15.383871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.201 [2024-12-05 12:14:15.383876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.201 [2024-12-05 12:14:15.383891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.201 qpair failed and we were unable to recover it. 
00:30:41.201 [2024-12-05 12:14:15.393799] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.201 [2024-12-05 12:14:15.393888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.201 [2024-12-05 12:14:15.393901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.201 [2024-12-05 12:14:15.393907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.201 [2024-12-05 12:14:15.393913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.201 [2024-12-05 12:14:15.393927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.201 qpair failed and we were unable to recover it. 
00:30:41.459 [2024-12-05 12:14:15.403851] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.459 [2024-12-05 12:14:15.403904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.459 [2024-12-05 12:14:15.403916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.459 [2024-12-05 12:14:15.403922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.459 [2024-12-05 12:14:15.403928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.459 [2024-12-05 12:14:15.403942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.459 qpair failed and we were unable to recover it. 
00:30:41.459 [2024-12-05 12:14:15.413889] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.459 [2024-12-05 12:14:15.413943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.459 [2024-12-05 12:14:15.413956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.459 [2024-12-05 12:14:15.413963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.459 [2024-12-05 12:14:15.413969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.459 [2024-12-05 12:14:15.413983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.459 qpair failed and we were unable to recover it. 
00:30:41.459 [2024-12-05 12:14:15.423919] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.460 [2024-12-05 12:14:15.423966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.460 [2024-12-05 12:14:15.423980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.460 [2024-12-05 12:14:15.423986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.460 [2024-12-05 12:14:15.423992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.460 [2024-12-05 12:14:15.424006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.460 qpair failed and we were unable to recover it. 
00:30:41.460 [2024-12-05 12:14:15.433976] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.460 [2024-12-05 12:14:15.434033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.460 [2024-12-05 12:14:15.434046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.460 [2024-12-05 12:14:15.434053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.460 [2024-12-05 12:14:15.434059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.460 [2024-12-05 12:14:15.434073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.460 qpair failed and we were unable to recover it. 
00:30:41.460 [2024-12-05 12:14:15.443999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.460 [2024-12-05 12:14:15.444106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.460 [2024-12-05 12:14:15.444122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.460 [2024-12-05 12:14:15.444129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.460 [2024-12-05 12:14:15.444135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.460 [2024-12-05 12:14:15.444149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.460 qpair failed and we were unable to recover it. 
00:30:41.460 [2024-12-05 12:14:15.454011] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.460 [2024-12-05 12:14:15.454067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.460 [2024-12-05 12:14:15.454080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.460 [2024-12-05 12:14:15.454086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.460 [2024-12-05 12:14:15.454092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.460 [2024-12-05 12:14:15.454106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.460 qpair failed and we were unable to recover it. 
00:30:41.460 [2024-12-05 12:14:15.464051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.460 [2024-12-05 12:14:15.464104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.460 [2024-12-05 12:14:15.464117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.460 [2024-12-05 12:14:15.464123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.460 [2024-12-05 12:14:15.464129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.460 [2024-12-05 12:14:15.464144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.460 qpair failed and we were unable to recover it. 
00:30:41.460 [2024-12-05 12:14:15.474064] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.460 [2024-12-05 12:14:15.474118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.460 [2024-12-05 12:14:15.474130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.460 [2024-12-05 12:14:15.474136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.460 [2024-12-05 12:14:15.474142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.460 [2024-12-05 12:14:15.474157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.460 qpair failed and we were unable to recover it. 
00:30:41.460 [2024-12-05 12:14:15.484085] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.460 [2024-12-05 12:14:15.484132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.460 [2024-12-05 12:14:15.484145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.460 [2024-12-05 12:14:15.484152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.460 [2024-12-05 12:14:15.484164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.460 [2024-12-05 12:14:15.484179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.460 qpair failed and we were unable to recover it. 
00:30:41.460 [2024-12-05 12:14:15.494041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.460 [2024-12-05 12:14:15.494094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.460 [2024-12-05 12:14:15.494107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.460 [2024-12-05 12:14:15.494113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.460 [2024-12-05 12:14:15.494119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:41.460 [2024-12-05 12:14:15.494133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:41.460 qpair failed and we were unable to recover it.
00:30:41.460 [2024-12-05 12:14:15.504159] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.460 [2024-12-05 12:14:15.504213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.460 [2024-12-05 12:14:15.504226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.460 [2024-12-05 12:14:15.504232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.460 [2024-12-05 12:14:15.504238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:41.460 [2024-12-05 12:14:15.504252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:41.460 qpair failed and we were unable to recover it.
00:30:41.460 [2024-12-05 12:14:15.514185] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.460 [2024-12-05 12:14:15.514234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.460 [2024-12-05 12:14:15.514247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.460 [2024-12-05 12:14:15.514254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.460 [2024-12-05 12:14:15.514260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:41.460 [2024-12-05 12:14:15.514275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:41.460 qpair failed and we were unable to recover it.
00:30:41.460 [2024-12-05 12:14:15.524212] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.460 [2024-12-05 12:14:15.524264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.460 [2024-12-05 12:14:15.524277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.460 [2024-12-05 12:14:15.524284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.460 [2024-12-05 12:14:15.524290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:41.460 [2024-12-05 12:14:15.524304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:41.460 qpair failed and we were unable to recover it.
00:30:41.460 [2024-12-05 12:14:15.534242] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.460 [2024-12-05 12:14:15.534295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.460 [2024-12-05 12:14:15.534308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.460 [2024-12-05 12:14:15.534314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.460 [2024-12-05 12:14:15.534320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:41.460 [2024-12-05 12:14:15.534335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:41.460 qpair failed and we were unable to recover it.
00:30:41.460 [2024-12-05 12:14:15.544274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.460 [2024-12-05 12:14:15.544322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.460 [2024-12-05 12:14:15.544335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.460 [2024-12-05 12:14:15.544342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.460 [2024-12-05 12:14:15.544347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:41.460 [2024-12-05 12:14:15.544361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:41.461 qpair failed and we were unable to recover it.
00:30:41.461 [2024-12-05 12:14:15.554332] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.461 [2024-12-05 12:14:15.554386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.461 [2024-12-05 12:14:15.554398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.461 [2024-12-05 12:14:15.554405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.461 [2024-12-05 12:14:15.554411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:41.461 [2024-12-05 12:14:15.554425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:41.461 qpair failed and we were unable to recover it.
00:30:41.461 [2024-12-05 12:14:15.564408] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.461 [2024-12-05 12:14:15.564490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.461 [2024-12-05 12:14:15.564503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.461 [2024-12-05 12:14:15.564509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.461 [2024-12-05 12:14:15.564515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:41.461 [2024-12-05 12:14:15.564529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:41.461 qpair failed and we were unable to recover it.
00:30:41.461 [2024-12-05 12:14:15.574377] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.461 [2024-12-05 12:14:15.574436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.461 [2024-12-05 12:14:15.574452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.461 [2024-12-05 12:14:15.574459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.461 [2024-12-05 12:14:15.574464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:41.461 [2024-12-05 12:14:15.574478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:41.461 qpair failed and we were unable to recover it.
00:30:41.461 [2024-12-05 12:14:15.584405] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.461 [2024-12-05 12:14:15.584467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.461 [2024-12-05 12:14:15.584480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.461 [2024-12-05 12:14:15.584486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.461 [2024-12-05 12:14:15.584492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:41.461 [2024-12-05 12:14:15.584507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:41.461 qpair failed and we were unable to recover it.
00:30:41.461 [2024-12-05 12:14:15.594402] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.461 [2024-12-05 12:14:15.594458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.461 [2024-12-05 12:14:15.594470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.461 [2024-12-05 12:14:15.594477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.461 [2024-12-05 12:14:15.594483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:41.461 [2024-12-05 12:14:15.594497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:41.461 qpair failed and we were unable to recover it.
00:30:41.461 [2024-12-05 12:14:15.604427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.461 [2024-12-05 12:14:15.604481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.461 [2024-12-05 12:14:15.604494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.461 [2024-12-05 12:14:15.604501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.461 [2024-12-05 12:14:15.604507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:41.461 [2024-12-05 12:14:15.604522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:41.461 qpair failed and we were unable to recover it.
00:30:41.461 [2024-12-05 12:14:15.614503] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.461 [2024-12-05 12:14:15.614560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.461 [2024-12-05 12:14:15.614574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.461 [2024-12-05 12:14:15.614581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.461 [2024-12-05 12:14:15.614590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:41.461 [2024-12-05 12:14:15.614605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:41.461 qpair failed and we were unable to recover it.
00:30:41.461 [2024-12-05 12:14:15.624496] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.461 [2024-12-05 12:14:15.624553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.461 [2024-12-05 12:14:15.624567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.461 [2024-12-05 12:14:15.624574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.461 [2024-12-05 12:14:15.624580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:41.461 [2024-12-05 12:14:15.624594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:41.461 qpair failed and we were unable to recover it.
00:30:41.461 [2024-12-05 12:14:15.634526] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.461 [2024-12-05 12:14:15.634579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.461 [2024-12-05 12:14:15.634592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.461 [2024-12-05 12:14:15.634599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.461 [2024-12-05 12:14:15.634605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:41.461 [2024-12-05 12:14:15.634620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:41.461 qpair failed and we were unable to recover it.
00:30:41.461 [2024-12-05 12:14:15.644517] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.461 [2024-12-05 12:14:15.644611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.461 [2024-12-05 12:14:15.644625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.461 [2024-12-05 12:14:15.644631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.461 [2024-12-05 12:14:15.644637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:41.461 [2024-12-05 12:14:15.644651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:41.461 qpair failed and we were unable to recover it.
00:30:41.461 [2024-12-05 12:14:15.654588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.461 [2024-12-05 12:14:15.654643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.461 [2024-12-05 12:14:15.654656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.461 [2024-12-05 12:14:15.654662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.461 [2024-12-05 12:14:15.654668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:41.461 [2024-12-05 12:14:15.654682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:41.461 qpair failed and we were unable to recover it.
00:30:41.720 [2024-12-05 12:14:15.664616] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.720 [2024-12-05 12:14:15.664668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.720 [2024-12-05 12:14:15.664681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.720 [2024-12-05 12:14:15.664687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.720 [2024-12-05 12:14:15.664694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:41.720 [2024-12-05 12:14:15.664708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:41.720 qpair failed and we were unable to recover it.
00:30:41.720 [2024-12-05 12:14:15.674580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.720 [2024-12-05 12:14:15.674633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.720 [2024-12-05 12:14:15.674646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.720 [2024-12-05 12:14:15.674653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.720 [2024-12-05 12:14:15.674659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:41.720 [2024-12-05 12:14:15.674673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:41.720 qpair failed and we were unable to recover it.
00:30:41.720 [2024-12-05 12:14:15.684691] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.720 [2024-12-05 12:14:15.684746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.720 [2024-12-05 12:14:15.684760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.720 [2024-12-05 12:14:15.684767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.720 [2024-12-05 12:14:15.684773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:41.720 [2024-12-05 12:14:15.684788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:41.720 qpair failed and we were unable to recover it.
00:30:41.720 [2024-12-05 12:14:15.694693] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.720 [2024-12-05 12:14:15.694748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.720 [2024-12-05 12:14:15.694761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.720 [2024-12-05 12:14:15.694768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.720 [2024-12-05 12:14:15.694774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:41.720 [2024-12-05 12:14:15.694788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:41.720 qpair failed and we were unable to recover it.
00:30:41.720 [2024-12-05 12:14:15.704740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.720 [2024-12-05 12:14:15.704799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.720 [2024-12-05 12:14:15.704818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.720 [2024-12-05 12:14:15.704824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.720 [2024-12-05 12:14:15.704830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:41.720 [2024-12-05 12:14:15.704845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:41.720 qpair failed and we were unable to recover it.
00:30:41.720 [2024-12-05 12:14:15.714775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.720 [2024-12-05 12:14:15.714850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.720 [2024-12-05 12:14:15.714863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.720 [2024-12-05 12:14:15.714870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.720 [2024-12-05 12:14:15.714876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:41.720 [2024-12-05 12:14:15.714890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:41.720 qpair failed and we were unable to recover it.
00:30:41.720 [2024-12-05 12:14:15.724806] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.720 [2024-12-05 12:14:15.724859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.720 [2024-12-05 12:14:15.724872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.720 [2024-12-05 12:14:15.724878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.720 [2024-12-05 12:14:15.724885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:41.720 [2024-12-05 12:14:15.724899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:41.720 qpair failed and we were unable to recover it.
00:30:41.720 [2024-12-05 12:14:15.734790] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.720 [2024-12-05 12:14:15.734874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.720 [2024-12-05 12:14:15.734887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.720 [2024-12-05 12:14:15.734893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.720 [2024-12-05 12:14:15.734899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:41.720 [2024-12-05 12:14:15.734913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:41.720 qpair failed and we were unable to recover it.
00:30:41.720 [2024-12-05 12:14:15.744876] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.720 [2024-12-05 12:14:15.744986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.720 [2024-12-05 12:14:15.744999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.720 [2024-12-05 12:14:15.745008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.720 [2024-12-05 12:14:15.745014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:41.720 [2024-12-05 12:14:15.745029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:41.720 qpair failed and we were unable to recover it.
00:30:41.720 [2024-12-05 12:14:15.754878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.720 [2024-12-05 12:14:15.754933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.720 [2024-12-05 12:14:15.754947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.720 [2024-12-05 12:14:15.754954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.720 [2024-12-05 12:14:15.754959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:41.720 [2024-12-05 12:14:15.754974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:41.720 qpair failed and we were unable to recover it.
00:30:41.721 [2024-12-05 12:14:15.764916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.721 [2024-12-05 12:14:15.764970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.721 [2024-12-05 12:14:15.764983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.721 [2024-12-05 12:14:15.764990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.721 [2024-12-05 12:14:15.764996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:41.721 [2024-12-05 12:14:15.765010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:41.721 qpair failed and we were unable to recover it.
00:30:41.721 [2024-12-05 12:14:15.774925] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.721 [2024-12-05 12:14:15.774981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.721 [2024-12-05 12:14:15.774994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.721 [2024-12-05 12:14:15.775001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.721 [2024-12-05 12:14:15.775007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:41.721 [2024-12-05 12:14:15.775022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:41.721 qpair failed and we were unable to recover it.
00:30:41.721 [2024-12-05 12:14:15.784893] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.721 [2024-12-05 12:14:15.784948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.721 [2024-12-05 12:14:15.784961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.721 [2024-12-05 12:14:15.784968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.721 [2024-12-05 12:14:15.784976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:41.721 [2024-12-05 12:14:15.784994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:41.721 qpair failed and we were unable to recover it.
00:30:41.721 [2024-12-05 12:14:15.795003] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.721 [2024-12-05 12:14:15.795055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.721 [2024-12-05 12:14:15.795068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.721 [2024-12-05 12:14:15.795075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.721 [2024-12-05 12:14:15.795081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:41.721 [2024-12-05 12:14:15.795096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:41.721 qpair failed and we were unable to recover it.
00:30:41.721 [2024-12-05 12:14:15.805093] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.721 [2024-12-05 12:14:15.805147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.721 [2024-12-05 12:14:15.805160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.721 [2024-12-05 12:14:15.805167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.721 [2024-12-05 12:14:15.805173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:41.721 [2024-12-05 12:14:15.805187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:41.721 qpair failed and we were unable to recover it.
00:30:41.721 [2024-12-05 12:14:15.815043] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.721 [2024-12-05 12:14:15.815102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.721 [2024-12-05 12:14:15.815115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.721 [2024-12-05 12:14:15.815122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.721 [2024-12-05 12:14:15.815128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:41.721 [2024-12-05 12:14:15.815142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:41.721 qpair failed and we were unable to recover it.
00:30:41.721 [2024-12-05 12:14:15.825069] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.721 [2024-12-05 12:14:15.825121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.721 [2024-12-05 12:14:15.825134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.721 [2024-12-05 12:14:15.825140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.721 [2024-12-05 12:14:15.825146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:41.721 [2024-12-05 12:14:15.825161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:41.721 qpair failed and we were unable to recover it.
00:30:41.721 [2024-12-05 12:14:15.835126] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.721 [2024-12-05 12:14:15.835190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.721 [2024-12-05 12:14:15.835202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.721 [2024-12-05 12:14:15.835209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.721 [2024-12-05 12:14:15.835215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:41.721 [2024-12-05 12:14:15.835229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:41.721 qpair failed and we were unable to recover it.
00:30:41.721 [2024-12-05 12:14:15.845149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.721 [2024-12-05 12:14:15.845200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.721 [2024-12-05 12:14:15.845213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.721 [2024-12-05 12:14:15.845219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.721 [2024-12-05 12:14:15.845225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.721 [2024-12-05 12:14:15.845239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.721 qpair failed and we were unable to recover it. 
00:30:41.721 [2024-12-05 12:14:15.855213] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.721 [2024-12-05 12:14:15.855268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.721 [2024-12-05 12:14:15.855282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.721 [2024-12-05 12:14:15.855288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.721 [2024-12-05 12:14:15.855294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.721 [2024-12-05 12:14:15.855309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.721 qpair failed and we were unable to recover it. 
00:30:41.721 [2024-12-05 12:14:15.865198] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.721 [2024-12-05 12:14:15.865270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.721 [2024-12-05 12:14:15.865284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.721 [2024-12-05 12:14:15.865290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.721 [2024-12-05 12:14:15.865296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.721 [2024-12-05 12:14:15.865310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.721 qpair failed and we were unable to recover it. 
00:30:41.721 [2024-12-05 12:14:15.875160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.721 [2024-12-05 12:14:15.875216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.721 [2024-12-05 12:14:15.875230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.721 [2024-12-05 12:14:15.875240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.721 [2024-12-05 12:14:15.875246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.721 [2024-12-05 12:14:15.875261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.721 qpair failed and we were unable to recover it. 
00:30:41.721 [2024-12-05 12:14:15.885257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.721 [2024-12-05 12:14:15.885310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.721 [2024-12-05 12:14:15.885323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.721 [2024-12-05 12:14:15.885330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.721 [2024-12-05 12:14:15.885335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.721 [2024-12-05 12:14:15.885351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.721 qpair failed and we were unable to recover it. 
00:30:41.721 [2024-12-05 12:14:15.895214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.721 [2024-12-05 12:14:15.895270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.721 [2024-12-05 12:14:15.895284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.721 [2024-12-05 12:14:15.895290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.721 [2024-12-05 12:14:15.895296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.721 [2024-12-05 12:14:15.895311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.721 qpair failed and we were unable to recover it. 
00:30:41.721 [2024-12-05 12:14:15.905346] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.721 [2024-12-05 12:14:15.905404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.721 [2024-12-05 12:14:15.905418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.721 [2024-12-05 12:14:15.905424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.721 [2024-12-05 12:14:15.905430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.721 [2024-12-05 12:14:15.905445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.721 qpair failed and we were unable to recover it. 
00:30:41.721 [2024-12-05 12:14:15.915375] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.721 [2024-12-05 12:14:15.915472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.721 [2024-12-05 12:14:15.915485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.721 [2024-12-05 12:14:15.915492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.721 [2024-12-05 12:14:15.915498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.721 [2024-12-05 12:14:15.915515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.721 qpair failed and we were unable to recover it. 
00:30:41.980 [2024-12-05 12:14:15.925378] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.980 [2024-12-05 12:14:15.925475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.980 [2024-12-05 12:14:15.925488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.980 [2024-12-05 12:14:15.925495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.980 [2024-12-05 12:14:15.925500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.980 [2024-12-05 12:14:15.925514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.980 qpair failed and we were unable to recover it. 
00:30:41.980 [2024-12-05 12:14:15.935434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.980 [2024-12-05 12:14:15.935496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.980 [2024-12-05 12:14:15.935509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.980 [2024-12-05 12:14:15.935515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.980 [2024-12-05 12:14:15.935521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.980 [2024-12-05 12:14:15.935536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.980 qpair failed and we were unable to recover it. 
00:30:41.980 [2024-12-05 12:14:15.945498] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.980 [2024-12-05 12:14:15.945604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.980 [2024-12-05 12:14:15.945616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.980 [2024-12-05 12:14:15.945623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.980 [2024-12-05 12:14:15.945629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.980 [2024-12-05 12:14:15.945643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.980 qpair failed and we were unable to recover it. 
00:30:41.980 [2024-12-05 12:14:15.955390] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.980 [2024-12-05 12:14:15.955442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.980 [2024-12-05 12:14:15.955455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.980 [2024-12-05 12:14:15.955461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.980 [2024-12-05 12:14:15.955467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.980 [2024-12-05 12:14:15.955482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.980 qpair failed and we were unable to recover it. 
00:30:41.980 [2024-12-05 12:14:15.965512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.980 [2024-12-05 12:14:15.965571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.980 [2024-12-05 12:14:15.965584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.980 [2024-12-05 12:14:15.965590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.981 [2024-12-05 12:14:15.965597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.981 [2024-12-05 12:14:15.965611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.981 qpair failed and we were unable to recover it. 
00:30:41.981 [2024-12-05 12:14:15.975498] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.981 [2024-12-05 12:14:15.975559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.981 [2024-12-05 12:14:15.975572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.981 [2024-12-05 12:14:15.975578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.981 [2024-12-05 12:14:15.975584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.981 [2024-12-05 12:14:15.975599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.981 qpair failed and we were unable to recover it. 
00:30:41.981 [2024-12-05 12:14:15.985542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.981 [2024-12-05 12:14:15.985596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.981 [2024-12-05 12:14:15.985608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.981 [2024-12-05 12:14:15.985615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.981 [2024-12-05 12:14:15.985620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.981 [2024-12-05 12:14:15.985635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.981 qpair failed and we were unable to recover it. 
00:30:41.981 [2024-12-05 12:14:15.995532] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.981 [2024-12-05 12:14:15.995594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.981 [2024-12-05 12:14:15.995607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.981 [2024-12-05 12:14:15.995613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.981 [2024-12-05 12:14:15.995619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.981 [2024-12-05 12:14:15.995634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.981 qpair failed and we were unable to recover it. 
00:30:41.981 [2024-12-05 12:14:16.005583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.981 [2024-12-05 12:14:16.005633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.981 [2024-12-05 12:14:16.005649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.981 [2024-12-05 12:14:16.005655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.981 [2024-12-05 12:14:16.005661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.981 [2024-12-05 12:14:16.005676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.981 qpair failed and we were unable to recover it. 
00:30:41.981 [2024-12-05 12:14:16.015642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.981 [2024-12-05 12:14:16.015707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.981 [2024-12-05 12:14:16.015721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.981 [2024-12-05 12:14:16.015727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.981 [2024-12-05 12:14:16.015733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.981 [2024-12-05 12:14:16.015747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.981 qpair failed and we were unable to recover it. 
00:30:41.981 [2024-12-05 12:14:16.025648] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.981 [2024-12-05 12:14:16.025701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.981 [2024-12-05 12:14:16.025715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.981 [2024-12-05 12:14:16.025721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.981 [2024-12-05 12:14:16.025727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.981 [2024-12-05 12:14:16.025741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.981 qpair failed and we were unable to recover it. 
00:30:41.981 [2024-12-05 12:14:16.035688] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.981 [2024-12-05 12:14:16.035737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.981 [2024-12-05 12:14:16.035750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.981 [2024-12-05 12:14:16.035757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.981 [2024-12-05 12:14:16.035763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.981 [2024-12-05 12:14:16.035778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.981 qpair failed and we were unable to recover it. 
00:30:41.981 [2024-12-05 12:14:16.045712] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.981 [2024-12-05 12:14:16.045763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.981 [2024-12-05 12:14:16.045776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.981 [2024-12-05 12:14:16.045782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.981 [2024-12-05 12:14:16.045791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.981 [2024-12-05 12:14:16.045805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.981 qpair failed and we were unable to recover it. 
00:30:41.981 [2024-12-05 12:14:16.055743] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.981 [2024-12-05 12:14:16.055800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.981 [2024-12-05 12:14:16.055813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.981 [2024-12-05 12:14:16.055820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.981 [2024-12-05 12:14:16.055825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.981 [2024-12-05 12:14:16.055839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.981 qpair failed and we were unable to recover it. 
00:30:41.981 [2024-12-05 12:14:16.065783] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.981 [2024-12-05 12:14:16.065855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.981 [2024-12-05 12:14:16.065867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.981 [2024-12-05 12:14:16.065874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.981 [2024-12-05 12:14:16.065880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.981 [2024-12-05 12:14:16.065895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.981 qpair failed and we were unable to recover it. 
00:30:41.981 [2024-12-05 12:14:16.075808] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.981 [2024-12-05 12:14:16.075860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.981 [2024-12-05 12:14:16.075873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.981 [2024-12-05 12:14:16.075879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.981 [2024-12-05 12:14:16.075885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.981 [2024-12-05 12:14:16.075899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.981 qpair failed and we were unable to recover it. 
00:30:41.981 [2024-12-05 12:14:16.085819] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.981 [2024-12-05 12:14:16.085874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.981 [2024-12-05 12:14:16.085886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.981 [2024-12-05 12:14:16.085893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.981 [2024-12-05 12:14:16.085899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.981 [2024-12-05 12:14:16.085913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.981 qpair failed and we were unable to recover it. 
00:30:41.981 [2024-12-05 12:14:16.095875] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.981 [2024-12-05 12:14:16.095931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.981 [2024-12-05 12:14:16.095943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.981 [2024-12-05 12:14:16.095949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.981 [2024-12-05 12:14:16.095956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.982 [2024-12-05 12:14:16.095970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.982 qpair failed and we were unable to recover it. 
00:30:41.982 [2024-12-05 12:14:16.105939] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.982 [2024-12-05 12:14:16.105992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.982 [2024-12-05 12:14:16.106005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.982 [2024-12-05 12:14:16.106011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.982 [2024-12-05 12:14:16.106017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:41.982 [2024-12-05 12:14:16.106031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:41.982 qpair failed and we were unable to recover it. 
00:30:41.982 [2024-12-05 12:14:16.115938] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.982 [2024-12-05 12:14:16.115994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.982 [2024-12-05 12:14:16.116007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.982 [2024-12-05 12:14:16.116014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.982 [2024-12-05 12:14:16.116020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:41.982 [2024-12-05 12:14:16.116034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:41.982 qpair failed and we were unable to recover it.
00:30:41.982 [2024-12-05 12:14:16.125918] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.982 [2024-12-05 12:14:16.125975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.982 [2024-12-05 12:14:16.125987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.982 [2024-12-05 12:14:16.125994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.982 [2024-12-05 12:14:16.126000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:41.982 [2024-12-05 12:14:16.126014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:41.982 qpair failed and we were unable to recover it.
00:30:41.982 [2024-12-05 12:14:16.135975] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.982 [2024-12-05 12:14:16.136031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.982 [2024-12-05 12:14:16.136047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.982 [2024-12-05 12:14:16.136053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.982 [2024-12-05 12:14:16.136059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:41.982 [2024-12-05 12:14:16.136073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:41.982 qpair failed and we were unable to recover it.
00:30:41.982 [2024-12-05 12:14:16.145985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.982 [2024-12-05 12:14:16.146046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.982 [2024-12-05 12:14:16.146060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.982 [2024-12-05 12:14:16.146066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.982 [2024-12-05 12:14:16.146072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:41.982 [2024-12-05 12:14:16.146087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:41.982 qpair failed and we were unable to recover it.
00:30:41.982 [2024-12-05 12:14:16.156030] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.982 [2024-12-05 12:14:16.156083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.982 [2024-12-05 12:14:16.156096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.982 [2024-12-05 12:14:16.156102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.982 [2024-12-05 12:14:16.156108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:41.982 [2024-12-05 12:14:16.156123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:41.982 qpair failed and we were unable to recover it.
00:30:41.982 [2024-12-05 12:14:16.166088] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.982 [2024-12-05 12:14:16.166137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.982 [2024-12-05 12:14:16.166150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.982 [2024-12-05 12:14:16.166156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.982 [2024-12-05 12:14:16.166162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:41.982 [2024-12-05 12:14:16.166176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:41.982 qpair failed and we were unable to recover it.
00:30:41.982 [2024-12-05 12:14:16.176142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:41.982 [2024-12-05 12:14:16.176211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:41.982 [2024-12-05 12:14:16.176224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:41.982 [2024-12-05 12:14:16.176231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:41.982 [2024-12-05 12:14:16.176241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:41.982 [2024-12-05 12:14:16.176255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:41.982 qpair failed and we were unable to recover it.
00:30:42.242 [2024-12-05 12:14:16.186143] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.242 [2024-12-05 12:14:16.186201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.242 [2024-12-05 12:14:16.186214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.242 [2024-12-05 12:14:16.186221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.242 [2024-12-05 12:14:16.186227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:42.242 [2024-12-05 12:14:16.186241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:42.242 qpair failed and we were unable to recover it.
00:30:42.242 [2024-12-05 12:14:16.196132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.242 [2024-12-05 12:14:16.196187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.242 [2024-12-05 12:14:16.196200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.242 [2024-12-05 12:14:16.196206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.242 [2024-12-05 12:14:16.196212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:42.242 [2024-12-05 12:14:16.196226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:42.242 qpair failed and we were unable to recover it.
00:30:42.242 [2024-12-05 12:14:16.206149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.242 [2024-12-05 12:14:16.206204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.242 [2024-12-05 12:14:16.206217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.242 [2024-12-05 12:14:16.206224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.242 [2024-12-05 12:14:16.206230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:42.242 [2024-12-05 12:14:16.206244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:42.242 qpair failed and we were unable to recover it.
00:30:42.242 [2024-12-05 12:14:16.216212] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.242 [2024-12-05 12:14:16.216274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.242 [2024-12-05 12:14:16.216287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.242 [2024-12-05 12:14:16.216293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.242 [2024-12-05 12:14:16.216299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:42.242 [2024-12-05 12:14:16.216313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:42.242 qpair failed and we were unable to recover it.
00:30:42.242 [2024-12-05 12:14:16.226218] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.242 [2024-12-05 12:14:16.226276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.242 [2024-12-05 12:14:16.226289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.242 [2024-12-05 12:14:16.226296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.242 [2024-12-05 12:14:16.226302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:42.242 [2024-12-05 12:14:16.226316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:42.242 qpair failed and we were unable to recover it.
00:30:42.242 [2024-12-05 12:14:16.236288] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.242 [2024-12-05 12:14:16.236345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.242 [2024-12-05 12:14:16.236358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.242 [2024-12-05 12:14:16.236365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.242 [2024-12-05 12:14:16.236375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:42.242 [2024-12-05 12:14:16.236390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:42.242 qpair failed and we were unable to recover it.
00:30:42.242 [2024-12-05 12:14:16.246265] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.242 [2024-12-05 12:14:16.246315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.242 [2024-12-05 12:14:16.246328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.242 [2024-12-05 12:14:16.246335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.242 [2024-12-05 12:14:16.246341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:42.242 [2024-12-05 12:14:16.246356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:42.242 qpair failed and we were unable to recover it.
00:30:42.242 [2024-12-05 12:14:16.256312] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.242 [2024-12-05 12:14:16.256370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.242 [2024-12-05 12:14:16.256383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.242 [2024-12-05 12:14:16.256390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.242 [2024-12-05 12:14:16.256396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:42.242 [2024-12-05 12:14:16.256410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:42.242 qpair failed and we were unable to recover it.
00:30:42.242 [2024-12-05 12:14:16.266332] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.242 [2024-12-05 12:14:16.266391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.242 [2024-12-05 12:14:16.266407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.242 [2024-12-05 12:14:16.266414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.242 [2024-12-05 12:14:16.266419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:42.242 [2024-12-05 12:14:16.266434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:42.242 qpair failed and we were unable to recover it.
00:30:42.242 [2024-12-05 12:14:16.276347] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.242 [2024-12-05 12:14:16.276402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.242 [2024-12-05 12:14:16.276414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.242 [2024-12-05 12:14:16.276421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.242 [2024-12-05 12:14:16.276427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:42.242 [2024-12-05 12:14:16.276441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:42.242 qpair failed and we were unable to recover it.
00:30:42.242 [2024-12-05 12:14:16.286388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.242 [2024-12-05 12:14:16.286442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.242 [2024-12-05 12:14:16.286455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.242 [2024-12-05 12:14:16.286462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.242 [2024-12-05 12:14:16.286468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:42.242 [2024-12-05 12:14:16.286482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:42.242 qpair failed and we were unable to recover it.
00:30:42.242 [2024-12-05 12:14:16.296459] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.242 [2024-12-05 12:14:16.296515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.242 [2024-12-05 12:14:16.296528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.242 [2024-12-05 12:14:16.296535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.242 [2024-12-05 12:14:16.296541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:42.243 [2024-12-05 12:14:16.296555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:42.243 qpair failed and we were unable to recover it.
00:30:42.243 [2024-12-05 12:14:16.306458] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.243 [2024-12-05 12:14:16.306513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.243 [2024-12-05 12:14:16.306525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.243 [2024-12-05 12:14:16.306535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.243 [2024-12-05 12:14:16.306541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:42.243 [2024-12-05 12:14:16.306556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:42.243 qpair failed and we were unable to recover it.
00:30:42.243 [2024-12-05 12:14:16.316467] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.243 [2024-12-05 12:14:16.316521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.243 [2024-12-05 12:14:16.316534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.243 [2024-12-05 12:14:16.316541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.243 [2024-12-05 12:14:16.316547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:42.243 [2024-12-05 12:14:16.316561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:42.243 qpair failed and we were unable to recover it.
00:30:42.243 [2024-12-05 12:14:16.326504] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.243 [2024-12-05 12:14:16.326559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.243 [2024-12-05 12:14:16.326572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.243 [2024-12-05 12:14:16.326579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.243 [2024-12-05 12:14:16.326585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:42.243 [2024-12-05 12:14:16.326599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:42.243 qpair failed and we were unable to recover it.
00:30:42.243 [2024-12-05 12:14:16.336606] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.243 [2024-12-05 12:14:16.336663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.243 [2024-12-05 12:14:16.336675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.243 [2024-12-05 12:14:16.336682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.243 [2024-12-05 12:14:16.336688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:42.243 [2024-12-05 12:14:16.336703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:42.243 qpair failed and we were unable to recover it.
00:30:42.243 [2024-12-05 12:14:16.346584] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.243 [2024-12-05 12:14:16.346639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.243 [2024-12-05 12:14:16.346652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.243 [2024-12-05 12:14:16.346659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.243 [2024-12-05 12:14:16.346665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:42.243 [2024-12-05 12:14:16.346683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:42.243 qpair failed and we were unable to recover it.
00:30:42.243 [2024-12-05 12:14:16.356611] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.243 [2024-12-05 12:14:16.356668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.243 [2024-12-05 12:14:16.356681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.243 [2024-12-05 12:14:16.356687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.243 [2024-12-05 12:14:16.356693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:42.243 [2024-12-05 12:14:16.356708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:42.243 qpair failed and we were unable to recover it.
00:30:42.243 [2024-12-05 12:14:16.366634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.243 [2024-12-05 12:14:16.366692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.243 [2024-12-05 12:14:16.366704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.243 [2024-12-05 12:14:16.366711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.243 [2024-12-05 12:14:16.366717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:42.243 [2024-12-05 12:14:16.366731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:42.243 qpair failed and we were unable to recover it.
00:30:42.243 [2024-12-05 12:14:16.376659] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.243 [2024-12-05 12:14:16.376715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.243 [2024-12-05 12:14:16.376727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.243 [2024-12-05 12:14:16.376734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.243 [2024-12-05 12:14:16.376739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:42.243 [2024-12-05 12:14:16.376753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:42.243 qpair failed and we were unable to recover it.
00:30:42.243 [2024-12-05 12:14:16.386714] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.243 [2024-12-05 12:14:16.386780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.243 [2024-12-05 12:14:16.386793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.243 [2024-12-05 12:14:16.386800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.243 [2024-12-05 12:14:16.386806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:42.243 [2024-12-05 12:14:16.386820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:42.243 qpair failed and we were unable to recover it.
00:30:42.243 [2024-12-05 12:14:16.396714] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.243 [2024-12-05 12:14:16.396767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.243 [2024-12-05 12:14:16.396781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.243 [2024-12-05 12:14:16.396787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.243 [2024-12-05 12:14:16.396793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:42.243 [2024-12-05 12:14:16.396807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:42.243 qpair failed and we were unable to recover it.
00:30:42.243 [2024-12-05 12:14:16.406731] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.243 [2024-12-05 12:14:16.406783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.243 [2024-12-05 12:14:16.406795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.243 [2024-12-05 12:14:16.406802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.243 [2024-12-05 12:14:16.406808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:42.243 [2024-12-05 12:14:16.406821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:42.243 qpair failed and we were unable to recover it.
00:30:42.243 [2024-12-05 12:14:16.416778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.243 [2024-12-05 12:14:16.416850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.243 [2024-12-05 12:14:16.416862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.243 [2024-12-05 12:14:16.416868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.243 [2024-12-05 12:14:16.416874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:42.243 [2024-12-05 12:14:16.416889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:42.243 qpair failed and we were unable to recover it.
00:30:42.243 [2024-12-05 12:14:16.426799] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.243 [2024-12-05 12:14:16.426856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.243 [2024-12-05 12:14:16.426868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.243 [2024-12-05 12:14:16.426875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.243 [2024-12-05 12:14:16.426881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:42.243 [2024-12-05 12:14:16.426895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:42.243 qpair failed and we were unable to recover it.
00:30:42.244 [2024-12-05 12:14:16.436829] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.244 [2024-12-05 12:14:16.436889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.244 [2024-12-05 12:14:16.436902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.244 [2024-12-05 12:14:16.436914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.244 [2024-12-05 12:14:16.436920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:42.244 [2024-12-05 12:14:16.436934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:42.244 qpair failed and we were unable to recover it.
00:30:42.504 [2024-12-05 12:14:16.446839] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.504 [2024-12-05 12:14:16.446892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.504 [2024-12-05 12:14:16.446905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.504 [2024-12-05 12:14:16.446911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.504 [2024-12-05 12:14:16.446917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:42.504 [2024-12-05 12:14:16.446932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:42.504 qpair failed and we were unable to recover it.
00:30:42.504 [2024-12-05 12:14:16.456886] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:42.504 [2024-12-05 12:14:16.456942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:42.504 [2024-12-05 12:14:16.456955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:42.504 [2024-12-05 12:14:16.456961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:42.504 [2024-12-05 12:14:16.456967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90
00:30:42.504 [2024-12-05 12:14:16.456982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:42.504 qpair failed and we were unable to recover it.
00:30:42.504 [2024-12-05 12:14:16.466930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.504 [2024-12-05 12:14:16.466985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.504 [2024-12-05 12:14:16.466997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.504 [2024-12-05 12:14:16.467004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.504 [2024-12-05 12:14:16.467010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:42.504 [2024-12-05 12:14:16.467024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.504 qpair failed and we were unable to recover it. 
00:30:42.504 [2024-12-05 12:14:16.476924] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.504 [2024-12-05 12:14:16.476978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.504 [2024-12-05 12:14:16.476991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.504 [2024-12-05 12:14:16.476997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.504 [2024-12-05 12:14:16.477003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:42.504 [2024-12-05 12:14:16.477021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.504 qpair failed and we were unable to recover it. 
00:30:42.504 [2024-12-05 12:14:16.486945] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.504 [2024-12-05 12:14:16.486996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.504 [2024-12-05 12:14:16.487008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.504 [2024-12-05 12:14:16.487014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.504 [2024-12-05 12:14:16.487021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:42.504 [2024-12-05 12:14:16.487035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.504 qpair failed and we were unable to recover it. 
00:30:42.504 [2024-12-05 12:14:16.496985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.504 [2024-12-05 12:14:16.497039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.504 [2024-12-05 12:14:16.497052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.504 [2024-12-05 12:14:16.497058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.504 [2024-12-05 12:14:16.497065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:42.504 [2024-12-05 12:14:16.497079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.504 qpair failed and we were unable to recover it. 
00:30:42.504 [2024-12-05 12:14:16.506993] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.504 [2024-12-05 12:14:16.507051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.504 [2024-12-05 12:14:16.507063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.504 [2024-12-05 12:14:16.507071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.504 [2024-12-05 12:14:16.507076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:42.504 [2024-12-05 12:14:16.507091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.504 qpair failed and we were unable to recover it. 
00:30:42.504 [2024-12-05 12:14:16.517037] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.504 [2024-12-05 12:14:16.517086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.504 [2024-12-05 12:14:16.517099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.504 [2024-12-05 12:14:16.517105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.504 [2024-12-05 12:14:16.517111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:42.504 [2024-12-05 12:14:16.517125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.504 qpair failed and we were unable to recover it. 
00:30:42.504 [2024-12-05 12:14:16.527069] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.504 [2024-12-05 12:14:16.527117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.504 [2024-12-05 12:14:16.527129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.504 [2024-12-05 12:14:16.527136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.504 [2024-12-05 12:14:16.527142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:42.504 [2024-12-05 12:14:16.527156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.504 qpair failed and we were unable to recover it. 
00:30:42.504 [2024-12-05 12:14:16.537091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.504 [2024-12-05 12:14:16.537148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.504 [2024-12-05 12:14:16.537160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.504 [2024-12-05 12:14:16.537166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.504 [2024-12-05 12:14:16.537172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:42.504 [2024-12-05 12:14:16.537185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.504 qpair failed and we were unable to recover it. 
00:30:42.504 [2024-12-05 12:14:16.547108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.504 [2024-12-05 12:14:16.547161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.504 [2024-12-05 12:14:16.547174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.504 [2024-12-05 12:14:16.547181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.504 [2024-12-05 12:14:16.547187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:42.504 [2024-12-05 12:14:16.547201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.504 qpair failed and we were unable to recover it. 
00:30:42.505 [2024-12-05 12:14:16.557135] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.505 [2024-12-05 12:14:16.557202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.505 [2024-12-05 12:14:16.557215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.505 [2024-12-05 12:14:16.557221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.505 [2024-12-05 12:14:16.557227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:42.505 [2024-12-05 12:14:16.557241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.505 qpair failed and we were unable to recover it. 
00:30:42.505 [2024-12-05 12:14:16.567164] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.505 [2024-12-05 12:14:16.567241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.505 [2024-12-05 12:14:16.567257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.505 [2024-12-05 12:14:16.567264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.505 [2024-12-05 12:14:16.567269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:42.505 [2024-12-05 12:14:16.567284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.505 qpair failed and we were unable to recover it. 
00:30:42.505 [2024-12-05 12:14:16.577208] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.505 [2024-12-05 12:14:16.577265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.505 [2024-12-05 12:14:16.577277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.505 [2024-12-05 12:14:16.577284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.505 [2024-12-05 12:14:16.577290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:42.505 [2024-12-05 12:14:16.577304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.505 qpair failed and we were unable to recover it. 
00:30:42.505 [2024-12-05 12:14:16.587233] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.505 [2024-12-05 12:14:16.587306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.505 [2024-12-05 12:14:16.587319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.505 [2024-12-05 12:14:16.587325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.505 [2024-12-05 12:14:16.587331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:42.505 [2024-12-05 12:14:16.587346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.505 qpair failed and we were unable to recover it. 
00:30:42.505 [2024-12-05 12:14:16.597251] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.505 [2024-12-05 12:14:16.597306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.505 [2024-12-05 12:14:16.597319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.505 [2024-12-05 12:14:16.597325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.505 [2024-12-05 12:14:16.597331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:42.505 [2024-12-05 12:14:16.597346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.505 qpair failed and we were unable to recover it. 
00:30:42.505 [2024-12-05 12:14:16.607292] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.505 [2024-12-05 12:14:16.607344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.505 [2024-12-05 12:14:16.607357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.505 [2024-12-05 12:14:16.607363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.505 [2024-12-05 12:14:16.607375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:42.505 [2024-12-05 12:14:16.607391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.505 qpair failed and we were unable to recover it. 
00:30:42.505 [2024-12-05 12:14:16.617323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.505 [2024-12-05 12:14:16.617386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.505 [2024-12-05 12:14:16.617400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.505 [2024-12-05 12:14:16.617407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.505 [2024-12-05 12:14:16.617414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:42.505 [2024-12-05 12:14:16.617429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.505 qpair failed and we were unable to recover it. 
00:30:42.505 [2024-12-05 12:14:16.627361] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.505 [2024-12-05 12:14:16.627416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.505 [2024-12-05 12:14:16.627429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.505 [2024-12-05 12:14:16.627436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.505 [2024-12-05 12:14:16.627442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:42.505 [2024-12-05 12:14:16.627457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.505 qpair failed and we were unable to recover it. 
00:30:42.505 [2024-12-05 12:14:16.637380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.505 [2024-12-05 12:14:16.637436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.505 [2024-12-05 12:14:16.637450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.505 [2024-12-05 12:14:16.637458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.505 [2024-12-05 12:14:16.637465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc42c000b90 00:30:42.505 [2024-12-05 12:14:16.637479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:42.505 qpair failed and we were unable to recover it. 
00:30:42.505 [2024-12-05 12:14:16.647647] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.505 [2024-12-05 12:14:16.647758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.505 [2024-12-05 12:14:16.647812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.505 [2024-12-05 12:14:16.647838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.505 [2024-12-05 12:14:16.647859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc424000b90 00:30:42.505 [2024-12-05 12:14:16.647910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.505 qpair failed and we were unable to recover it. 
00:30:42.505 [2024-12-05 12:14:16.657441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.505 [2024-12-05 12:14:16.657531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.505 [2024-12-05 12:14:16.657561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.505 [2024-12-05 12:14:16.657578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.505 [2024-12-05 12:14:16.657594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc424000b90 00:30:42.505 [2024-12-05 12:14:16.657626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.505 qpair failed and we were unable to recover it. 
00:30:42.505 [2024-12-05 12:14:16.667538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.505 [2024-12-05 12:14:16.667647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.505 [2024-12-05 12:14:16.667701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.505 [2024-12-05 12:14:16.667727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.505 [2024-12-05 12:14:16.667747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc4cbe0 00:30:42.505 [2024-12-05 12:14:16.667797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.505 qpair failed and we were unable to recover it. 
00:30:42.505 [2024-12-05 12:14:16.677509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.505 [2024-12-05 12:14:16.677586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.505 [2024-12-05 12:14:16.677613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.505 [2024-12-05 12:14:16.677627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.505 [2024-12-05 12:14:16.677641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc4cbe0 00:30:42.505 [2024-12-05 12:14:16.677669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.505 qpair failed and we were unable to recover it. 
00:30:42.505 [2024-12-05 12:14:16.687563] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.506 [2024-12-05 12:14:16.687672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.506 [2024-12-05 12:14:16.687727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.506 [2024-12-05 12:14:16.687753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.506 [2024-12-05 12:14:16.687774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc420000b90 00:30:42.506 [2024-12-05 12:14:16.687825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:42.506 qpair failed and we were unable to recover it. 
00:30:42.506 [2024-12-05 12:14:16.697571] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.506 [2024-12-05 12:14:16.697648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.506 [2024-12-05 12:14:16.697681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.506 [2024-12-05 12:14:16.697696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.506 [2024-12-05 12:14:16.697709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc420000b90 00:30:42.506 [2024-12-05 12:14:16.697740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:42.506 qpair failed and we were unable to recover it. 00:30:42.506 [2024-12-05 12:14:16.697847] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:30:42.506 A controller has encountered a failure and is being reset. 00:30:42.765 Controller properly reset. 00:30:42.765 Initializing NVMe Controllers 00:30:42.765 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:42.765 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:42.765 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:42.765 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:42.765 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:42.765 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:42.765 Initialization complete. Launching workers. 
00:30:42.765 Starting thread on core 1
00:30:42.765 Starting thread on core 2
00:30:42.765 Starting thread on core 3
00:30:42.765 Starting thread on core 0
00:30:42.765 12:14:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:30:42.765
00:30:42.765 real 0m10.732s
00:30:42.765 user 0m19.472s
00:30:42.765 sys 0m4.580s
00:30:42.765 12:14:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:42.765 12:14:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:42.765 ************************************
00:30:42.765 END TEST nvmf_target_disconnect_tc2
00:30:42.765 ************************************
00:30:42.765 12:14:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:30:42.765 12:14:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:30:42.765 12:14:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:30:42.765 12:14:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # nvmfcleanup
00:30:42.765 12:14:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@99 -- # sync
00:30:42.765 12:14:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:30:42.765 12:14:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@102 -- # set +e
00:30:42.765 12:14:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@103 -- # for i in {1..20}
00:30:42.765 12:14:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
00:30:42.766 rmmod nvme_tcp
00:30:42.766 rmmod nvme_fabrics
00:30:42.766 rmmod nvme_keyring
00:30:42.766 12:14:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics
00:30:42.766 12:14:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # set -e
00:30:42.766 12:14:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # return 0
00:30:42.766 12:14:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # '[' -n 223693 ']'
00:30:42.766 12:14:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@337 -- # killprocess 223693
00:30:42.766 12:14:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 223693 ']'
00:30:42.766 12:14:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 223693
00:30:42.766 12:14:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname
00:30:42.766 12:14:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:42.766 12:14:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 223693
00:30:42.766 12:14:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4
00:30:42.766 12:14:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']'
00:30:42.766 12:14:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 223693'
00:30:42.766 killing process with pid 223693
00:30:42.766 12:14:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 223693
00:30:42.766 12:14:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 223693
00:30:43.025 12:14:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@339 -- # '[' '' == iso ']'
00:30:43.025 12:14:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # nvmf_fini
00:30:43.025 12:14:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@264 -- # local dev
00:30:43.025 12:14:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@267 -- # remove_target_ns
00:30:43.025 12:14:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns
00:30:43.025 12:14:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null'
00:30:43.025 12:14:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_target_ns
00:30:44.930 12:14:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@268 -- # delete_main_bridge
00:30:44.930 12:14:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]]
00:30:44.930 12:14:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@130 -- # return 0
00:30:44.930 12:14:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}"
00:30:44.930 12:14:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]]
00:30:44.930 12:14:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@275 -- # (( 4 == 3 ))
00:30:44.930 12:14:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0
00:30:45.189 12:14:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns=
00:30:45.189 12:14:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@222 -- # [[ -n '' ]]
00:30:45.189 12:14:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0'
00:30:45.189 12:14:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0
00:30:45.190 12:14:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}"
00:30:45.190 12:14:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]]
00:30:45.190 12:14:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@275 -- # (( 4 == 3 ))
00:30:45.190 12:14:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1
00:30:45.190 12:14:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns=
00:30:45.190 12:14:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@222 -- # [[ -n '' ]]
00:30:45.190 12:14:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1'
00:30:45.190 12:14:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1
00:30:45.190 12:14:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@283 -- # reset_setup_interfaces
00:30:45.190 12:14:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@41 -- # _dev=0
00:30:45.190 12:14:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@41 -- # dev_map=()
00:30:45.190 12:14:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@284 -- # iptr
00:30:45.190 12:14:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@542 -- # iptables-save
00:30:45.190 12:14:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF
00:30:45.190 12:14:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@542 -- # iptables-restore
00:30:45.190
00:30:45.190 real 0m19.649s
00:30:45.190 user 0m46.995s
00:30:45.190 sys 0m9.561s
00:30:45.190 12:14:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:45.190 12:14:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:30:45.190 ************************************
00:30:45.190 END TEST nvmf_target_disconnect
00:30:45.190 ************************************
00:30:45.190 12:14:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # [[ tcp == \t\c\p ]]
12:14:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@31 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:45.190 12:14:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:45.190 12:14:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:45.190 12:14:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:45.190 ************************************ 00:30:45.190 START TEST nvmf_digest 00:30:45.190 ************************************ 00:30:45.190 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:45.190 * Looking for test storage... 00:30:45.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:45.190 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:45.190 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:30:45.190 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:45.190 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:45.190 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:45.190 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:45.190 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:45.190 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:30:45.190 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:30:45.190 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:30:45.190 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 
-- # read -ra ver2 00:30:45.190 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:30:45.190 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:30:45.190 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:30:45.190 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:45.190 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:30:45.190 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:30:45.190 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:45.190 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:45.190 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:30:45.190 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:30:45.190 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:45.190 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:30:45.190 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:30:45.190 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:30:45.449 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:30:45.449 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:45.449 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:30:45.449 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:30:45.449 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:45.449 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:45.449 12:14:19 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:30:45.449 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:45.449 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:45.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.449 --rc genhtml_branch_coverage=1 00:30:45.449 --rc genhtml_function_coverage=1 00:30:45.449 --rc genhtml_legend=1 00:30:45.449 --rc geninfo_all_blocks=1 00:30:45.449 --rc geninfo_unexecuted_blocks=1 00:30:45.449 00:30:45.449 ' 00:30:45.449 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:45.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.449 --rc genhtml_branch_coverage=1 00:30:45.449 --rc genhtml_function_coverage=1 00:30:45.449 --rc genhtml_legend=1 00:30:45.449 --rc geninfo_all_blocks=1 00:30:45.449 --rc geninfo_unexecuted_blocks=1 00:30:45.449 00:30:45.449 ' 00:30:45.449 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:45.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.449 --rc genhtml_branch_coverage=1 00:30:45.449 --rc genhtml_function_coverage=1 00:30:45.449 --rc genhtml_legend=1 00:30:45.449 --rc geninfo_all_blocks=1 00:30:45.449 --rc geninfo_unexecuted_blocks=1 00:30:45.449 00:30:45.449 ' 00:30:45.449 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:45.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.449 --rc genhtml_branch_coverage=1 00:30:45.449 --rc genhtml_function_coverage=1 00:30:45.449 --rc genhtml_legend=1 00:30:45.449 --rc geninfo_all_blocks=1 00:30:45.450 --rc geninfo_unexecuted_blocks=1 00:30:45.450 00:30:45.450 ' 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:30:45.450 12:14:19 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@50 -- # : 0 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:30:45.450 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@54 -- # have_pci_nics=0 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # prepare_net_devs 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # local -g is_hw=no 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # remove_target_ns 00:30:45.450 
12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_target_ns 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # xtrace_disable 00:30:45.450 12:14:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@131 -- # pci_devs=() 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@131 -- # local -a pci_devs 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@132 -- # pci_net_devs=() 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@133 -- # pci_drivers=() 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@133 -- # local -A pci_drivers 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@135 -- # net_devs=() 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@135 -- # local -ga net_devs 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@136 -- # e810=() 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@136 -- # local -ga e810 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@137 -- # x722=() 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@137 -- # 
local -ga x722 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@138 -- # mlx=() 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@138 -- # local -ga mlx 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:30:52.024 
12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:52.024 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:52.024 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:30:52.024 12:14:25 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # [[ up == up ]] 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:52.024 Found net devices under 0000:86:00.0: cvl_0_0 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # [[ up == up ]] 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:52.024 Found net devices under 0000:86:00.1: cvl_0_1 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # is_hw=yes 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@257 -- # create_target_ns 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:30:52.024 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:52.025 
12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@27 -- # local -gA dev_map 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@28 -- # local -g _dev 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@44 -- # ips=() 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:30:52.025 
12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@11 -- # local val=167772161 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:30:52.025 12:14:25 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:30:52.025 10.0.0.1 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@11 -- # local val=167772162 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:30:52.025 10.0.0.2 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:30:52.025 
12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:30:52.025 
12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@38 -- # ping_ips 1 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@107 -- # local dev=initiator0 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 
00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:30:52.025 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:52.025 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.447 ms 00:30:52.025 00:30:52.025 --- 10.0.0.1 ping statistics --- 00:30:52.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:52.025 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # get_net_dev target0 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/setup.sh@107 -- # local dev=target0 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:30:52.025 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:30:52.026 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:52.026 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:30:52.026 00:30:52.026 --- 10.0.0.2 ping statistics --- 00:30:52.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:52.026 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # (( pair++ )) 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # return 0 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@107 -- # local dev=initiator0 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 
00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@107 -- # local dev=initiator1 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # return 1 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # dev= 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@169 -- # return 0 00:30:52.026 
12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # get_net_dev target0 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@107 -- # local dev=target0 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@337 
-- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # get_net_dev target1 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@107 -- # local dev=target1 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # return 1 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # dev= 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@169 -- # return 0 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 
00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:52.026 ************************************ 00:30:52.026 START TEST nvmf_digest_clean 00:30:52.026 ************************************ 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 
00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@328 -- # nvmfpid=228229 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@329 -- # waitforlisten 228229 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 228229 ']' 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:52.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:52.026 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:52.026 [2024-12-05 12:14:25.577425] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:30:52.027 [2024-12-05 12:14:25.577464] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:52.027 [2024-12-05 12:14:25.656084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.027 [2024-12-05 12:14:25.701419] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:52.027 [2024-12-05 12:14:25.701451] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:52.027 [2024-12-05 12:14:25.701459] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:52.027 [2024-12-05 12:14:25.701464] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:52.027 [2024-12-05 12:14:25.701470] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:52.027 [2024-12-05 12:14:25.702031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:52.027 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:52.027 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:30:52.027 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:30:52.027 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:52.027 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:52.027 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:52.027 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:30:52.027 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:30:52.027 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:30:52.027 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.027 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:52.027 null0 00:30:52.027 [2024-12-05 12:14:25.853286] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:52.027 [2024-12-05 12:14:25.877494] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:52.027 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.027 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:30:52.027 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:52.027 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:52.027 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:30:52.027 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:30:52.027 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:30:52.027 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:52.027 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=228249 00:30:52.027 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 228249 /var/tmp/bperf.sock 00:30:52.027 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:52.027 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 228249 ']' 00:30:52.027 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:52.027 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:52.027 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:52.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:30:52.027 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:52.027 12:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:52.027 [2024-12-05 12:14:25.928526] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:30:52.027 [2024-12-05 12:14:25.928566] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid228249 ] 00:30:52.027 [2024-12-05 12:14:26.001684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.027 [2024-12-05 12:14:26.043673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:52.027 12:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:52.027 12:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:30:52.027 12:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:52.027 12:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:52.027 12:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:52.286 12:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:52.286 12:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:52.543 nvme0n1 00:30:52.543 12:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:52.543 12:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:52.800 Running I/O for 2 seconds... 00:30:54.673 25671.00 IOPS, 100.28 MiB/s [2024-12-05T11:14:28.869Z] 25917.50 IOPS, 101.24 MiB/s 00:30:54.673 Latency(us) 00:30:54.673 [2024-12-05T11:14:28.869Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:54.673 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:54.673 nvme0n1 : 2.00 25933.44 101.30 0.00 0.00 4930.30 2605.84 15666.22 00:30:54.673 [2024-12-05T11:14:28.869Z] =================================================================================================================== 00:30:54.673 [2024-12-05T11:14:28.869Z] Total : 25933.44 101.30 0.00 0.00 4930.30 2605.84 15666.22 00:30:54.673 { 00:30:54.673 "results": [ 00:30:54.673 { 00:30:54.673 "job": "nvme0n1", 00:30:54.673 "core_mask": "0x2", 00:30:54.673 "workload": "randread", 00:30:54.673 "status": "finished", 00:30:54.673 "queue_depth": 128, 00:30:54.673 "io_size": 4096, 00:30:54.673 "runtime": 2.004516, 00:30:54.673 "iops": 25933.442287315243, 00:30:54.673 "mibps": 101.30250893482517, 00:30:54.673 "io_failed": 0, 00:30:54.673 "io_timeout": 0, 00:30:54.673 "avg_latency_us": 4930.297730290639, 00:30:54.673 "min_latency_us": 2605.8361904761905, 00:30:54.673 "max_latency_us": 15666.224761904761 00:30:54.673 } 00:30:54.673 ], 00:30:54.673 "core_count": 1 00:30:54.673 } 00:30:54.673 12:14:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:54.673 12:14:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:30:54.673 12:14:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:54.673 12:14:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:54.673 | select(.opcode=="crc32c") 00:30:54.673 | "\(.module_name) \(.executed)"' 00:30:54.673 12:14:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:54.932 12:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:54.932 12:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:54.932 12:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:54.932 12:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:54.932 12:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 228249 00:30:54.932 12:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 228249 ']' 00:30:54.932 12:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 228249 00:30:54.932 12:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:30:54.932 12:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:54.932 12:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 228249 00:30:54.932 12:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:54.932 12:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:54.932 12:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 228249' 00:30:54.932 killing process with pid 228249 00:30:54.932 12:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 228249 00:30:54.932 Received shutdown signal, test time was about 2.000000 seconds 00:30:54.932 00:30:54.932 Latency(us) 00:30:54.932 [2024-12-05T11:14:29.128Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:54.932 [2024-12-05T11:14:29.128Z] =================================================================================================================== 00:30:54.932 [2024-12-05T11:14:29.128Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:54.932 12:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 228249 00:30:55.191 12:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:30:55.191 12:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:55.191 12:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:55.191 12:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:30:55.191 12:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:30:55.191 12:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:30:55.191 12:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:55.191 12:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=228938 00:30:55.191 12:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:55.191 12:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 228938 /var/tmp/bperf.sock 00:30:55.191 12:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 228938 ']' 00:30:55.191 12:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:55.191 12:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:55.191 12:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:55.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:55.191 12:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:55.191 12:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:55.191 [2024-12-05 12:14:29.292860] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:30:55.191 [2024-12-05 12:14:29.292908] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid228938 ] 00:30:55.191 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:55.191 Zero copy mechanism will not be used. 
00:30:55.191 [2024-12-05 12:14:29.366764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:55.450 [2024-12-05 12:14:29.407600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:55.450 12:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:55.450 12:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:30:55.450 12:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:55.450 12:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:55.450 12:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:55.708 12:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:55.708 12:14:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:55.967 nvme0n1 00:30:55.967 12:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:55.967 12:14:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:55.967 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:55.967 Zero copy mechanism will not be used. 00:30:55.967 Running I/O for 2 seconds... 
00:30:58.279 5992.00 IOPS, 749.00 MiB/s [2024-12-05T11:14:32.475Z] 5990.00 IOPS, 748.75 MiB/s 00:30:58.279 Latency(us) 00:30:58.279 [2024-12-05T11:14:32.475Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:58.279 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:30:58.279 nvme0n1 : 2.00 5985.41 748.18 0.00 0.00 2670.33 612.45 7739.49 00:30:58.279 [2024-12-05T11:14:32.475Z] =================================================================================================================== 00:30:58.279 [2024-12-05T11:14:32.475Z] Total : 5985.41 748.18 0.00 0.00 2670.33 612.45 7739.49 00:30:58.279 { 00:30:58.279 "results": [ 00:30:58.279 { 00:30:58.279 "job": "nvme0n1", 00:30:58.279 "core_mask": "0x2", 00:30:58.279 "workload": "randread", 00:30:58.279 "status": "finished", 00:30:58.279 "queue_depth": 16, 00:30:58.279 "io_size": 131072, 00:30:58.279 "runtime": 2.004208, 00:30:58.279 "iops": 5985.406704294165, 00:30:58.279 "mibps": 748.1758380367706, 00:30:58.279 "io_failed": 0, 00:30:58.279 "io_timeout": 0, 00:30:58.279 "avg_latency_us": 2670.329540958097, 00:30:58.279 "min_latency_us": 612.4495238095238, 00:30:58.279 "max_latency_us": 7739.489523809524 00:30:58.279 } 00:30:58.279 ], 00:30:58.279 "core_count": 1 00:30:58.279 } 00:30:58.279 12:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:58.279 12:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:58.279 12:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:58.279 12:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:58.279 | select(.opcode=="crc32c") 00:30:58.279 | "\(.module_name) \(.executed)"' 00:30:58.279 12:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:58.279 12:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:58.279 12:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:58.279 12:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:58.279 12:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:58.279 12:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 228938 00:30:58.279 12:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 228938 ']' 00:30:58.279 12:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 228938 00:30:58.279 12:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:30:58.279 12:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:58.279 12:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 228938 00:30:58.279 12:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:58.279 12:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:58.279 12:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 228938' 00:30:58.279 killing process with pid 228938 00:30:58.279 12:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 228938 00:30:58.279 Received shutdown signal, test time was about 2.000000 seconds 00:30:58.279 
00:30:58.279 Latency(us) 00:30:58.279 [2024-12-05T11:14:32.475Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:58.279 [2024-12-05T11:14:32.475Z] =================================================================================================================== 00:30:58.279 [2024-12-05T11:14:32.475Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:58.279 12:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 228938 00:30:58.538 12:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:30:58.538 12:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:58.538 12:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:58.538 12:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:30:58.538 12:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:30:58.538 12:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:30:58.538 12:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:58.538 12:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=229412 00:30:58.538 12:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 229412 /var/tmp/bperf.sock 00:30:58.539 12:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:58.539 12:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 229412 ']' 00:30:58.539 12:14:32 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:58.539 12:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:58.539 12:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:58.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:58.539 12:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:58.539 12:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:58.539 [2024-12-05 12:14:32.616556] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:30:58.539 [2024-12-05 12:14:32.616604] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid229412 ] 00:30:58.539 [2024-12-05 12:14:32.691222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:58.539 [2024-12-05 12:14:32.733380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:58.797 12:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:58.797 12:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:30:58.797 12:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:58.797 12:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:58.798 12:14:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:59.056 12:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:59.056 12:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:59.315 nvme0n1 00:30:59.315 12:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:59.315 12:14:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:59.574 Running I/O for 2 seconds... 
00:31:01.449 28584.00 IOPS, 111.66 MiB/s [2024-12-05T11:14:35.645Z] 28358.00 IOPS, 110.77 MiB/s 00:31:01.449 Latency(us) 00:31:01.449 [2024-12-05T11:14:35.645Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:01.449 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:01.449 nvme0n1 : 2.00 28349.48 110.74 0.00 0.00 4507.74 1778.83 9299.87 00:31:01.449 [2024-12-05T11:14:35.645Z] =================================================================================================================== 00:31:01.449 [2024-12-05T11:14:35.645Z] Total : 28349.48 110.74 0.00 0.00 4507.74 1778.83 9299.87 00:31:01.449 { 00:31:01.449 "results": [ 00:31:01.449 { 00:31:01.449 "job": "nvme0n1", 00:31:01.449 "core_mask": "0x2", 00:31:01.449 "workload": "randwrite", 00:31:01.449 "status": "finished", 00:31:01.449 "queue_depth": 128, 00:31:01.449 "io_size": 4096, 00:31:01.449 "runtime": 2.004552, 00:31:01.449 "iops": 28349.47659127825, 00:31:01.449 "mibps": 110.74014293468066, 00:31:01.449 "io_failed": 0, 00:31:01.449 "io_timeout": 0, 00:31:01.449 "avg_latency_us": 4507.743231069861, 00:31:01.449 "min_latency_us": 1778.8342857142857, 00:31:01.449 "max_latency_us": 9299.870476190476 00:31:01.449 } 00:31:01.449 ], 00:31:01.449 "core_count": 1 00:31:01.449 } 00:31:01.449 12:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:31:01.449 12:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:31:01.449 12:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:01.449 12:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:01.449 | select(.opcode=="crc32c") 00:31:01.449 | "\(.module_name) \(.executed)"' 00:31:01.449 12:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:01.773 12:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:31:01.773 12:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:31:01.773 12:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:31:01.773 12:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:01.773 12:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 229412 00:31:01.773 12:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 229412 ']' 00:31:01.773 12:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 229412 00:31:01.773 12:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:31:01.773 12:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:01.773 12:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 229412 00:31:01.773 12:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:01.773 12:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:01.773 12:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 229412' 00:31:01.773 killing process with pid 229412 00:31:01.773 12:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 229412 00:31:01.773 Received shutdown signal, test time was about 2.000000 seconds 00:31:01.773 
00:31:01.773 Latency(us) 00:31:01.773 [2024-12-05T11:14:35.969Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:01.773 [2024-12-05T11:14:35.969Z] =================================================================================================================== 00:31:01.773 [2024-12-05T11:14:35.969Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:01.773 12:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 229412 00:31:02.122 12:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:31:02.122 12:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:31:02.122 12:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:02.122 12:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:31:02.122 12:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:31:02.122 12:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:31:02.122 12:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:31:02.122 12:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=229892 00:31:02.122 12:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 229892 /var/tmp/bperf.sock 00:31:02.122 12:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:31:02.122 12:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 229892 ']' 00:31:02.122 12:14:35 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:02.122 12:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:02.122 12:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:02.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:02.122 12:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:02.122 12:14:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:02.122 [2024-12-05 12:14:36.032487] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:31:02.122 [2024-12-05 12:14:36.032537] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid229892 ] 00:31:02.122 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:02.122 Zero copy mechanism will not be used. 
00:31:02.122 [2024-12-05 12:14:36.109188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:02.122 [2024-12-05 12:14:36.148335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:02.122 12:14:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:02.122 12:14:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:31:02.122 12:14:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:31:02.122 12:14:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:31:02.122 12:14:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:02.381 12:14:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:02.381 12:14:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:02.948 nvme0n1 00:31:02.948 12:14:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:31:02.948 12:14:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:02.948 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:02.948 Zero copy mechanism will not be used. 00:31:02.948 Running I/O for 2 seconds... 
00:31:04.817 6334.00 IOPS, 791.75 MiB/s [2024-12-05T11:14:39.013Z] 6595.50 IOPS, 824.44 MiB/s 00:31:04.817 Latency(us) 00:31:04.817 [2024-12-05T11:14:39.013Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:04.817 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:31:04.817 nvme0n1 : 2.00 6595.07 824.38 0.00 0.00 2422.21 1490.16 8238.81 00:31:04.817 [2024-12-05T11:14:39.013Z] =================================================================================================================== 00:31:04.817 [2024-12-05T11:14:39.013Z] Total : 6595.07 824.38 0.00 0.00 2422.21 1490.16 8238.81 00:31:04.817 { 00:31:04.817 "results": [ 00:31:04.817 { 00:31:04.817 "job": "nvme0n1", 00:31:04.817 "core_mask": "0x2", 00:31:04.817 "workload": "randwrite", 00:31:04.817 "status": "finished", 00:31:04.817 "queue_depth": 16, 00:31:04.817 "io_size": 131072, 00:31:04.817 "runtime": 2.003315, 00:31:04.817 "iops": 6595.068673673386, 00:31:04.817 "mibps": 824.3835842091733, 00:31:04.817 "io_failed": 0, 00:31:04.817 "io_timeout": 0, 00:31:04.817 "avg_latency_us": 2422.2100474316276, 00:31:04.817 "min_latency_us": 1490.1638095238095, 00:31:04.817 "max_latency_us": 8238.81142857143 00:31:04.817 } 00:31:04.817 ], 00:31:04.817 "core_count": 1 00:31:04.817 } 00:31:04.817 12:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:31:04.817 12:14:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:31:04.817 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:04.817 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:04.817 | select(.opcode=="crc32c") 00:31:04.817 | "\(.module_name) \(.executed)"' 00:31:04.817 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:05.075 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:31:05.075 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:31:05.075 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:31:05.075 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:05.075 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 229892 00:31:05.075 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 229892 ']' 00:31:05.075 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 229892 00:31:05.075 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:31:05.075 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:05.075 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 229892 00:31:05.075 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:05.075 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:05.075 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 229892' 00:31:05.075 killing process with pid 229892 00:31:05.075 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 229892 00:31:05.075 Received shutdown signal, test time was about 2.000000 seconds 00:31:05.075 
00:31:05.075 Latency(us) 00:31:05.075 [2024-12-05T11:14:39.271Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:05.075 [2024-12-05T11:14:39.271Z] =================================================================================================================== 00:31:05.075 [2024-12-05T11:14:39.271Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:05.075 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 229892 00:31:05.334 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 228229 00:31:05.334 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 228229 ']' 00:31:05.334 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 228229 00:31:05.334 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:31:05.334 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:05.334 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 228229 00:31:05.334 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:05.334 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:05.334 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 228229' 00:31:05.334 killing process with pid 228229 00:31:05.334 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 228229 00:31:05.334 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 228229 00:31:05.594 00:31:05.594 real 0m14.117s 
00:31:05.594 user 0m26.913s 00:31:05.594 sys 0m4.672s 00:31:05.594 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:05.594 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:05.594 ************************************ 00:31:05.594 END TEST nvmf_digest_clean 00:31:05.594 ************************************ 00:31:05.594 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:31:05.594 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:05.594 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:05.594 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:31:05.594 ************************************ 00:31:05.594 START TEST nvmf_digest_error 00:31:05.594 ************************************ 00:31:05.594 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:31:05.594 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:31:05.594 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:31:05.594 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:05.594 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:05.594 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@328 -- # nvmfpid=230606 00:31:05.594 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@329 -- # waitforlisten 230606 00:31:05.594 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:31:05.594 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 230606 ']' 00:31:05.594 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:05.594 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:05.594 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:05.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:05.594 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:05.594 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:05.594 [2024-12-05 12:14:39.766196] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:31:05.594 [2024-12-05 12:14:39.766237] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:05.853 [2024-12-05 12:14:39.845150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:05.853 [2024-12-05 12:14:39.885353] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:05.853 [2024-12-05 12:14:39.885391] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:05.853 [2024-12-05 12:14:39.885398] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:05.853 [2024-12-05 12:14:39.885404] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:05.853 [2024-12-05 12:14:39.885409] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:05.853 [2024-12-05 12:14:39.885940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:05.853 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:05.853 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:31:05.853 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:31:05.853 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:05.853 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:05.853 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:05.853 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:31:05.853 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.853 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:05.853 [2024-12-05 12:14:39.954384] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:31:05.853 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.854 12:14:39 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:31:05.854 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:31:05.854 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.854 12:14:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:05.854 null0 00:31:06.113 [2024-12-05 12:14:40.052288] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:06.113 [2024-12-05 12:14:40.076429] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:06.113 12:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.113 12:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:31:06.113 12:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:31:06.113 12:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:31:06.113 12:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:31:06.113 12:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:31:06.113 12:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=230625 00:31:06.113 12:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 230625 /var/tmp/bperf.sock 00:31:06.113 12:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:31:06.113 12:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 230625 ']' 
00:31:06.113 12:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:06.113 12:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:06.113 12:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:06.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:06.113 12:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:06.113 12:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:06.113 [2024-12-05 12:14:40.127656] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:31:06.113 [2024-12-05 12:14:40.127698] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid230625 ] 00:31:06.113 [2024-12-05 12:14:40.201505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:06.113 [2024-12-05 12:14:40.242029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:06.372 12:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:06.372 12:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:31:06.372 12:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:06.372 12:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:06.372 12:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:06.372 12:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.372 12:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:06.372 12:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.372 12:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:06.372 12:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:06.631 nvme0n1 00:31:06.631 12:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:31:06.631 12:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.631 12:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:06.631 12:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.631 12:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:06.631 12:14:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:06.891 Running I/O for 2 seconds... 00:31:06.891 [2024-12-05 12:14:40.922251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:06.891 [2024-12-05 12:14:40.922285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.891 [2024-12-05 12:14:40.922296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:06.891 [2024-12-05 12:14:40.930776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:06.891 [2024-12-05 12:14:40.930799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.891 [2024-12-05 12:14:40.930808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:06.891 [2024-12-05 12:14:40.943059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:06.891 [2024-12-05 12:14:40.943081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.891 [2024-12-05 12:14:40.943090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:06.891 [2024-12-05 12:14:40.954350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:06.891 [2024-12-05 12:14:40.954379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15384 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.891 [2024-12-05 12:14:40.954388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:06.891 [2024-12-05 12:14:40.962515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:06.891 [2024-12-05 12:14:40.962536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.891 [2024-12-05 12:14:40.962544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:06.891 [2024-12-05 12:14:40.972042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:06.891 [2024-12-05 12:14:40.972062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.891 [2024-12-05 12:14:40.972074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:06.891 [2024-12-05 12:14:40.982326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:06.891 [2024-12-05 12:14:40.982347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.891 [2024-12-05 12:14:40.982355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:06.891 [2024-12-05 12:14:40.994878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:06.891 [2024-12-05 12:14:40.994897] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.891 [2024-12-05 12:14:40.994906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:06.891 [2024-12-05 12:14:41.007667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:06.891 [2024-12-05 12:14:41.007686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:13029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.891 [2024-12-05 12:14:41.007694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:06.891 [2024-12-05 12:14:41.019710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:06.891 [2024-12-05 12:14:41.019731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.891 [2024-12-05 12:14:41.019739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:06.891 [2024-12-05 12:14:41.028048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:06.891 [2024-12-05 12:14:41.028068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:6044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.891 [2024-12-05 12:14:41.028076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:06.891 [2024-12-05 12:14:41.039431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1fff970) 00:31:06.891 [2024-12-05 12:14:41.039452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.891 [2024-12-05 12:14:41.039460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:06.891 [2024-12-05 12:14:41.050111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:06.891 [2024-12-05 12:14:41.050131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.891 [2024-12-05 12:14:41.050140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:06.891 [2024-12-05 12:14:41.059487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:06.891 [2024-12-05 12:14:41.059506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.891 [2024-12-05 12:14:41.059514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:06.891 [2024-12-05 12:14:41.068079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:06.891 [2024-12-05 12:14:41.068099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.891 [2024-12-05 12:14:41.068107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:06.891 [2024-12-05 12:14:41.077800] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:06.891 [2024-12-05 12:14:41.077820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.891 [2024-12-05 12:14:41.077828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:06.891 [2024-12-05 12:14:41.086746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:06.891 [2024-12-05 12:14:41.086767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.891 [2024-12-05 12:14:41.086776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.156 [2024-12-05 12:14:41.096011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.156 [2024-12-05 12:14:41.096031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.156 [2024-12-05 12:14:41.096039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.156 [2024-12-05 12:14:41.108161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.156 [2024-12-05 12:14:41.108181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:14383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.156 [2024-12-05 12:14:41.108190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:31:07.156 [2024-12-05 12:14:41.116771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.156 [2024-12-05 12:14:41.116790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:14715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.156 [2024-12-05 12:14:41.116799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.156 [2024-12-05 12:14:41.128518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.156 [2024-12-05 12:14:41.128539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.156 [2024-12-05 12:14:41.128547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.156 [2024-12-05 12:14:41.137102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.156 [2024-12-05 12:14:41.137122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.156 [2024-12-05 12:14:41.137130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.156 [2024-12-05 12:14:41.148746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.156 [2024-12-05 12:14:41.148766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:6322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.156 [2024-12-05 12:14:41.148781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.156 [2024-12-05 12:14:41.159982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.156 [2024-12-05 12:14:41.160001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:19262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.156 [2024-12-05 12:14:41.160009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.156 [2024-12-05 12:14:41.168562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.156 [2024-12-05 12:14:41.168582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:16373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.156 [2024-12-05 12:14:41.168589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.156 [2024-12-05 12:14:41.180978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.156 [2024-12-05 12:14:41.180998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.156 [2024-12-05 12:14:41.181005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.156 [2024-12-05 12:14:41.193618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.156 [2024-12-05 12:14:41.193637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.156 [2024-12-05 
12:14:41.193645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.156 [2024-12-05 12:14:41.206135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.156 [2024-12-05 12:14:41.206155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.156 [2024-12-05 12:14:41.206163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.156 [2024-12-05 12:14:41.217220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.156 [2024-12-05 12:14:41.217240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:8184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.156 [2024-12-05 12:14:41.217248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.156 [2024-12-05 12:14:41.226406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.156 [2024-12-05 12:14:41.226425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.156 [2024-12-05 12:14:41.226433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.156 [2024-12-05 12:14:41.237654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.156 [2024-12-05 12:14:41.237675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5770 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.156 [2024-12-05 12:14:41.237682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.156 [2024-12-05 12:14:41.250674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.156 [2024-12-05 12:14:41.250697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:7151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.156 [2024-12-05 12:14:41.250705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.156 [2024-12-05 12:14:41.262464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.156 [2024-12-05 12:14:41.262484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.156 [2024-12-05 12:14:41.262491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.157 [2024-12-05 12:14:41.274312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.157 [2024-12-05 12:14:41.274331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.157 [2024-12-05 12:14:41.274338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.157 [2024-12-05 12:14:41.283025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.157 [2024-12-05 12:14:41.283045] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.157 [2024-12-05 12:14:41.283053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.157 [2024-12-05 12:14:41.295498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.157 [2024-12-05 12:14:41.295523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.157 [2024-12-05 12:14:41.295532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.157 [2024-12-05 12:14:41.308242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.157 [2024-12-05 12:14:41.308264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.157 [2024-12-05 12:14:41.308272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.157 [2024-12-05 12:14:41.316336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.157 [2024-12-05 12:14:41.316355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.157 [2024-12-05 12:14:41.316363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.157 [2024-12-05 12:14:41.328718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1fff970) 00:31:07.157 [2024-12-05 12:14:41.328737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.157 [2024-12-05 12:14:41.328746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.157 [2024-12-05 12:14:41.339231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.157 [2024-12-05 12:14:41.339252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.157 [2024-12-05 12:14:41.339260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.157 [2024-12-05 12:14:41.347771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.157 [2024-12-05 12:14:41.347791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.157 [2024-12-05 12:14:41.347799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.416 [2024-12-05 12:14:41.358401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.416 [2024-12-05 12:14:41.358421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.416 [2024-12-05 12:14:41.358429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.416 [2024-12-05 12:14:41.372273] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.416 [2024-12-05 12:14:41.372293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.416 [2024-12-05 12:14:41.372301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.416 [2024-12-05 12:14:41.384791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.416 [2024-12-05 12:14:41.384812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.416 [2024-12-05 12:14:41.384819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.416 [2024-12-05 12:14:41.392810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.416 [2024-12-05 12:14:41.392830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.416 [2024-12-05 12:14:41.392838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.416 [2024-12-05 12:14:41.404945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.416 [2024-12-05 12:14:41.404964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.416 [2024-12-05 12:14:41.404972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:31:07.416 [2024-12-05 12:14:41.413356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.416 [2024-12-05 12:14:41.413380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:3905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.416 [2024-12-05 12:14:41.413388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.416 [2024-12-05 12:14:41.425382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.416 [2024-12-05 12:14:41.425401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:13904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.416 [2024-12-05 12:14:41.425409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.416 [2024-12-05 12:14:41.435817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.416 [2024-12-05 12:14:41.435837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.416 [2024-12-05 12:14:41.435848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.416 [2024-12-05 12:14:41.444306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.416 [2024-12-05 12:14:41.444325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.416 [2024-12-05 12:14:41.444333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.416 [2024-12-05 12:14:41.455168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.416 [2024-12-05 12:14:41.455188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.416 [2024-12-05 12:14:41.455196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.416 [2024-12-05 12:14:41.465528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.416 [2024-12-05 12:14:41.465548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.416 [2024-12-05 12:14:41.465556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.416 [2024-12-05 12:14:41.474181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.416 [2024-12-05 12:14:41.474200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.416 [2024-12-05 12:14:41.474208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.416 [2024-12-05 12:14:41.484262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.416 [2024-12-05 12:14:41.484282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.416 [2024-12-05 
12:14:41.484290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.416 [2024-12-05 12:14:41.492248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.416 [2024-12-05 12:14:41.492267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.416 [2024-12-05 12:14:41.492275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.416 [2024-12-05 12:14:41.503247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.416 [2024-12-05 12:14:41.503267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.416 [2024-12-05 12:14:41.503275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.416 [2024-12-05 12:14:41.515703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.416 [2024-12-05 12:14:41.515723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.416 [2024-12-05 12:14:41.515731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.416 [2024-12-05 12:14:41.526640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.416 [2024-12-05 12:14:41.526659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1410 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.416 [2024-12-05 12:14:41.526668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.416 [2024-12-05 12:14:41.535296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.416 [2024-12-05 12:14:41.535316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.416 [2024-12-05 12:14:41.535323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.416 [2024-12-05 12:14:41.547655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.417 [2024-12-05 12:14:41.547676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.417 [2024-12-05 12:14:41.547683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.417 [2024-12-05 12:14:41.559344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.417 [2024-12-05 12:14:41.559364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.417 [2024-12-05 12:14:41.559377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.417 [2024-12-05 12:14:41.568103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.417 [2024-12-05 12:14:41.568123] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:19018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.417 [2024-12-05 12:14:41.568130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.417 [2024-12-05 12:14:41.578336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.417 [2024-12-05 12:14:41.578356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.417 [2024-12-05 12:14:41.578364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.417 [2024-12-05 12:14:41.587122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.417 [2024-12-05 12:14:41.587142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:14784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.417 [2024-12-05 12:14:41.587150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.417 [2024-12-05 12:14:41.596640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.417 [2024-12-05 12:14:41.596659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:20704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.417 [2024-12-05 12:14:41.596667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.417 [2024-12-05 12:14:41.607955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1fff970) 00:31:07.417 [2024-12-05 12:14:41.607975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.417 [2024-12-05 12:14:41.607986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.677 [2024-12-05 12:14:41.618888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.677 [2024-12-05 12:14:41.618907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.677 [2024-12-05 12:14:41.618915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.677 [2024-12-05 12:14:41.628185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.677 [2024-12-05 12:14:41.628203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.677 [2024-12-05 12:14:41.628211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.677 [2024-12-05 12:14:41.639384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.677 [2024-12-05 12:14:41.639404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.677 [2024-12-05 12:14:41.639412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.677 [2024-12-05 12:14:41.651309] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.677 [2024-12-05 12:14:41.651329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.677 [2024-12-05 12:14:41.651337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.677 [2024-12-05 12:14:41.660684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.677 [2024-12-05 12:14:41.660703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.677 [2024-12-05 12:14:41.660711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.677 [2024-12-05 12:14:41.670665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.677 [2024-12-05 12:14:41.670685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:6149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.677 [2024-12-05 12:14:41.670694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.677 [2024-12-05 12:14:41.681739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.677 [2024-12-05 12:14:41.681759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.677 [2024-12-05 12:14:41.681766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:31:07.677 [2024-12-05 12:14:41.692094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.677 [2024-12-05 12:14:41.692114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.677 [2024-12-05 12:14:41.692122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.677 [2024-12-05 12:14:41.700388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.677 [2024-12-05 12:14:41.700427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.677 [2024-12-05 12:14:41.700435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.677 [2024-12-05 12:14:41.712001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.677 [2024-12-05 12:14:41.712021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.677 [2024-12-05 12:14:41.712029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.677 [2024-12-05 12:14:41.720454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.677 [2024-12-05 12:14:41.720474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.677 [2024-12-05 12:14:41.720482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.677 [2024-12-05 12:14:41.732187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.677 [2024-12-05 12:14:41.732209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.677 [2024-12-05 12:14:41.732216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.677 [2024-12-05 12:14:41.742262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.677 [2024-12-05 12:14:41.742282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.677 [2024-12-05 12:14:41.742289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.677 [2024-12-05 12:14:41.750608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.677 [2024-12-05 12:14:41.750628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.677 [2024-12-05 12:14:41.750636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.677 [2024-12-05 12:14:41.763585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.677 [2024-12-05 12:14:41.763606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.677 [2024-12-05 
12:14:41.763613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.677 [2024-12-05 12:14:41.773115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.677 [2024-12-05 12:14:41.773136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.677 [2024-12-05 12:14:41.773144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.677 [2024-12-05 12:14:41.782651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.677 [2024-12-05 12:14:41.782675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.677 [2024-12-05 12:14:41.782686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.677 [2024-12-05 12:14:41.795352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.677 [2024-12-05 12:14:41.795377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:9 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.677 [2024-12-05 12:14:41.795386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.677 [2024-12-05 12:14:41.805060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.677 [2024-12-05 12:14:41.805080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20876 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.677 [2024-12-05 12:14:41.805088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.677 [2024-12-05 12:14:41.815773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.677 [2024-12-05 12:14:41.815793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.677 [2024-12-05 12:14:41.815801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.677 [2024-12-05 12:14:41.826348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.677 [2024-12-05 12:14:41.826372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:5278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.677 [2024-12-05 12:14:41.826380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.677 [2024-12-05 12:14:41.834953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.677 [2024-12-05 12:14:41.834973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.677 [2024-12-05 12:14:41.834981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.678 [2024-12-05 12:14:41.847531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.678 [2024-12-05 12:14:41.847552] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:17381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.678 [2024-12-05 12:14:41.847559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.678 [2024-12-05 12:14:41.857490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.678 [2024-12-05 12:14:41.857510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.678 [2024-12-05 12:14:41.857518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.678 [2024-12-05 12:14:41.865879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.678 [2024-12-05 12:14:41.865899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.678 [2024-12-05 12:14:41.865907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.938 [2024-12-05 12:14:41.879230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.938 [2024-12-05 12:14:41.879250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.938 [2024-12-05 12:14:41.879263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.938 [2024-12-05 12:14:41.887472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1fff970) 00:31:07.938 [2024-12-05 12:14:41.887492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.938 [2024-12-05 12:14:41.887500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.938 [2024-12-05 12:14:41.898741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.938 [2024-12-05 12:14:41.898761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.938 [2024-12-05 12:14:41.898769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.938 24044.00 IOPS, 93.92 MiB/s [2024-12-05T11:14:42.134Z] [2024-12-05 12:14:41.908444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.938 [2024-12-05 12:14:41.908465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.938 [2024-12-05 12:14:41.908473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.938 [2024-12-05 12:14:41.918211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.938 [2024-12-05 12:14:41.918231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.938 [2024-12-05 12:14:41.918239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.938 
[2024-12-05 12:14:41.926339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.938 [2024-12-05 12:14:41.926359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.938 [2024-12-05 12:14:41.926373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.938 [2024-12-05 12:14:41.935852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.938 [2024-12-05 12:14:41.935873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:10460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.938 [2024-12-05 12:14:41.935881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.938 [2024-12-05 12:14:41.945740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.938 [2024-12-05 12:14:41.945760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.938 [2024-12-05 12:14:41.945768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.938 [2024-12-05 12:14:41.956324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.938 [2024-12-05 12:14:41.956345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.938 [2024-12-05 12:14:41.956353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.938 [2024-12-05 12:14:41.966037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.938 [2024-12-05 12:14:41.966059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.938 [2024-12-05 12:14:41.966067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.938 [2024-12-05 12:14:41.975518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.938 [2024-12-05 12:14:41.975538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:10824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.938 [2024-12-05 12:14:41.975546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.938 [2024-12-05 12:14:41.986714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.938 [2024-12-05 12:14:41.986735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.938 [2024-12-05 12:14:41.986743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.938 [2024-12-05 12:14:41.996901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.938 [2024-12-05 12:14:41.996921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.938 [2024-12-05 12:14:41.996929] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.938 [2024-12-05 12:14:42.005453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.938 [2024-12-05 12:14:42.005474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.938 [2024-12-05 12:14:42.005481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.938 [2024-12-05 12:14:42.018327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.938 [2024-12-05 12:14:42.018348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.938 [2024-12-05 12:14:42.018355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.938 [2024-12-05 12:14:42.029194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.938 [2024-12-05 12:14:42.029214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.938 [2024-12-05 12:14:42.029222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.938 [2024-12-05 12:14:42.041320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.938 [2024-12-05 12:14:42.041340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:25000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:31:07.938 [2024-12-05 12:14:42.041348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.938 [2024-12-05 12:14:42.053102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.938 [2024-12-05 12:14:42.053123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.938 [2024-12-05 12:14:42.053134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.938 [2024-12-05 12:14:42.061839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.938 [2024-12-05 12:14:42.061860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.938 [2024-12-05 12:14:42.061867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.938 [2024-12-05 12:14:42.074501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.938 [2024-12-05 12:14:42.074521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.938 [2024-12-05 12:14:42.074529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.939 [2024-12-05 12:14:42.084253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.939 [2024-12-05 12:14:42.084272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:45 nsid:1 lba:6953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.939 [2024-12-05 12:14:42.084280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.939 [2024-12-05 12:14:42.093902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.939 [2024-12-05 12:14:42.093922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.939 [2024-12-05 12:14:42.093930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.939 [2024-12-05 12:14:42.104062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.939 [2024-12-05 12:14:42.104084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.939 [2024-12-05 12:14:42.104092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.939 [2024-12-05 12:14:42.113477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.939 [2024-12-05 12:14:42.113497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.939 [2024-12-05 12:14:42.113505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.939 [2024-12-05 12:14:42.121969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.939 [2024-12-05 12:14:42.121989] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:19704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.939 [2024-12-05 12:14:42.121997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:07.939 [2024-12-05 12:14:42.133556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:07.939 [2024-12-05 12:14:42.133576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.939 [2024-12-05 12:14:42.133584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.198 [2024-12-05 12:14:42.142293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.198 [2024-12-05 12:14:42.142318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.198 [2024-12-05 12:14:42.142326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.198 [2024-12-05 12:14:42.154315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.198 [2024-12-05 12:14:42.154335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.198 [2024-12-05 12:14:42.154343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.198 [2024-12-05 12:14:42.163761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x1fff970) 00:31:08.198 [2024-12-05 12:14:42.163781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.198 [2024-12-05 12:14:42.163789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.198 [2024-12-05 12:14:42.172574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.198 [2024-12-05 12:14:42.172594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.198 [2024-12-05 12:14:42.172602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.198 [2024-12-05 12:14:42.183874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.198 [2024-12-05 12:14:42.183894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.198 [2024-12-05 12:14:42.183902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.198 [2024-12-05 12:14:42.196292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.198 [2024-12-05 12:14:42.196312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.198 [2024-12-05 12:14:42.196320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.198 [2024-12-05 12:14:42.207877] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.199 [2024-12-05 12:14:42.207896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.199 [2024-12-05 12:14:42.207905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.199 [2024-12-05 12:14:42.217149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.199 [2024-12-05 12:14:42.217168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.199 [2024-12-05 12:14:42.217176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.199 [2024-12-05 12:14:42.225428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.199 [2024-12-05 12:14:42.225449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.199 [2024-12-05 12:14:42.225456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.199 [2024-12-05 12:14:42.235637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.199 [2024-12-05 12:14:42.235656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.199 [2024-12-05 12:14:42.235664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:31:08.199 [2024-12-05 12:14:42.244175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.199 [2024-12-05 12:14:42.244195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.199 [2024-12-05 12:14:42.244203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.199 [2024-12-05 12:14:42.254893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.199 [2024-12-05 12:14:42.254912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.199 [2024-12-05 12:14:42.254920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.199 [2024-12-05 12:14:42.265141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.199 [2024-12-05 12:14:42.265160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.199 [2024-12-05 12:14:42.265167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.199 [2024-12-05 12:14:42.273179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.199 [2024-12-05 12:14:42.273200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.199 [2024-12-05 12:14:42.273209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.199 [2024-12-05 12:14:42.284637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.199 [2024-12-05 12:14:42.284657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.199 [2024-12-05 12:14:42.284664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.199 [2024-12-05 12:14:42.294215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.199 [2024-12-05 12:14:42.294235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.199 [2024-12-05 12:14:42.294242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.199 [2024-12-05 12:14:42.302499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.199 [2024-12-05 12:14:42.302518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.199 [2024-12-05 12:14:42.302526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.199 [2024-12-05 12:14:42.312820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.199 [2024-12-05 12:14:42.312840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.199 [2024-12-05 
12:14:42.312851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.199 [2024-12-05 12:14:42.322906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.199 [2024-12-05 12:14:42.322927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.199 [2024-12-05 12:14:42.322935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.199 [2024-12-05 12:14:42.331897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.199 [2024-12-05 12:14:42.331918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.199 [2024-12-05 12:14:42.331926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.199 [2024-12-05 12:14:42.341033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.199 [2024-12-05 12:14:42.341053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:5729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.199 [2024-12-05 12:14:42.341061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.199 [2024-12-05 12:14:42.350537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.199 [2024-12-05 12:14:42.350557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6276 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.199 [2024-12-05 12:14:42.350565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.199 [2024-12-05 12:14:42.358704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.199 [2024-12-05 12:14:42.358723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.199 [2024-12-05 12:14:42.358731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.199 [2024-12-05 12:14:42.369130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.199 [2024-12-05 12:14:42.369149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.199 [2024-12-05 12:14:42.369157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.199 [2024-12-05 12:14:42.377645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.199 [2024-12-05 12:14:42.377664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.199 [2024-12-05 12:14:42.377672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.199 [2024-12-05 12:14:42.387809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.199 [2024-12-05 12:14:42.387829] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.199 [2024-12-05 12:14:42.387837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.459 [2024-12-05 12:14:42.396790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.459 [2024-12-05 12:14:42.396809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.459 [2024-12-05 12:14:42.396817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.459 [2024-12-05 12:14:42.406650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.459 [2024-12-05 12:14:42.406669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.459 [2024-12-05 12:14:42.406677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.459 [2024-12-05 12:14:42.417929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.459 [2024-12-05 12:14:42.417948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.459 [2024-12-05 12:14:42.417956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.459 [2024-12-05 12:14:42.428432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1fff970) 00:31:08.459 [2024-12-05 12:14:42.428450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.459 [2024-12-05 12:14:42.428458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.459 [2024-12-05 12:14:42.437032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.459 [2024-12-05 12:14:42.437052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.459 [2024-12-05 12:14:42.437060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.459 [2024-12-05 12:14:42.448834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.459 [2024-12-05 12:14:42.448854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.459 [2024-12-05 12:14:42.448862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.459 [2024-12-05 12:14:42.459747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.459 [2024-12-05 12:14:42.459767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:13220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.459 [2024-12-05 12:14:42.459775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.459 [2024-12-05 12:14:42.468322] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.459 [2024-12-05 12:14:42.468342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.459 [2024-12-05 12:14:42.468350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.459 [2024-12-05 12:14:42.477836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.459 [2024-12-05 12:14:42.477855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.459 [2024-12-05 12:14:42.477869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.459 [2024-12-05 12:14:42.489145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.459 [2024-12-05 12:14:42.489165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:18977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.459 [2024-12-05 12:14:42.489173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.459 [2024-12-05 12:14:42.496822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.459 [2024-12-05 12:14:42.496841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:8605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.459 [2024-12-05 12:14:42.496849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:31:08.459 [2024-12-05 12:14:42.508278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.459 [2024-12-05 12:14:42.508297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:10341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.459 [2024-12-05 12:14:42.508305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.459 [2024-12-05 12:14:42.519333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.459 [2024-12-05 12:14:42.519353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.459 [2024-12-05 12:14:42.519360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.459 [2024-12-05 12:14:42.527412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.459 [2024-12-05 12:14:42.527431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.459 [2024-12-05 12:14:42.527438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.459 [2024-12-05 12:14:42.539043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.459 [2024-12-05 12:14:42.539062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.459 [2024-12-05 12:14:42.539070] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.459 [2024-12-05 12:14:42.549797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.459 [2024-12-05 12:14:42.549816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.459 [2024-12-05 12:14:42.549823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.459 [2024-12-05 12:14:42.560714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.459 [2024-12-05 12:14:42.560734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.459 [2024-12-05 12:14:42.560741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.459 [2024-12-05 12:14:42.569586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.459 [2024-12-05 12:14:42.569610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.459 [2024-12-05 12:14:42.569617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.459 [2024-12-05 12:14:42.581636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.459 [2024-12-05 12:14:42.581656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.459 [2024-12-05 
12:14:42.581664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.459 [2024-12-05 12:14:42.589709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.459 [2024-12-05 12:14:42.589728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:6655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.459 [2024-12-05 12:14:42.589736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.459 [2024-12-05 12:14:42.601901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.459 [2024-12-05 12:14:42.601921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.459 [2024-12-05 12:14:42.601928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.459 [2024-12-05 12:14:42.610147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.460 [2024-12-05 12:14:42.610166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.460 [2024-12-05 12:14:42.610174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.460 [2024-12-05 12:14:42.622556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.460 [2024-12-05 12:14:42.622576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19784 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.460 [2024-12-05 12:14:42.622583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.460 [2024-12-05 12:14:42.634178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.460 [2024-12-05 12:14:42.634197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.460 [2024-12-05 12:14:42.634205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.460 [2024-12-05 12:14:42.642662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.460 [2024-12-05 12:14:42.642681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.460 [2024-12-05 12:14:42.642689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.460 [2024-12-05 12:14:42.654501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.460 [2024-12-05 12:14:42.654521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.460 [2024-12-05 12:14:42.654529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.719 [2024-12-05 12:14:42.663437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.719 [2024-12-05 12:14:42.663456] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.719 [2024-12-05 12:14:42.663465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.719 [2024-12-05 12:14:42.671570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.719 [2024-12-05 12:14:42.671589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.720 [2024-12-05 12:14:42.671597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.720 [2024-12-05 12:14:42.682144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.720 [2024-12-05 12:14:42.682164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:21562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.720 [2024-12-05 12:14:42.682172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.720 [2024-12-05 12:14:42.690729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.720 [2024-12-05 12:14:42.690749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.720 [2024-12-05 12:14:42.690756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.720 [2024-12-05 12:14:42.699691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1fff970) 00:31:08.720 [2024-12-05 12:14:42.699710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.720 [2024-12-05 12:14:42.699718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.720 [2024-12-05 12:14:42.710656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.720 [2024-12-05 12:14:42.710675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.720 [2024-12-05 12:14:42.710683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.720 [2024-12-05 12:14:42.719954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.720 [2024-12-05 12:14:42.719975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:15599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.720 [2024-12-05 12:14:42.719983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.720 [2024-12-05 12:14:42.732184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.720 [2024-12-05 12:14:42.732206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.720 [2024-12-05 12:14:42.732214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.720 [2024-12-05 12:14:42.744117] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.720 [2024-12-05 12:14:42.744137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.720 [2024-12-05 12:14:42.744147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.720 [2024-12-05 12:14:42.751952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.720 [2024-12-05 12:14:42.751971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.720 [2024-12-05 12:14:42.751978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.720 [2024-12-05 12:14:42.763857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.720 [2024-12-05 12:14:42.763879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.720 [2024-12-05 12:14:42.763887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.720 [2024-12-05 12:14:42.772416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.720 [2024-12-05 12:14:42.772436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.720 [2024-12-05 12:14:42.772444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:31:08.720 [2024-12-05 12:14:42.784148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.720 [2024-12-05 12:14:42.784170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:8270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.720 [2024-12-05 12:14:42.784178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.720 [2024-12-05 12:14:42.796457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.720 [2024-12-05 12:14:42.796478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:23688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.720 [2024-12-05 12:14:42.796485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.720 [2024-12-05 12:14:42.808377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.720 [2024-12-05 12:14:42.808413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:25329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.720 [2024-12-05 12:14:42.808422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.720 [2024-12-05 12:14:42.820606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.720 [2024-12-05 12:14:42.820625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.720 [2024-12-05 12:14:42.820633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.720 [2024-12-05 12:14:42.832950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.720 [2024-12-05 12:14:42.832970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.720 [2024-12-05 12:14:42.832977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.720 [2024-12-05 12:14:42.844826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.720 [2024-12-05 12:14:42.844845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.720 [2024-12-05 12:14:42.844853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.720 [2024-12-05 12:14:42.856021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.720 [2024-12-05 12:14:42.856040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:10887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.720 [2024-12-05 12:14:42.856048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.720 [2024-12-05 12:14:42.865173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.720 [2024-12-05 12:14:42.865192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:7488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.720 [2024-12-05 12:14:42.865201] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.720 [2024-12-05 12:14:42.876376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.720 [2024-12-05 12:14:42.876396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.720 [2024-12-05 12:14:42.876403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.720 [2024-12-05 12:14:42.885387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.720 [2024-12-05 12:14:42.885407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.720 [2024-12-05 12:14:42.885414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.720 [2024-12-05 12:14:42.894869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.720 [2024-12-05 12:14:42.894889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.720 [2024-12-05 12:14:42.894897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.720 24585.50 IOPS, 96.04 MiB/s [2024-12-05T11:14:42.916Z] [2024-12-05 12:14:42.904998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fff970) 00:31:08.720 [2024-12-05 12:14:42.905017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:16 nsid:1 lba:10187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.720 [2024-12-05 12:14:42.905025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:08.979 00:31:08.979 Latency(us) 00:31:08.979 [2024-12-05T11:14:43.175Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:08.979 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:08.979 nvme0n1 : 2.04 24120.45 94.22 0.00 0.00 5199.26 2621.44 46686.60 00:31:08.979 [2024-12-05T11:14:43.175Z] =================================================================================================================== 00:31:08.979 [2024-12-05T11:14:43.175Z] Total : 24120.45 94.22 0.00 0.00 5199.26 2621.44 46686.60 00:31:08.979 { 00:31:08.979 "results": [ 00:31:08.979 { 00:31:08.979 "job": "nvme0n1", 00:31:08.979 "core_mask": "0x2", 00:31:08.979 "workload": "randread", 00:31:08.979 "status": "finished", 00:31:08.979 "queue_depth": 128, 00:31:08.979 "io_size": 4096, 00:31:08.979 "runtime": 2.043867, 00:31:08.979 "iops": 24120.45402171472, 00:31:08.979 "mibps": 94.22052352232312, 00:31:08.979 "io_failed": 0, 00:31:08.979 "io_timeout": 0, 00:31:08.979 "avg_latency_us": 5199.262194790003, 00:31:08.979 "min_latency_us": 2621.44, 00:31:08.979 "max_latency_us": 46686.59809523809 00:31:08.979 } 00:31:08.979 ], 00:31:08.979 "core_count": 1 00:31:08.979 } 00:31:08.979 12:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:08.979 12:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:08.979 12:14:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:08.979 12:14:42 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:08.979 | .driver_specific 00:31:08.979 | .nvme_error 00:31:08.979 | .status_code 00:31:08.979 | .command_transient_transport_error' 00:31:08.979 12:14:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 193 > 0 )) 00:31:08.979 12:14:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 230625 00:31:08.979 12:14:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 230625 ']' 00:31:08.979 12:14:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 230625 00:31:08.979 12:14:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:31:08.979 12:14:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:08.979 12:14:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 230625 00:31:09.238 12:14:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:09.238 12:14:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:09.238 12:14:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 230625' 00:31:09.238 killing process with pid 230625 00:31:09.238 12:14:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 230625 00:31:09.238 Received shutdown signal, test time was about 2.000000 seconds 00:31:09.238 00:31:09.238 Latency(us) 00:31:09.238 [2024-12-05T11:14:43.434Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:09.238 [2024-12-05T11:14:43.434Z] 
=================================================================================================================== 00:31:09.238 [2024-12-05T11:14:43.434Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:09.238 12:14:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 230625 00:31:09.238 12:14:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:31:09.238 12:14:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:31:09.238 12:14:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:31:09.238 12:14:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:31:09.238 12:14:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:31:09.238 12:14:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=231211 00:31:09.238 12:14:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 231211 /var/tmp/bperf.sock 00:31:09.238 12:14:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:31:09.238 12:14:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 231211 ']' 00:31:09.238 12:14:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:09.238 12:14:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:09.238 12:14:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:31:09.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:09.238 12:14:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:09.238 12:14:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:09.238 [2024-12-05 12:14:43.425112] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:31:09.238 [2024-12-05 12:14:43.425160] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid231211 ] 00:31:09.238 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:09.238 Zero copy mechanism will not be used. 00:31:09.497 [2024-12-05 12:14:43.498605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:09.497 [2024-12-05 12:14:43.540737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:09.497 12:14:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:09.497 12:14:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:31:09.497 12:14:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:09.497 12:14:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:09.756 12:14:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:09.756 12:14:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.756 12:14:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:09.756 12:14:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.756 12:14:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:09.756 12:14:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:10.324 nvme0n1 00:31:10.324 12:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:31:10.324 12:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.324 12:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:10.324 12:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.324 12:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:10.324 12:14:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:10.324 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:10.324 Zero copy mechanism will not be used. 00:31:10.324 Running I/O for 2 seconds... 
00:31:10.324 [2024-12-05 12:14:44.361008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.324 [2024-12-05 12:14:44.361040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.324 [2024-12-05 12:14:44.361055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:10.324 [2024-12-05 12:14:44.366281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.324 [2024-12-05 12:14:44.366310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.324 [2024-12-05 12:14:44.366319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:10.324 [2024-12-05 12:14:44.371575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.324 [2024-12-05 12:14:44.371598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.324 [2024-12-05 12:14:44.371607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:10.324 [2024-12-05 12:14:44.376864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.324 [2024-12-05 12:14:44.376885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.324 [2024-12-05 12:14:44.376893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:10.324 [2024-12-05 12:14:44.382104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.324 [2024-12-05 12:14:44.382125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.324 [2024-12-05 12:14:44.382133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:10.324 [2024-12-05 12:14:44.387337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.324 [2024-12-05 12:14:44.387359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.324 [2024-12-05 12:14:44.387372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:10.324 [2024-12-05 12:14:44.392526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.324 [2024-12-05 12:14:44.392549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.324 [2024-12-05 12:14:44.392557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:10.324 [2024-12-05 12:14:44.397905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.324 [2024-12-05 12:14:44.397926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.324 [2024-12-05 12:14:44.397934] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:10.324 [2024-12-05 12:14:44.403078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.324 [2024-12-05 12:14:44.403099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.324 [2024-12-05 12:14:44.403107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:10.324 [2024-12-05 12:14:44.408325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.324 [2024-12-05 12:14:44.408355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.324 [2024-12-05 12:14:44.408363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:10.324 [2024-12-05 12:14:44.413492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.324 [2024-12-05 12:14:44.413512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.324 [2024-12-05 12:14:44.413520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:10.324 [2024-12-05 12:14:44.418611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.324 [2024-12-05 12:14:44.418632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:10.324 [2024-12-05 12:14:44.418640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:10.324 [2024-12-05 12:14:44.423852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.324 [2024-12-05 12:14:44.423873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.324 [2024-12-05 12:14:44.423881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:10.324 [2024-12-05 12:14:44.429025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.324 [2024-12-05 12:14:44.429046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.324 [2024-12-05 12:14:44.429054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:10.324 [2024-12-05 12:14:44.434157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.324 [2024-12-05 12:14:44.434179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.324 [2024-12-05 12:14:44.434187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:10.324 [2024-12-05 12:14:44.439394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.324 [2024-12-05 12:14:44.439416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.324 [2024-12-05 12:14:44.439424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:10.324 [2024-12-05 12:14:44.444664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.324 [2024-12-05 12:14:44.444685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.325 [2024-12-05 12:14:44.444693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:10.325 [2024-12-05 12:14:44.449894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.325 [2024-12-05 12:14:44.449915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.325 [2024-12-05 12:14:44.449923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:10.325 [2024-12-05 12:14:44.455098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.325 [2024-12-05 12:14:44.455119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.325 [2024-12-05 12:14:44.455127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:10.325 [2024-12-05 12:14:44.460289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.325 [2024-12-05 12:14:44.460310] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.325 [2024-12-05 12:14:44.460318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:10.325 [2024-12-05 12:14:44.465537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.325 [2024-12-05 12:14:44.465557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.325 [2024-12-05 12:14:44.465565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:10.325 [2024-12-05 12:14:44.470760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.325 [2024-12-05 12:14:44.470781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.325 [2024-12-05 12:14:44.470788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:10.325 [2024-12-05 12:14:44.476041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.325 [2024-12-05 12:14:44.476062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.325 [2024-12-05 12:14:44.476070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:10.325 [2024-12-05 12:14:44.481229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 
00:31:10.325 [2024-12-05 12:14:44.481248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.325 [2024-12-05 12:14:44.481256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:10.325 [2024-12-05 12:14:44.486403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.325 [2024-12-05 12:14:44.486424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.325 [2024-12-05 12:14:44.486432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:10.325 [2024-12-05 12:14:44.491631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.325 [2024-12-05 12:14:44.491651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.325 [2024-12-05 12:14:44.491659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:10.325 [2024-12-05 12:14:44.496886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.325 [2024-12-05 12:14:44.496907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.325 [2024-12-05 12:14:44.496918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:10.325 [2024-12-05 12:14:44.502069] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.325 [2024-12-05 12:14:44.502089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.325 [2024-12-05 12:14:44.502096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:10.325 [2024-12-05 12:14:44.507319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.325 [2024-12-05 12:14:44.507340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.325 [2024-12-05 12:14:44.507349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:10.325 [2024-12-05 12:14:44.512489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.325 [2024-12-05 12:14:44.512510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.325 [2024-12-05 12:14:44.512518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:10.325 [2024-12-05 12:14:44.517813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.325 [2024-12-05 12:14:44.517834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.325 [2024-12-05 12:14:44.517842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:31:10.585 [2024-12-05 12:14:44.523056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.585 [2024-12-05 12:14:44.523077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.585 [2024-12-05 12:14:44.523085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:10.585 [2024-12-05 12:14:44.528321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.585 [2024-12-05 12:14:44.528343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.585 [2024-12-05 12:14:44.528351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:10.585 [2024-12-05 12:14:44.533481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.585 [2024-12-05 12:14:44.533500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.585 [2024-12-05 12:14:44.533508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:10.585 [2024-12-05 12:14:44.538656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.585 [2024-12-05 12:14:44.538676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.585 [2024-12-05 12:14:44.538684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:10.585 [2024-12-05 12:14:44.543828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.585 [2024-12-05 12:14:44.543853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.585 [2024-12-05 12:14:44.543860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:10.585 [2024-12-05 12:14:44.549041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.585 [2024-12-05 12:14:44.549061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.585 [2024-12-05 12:14:44.549069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:10.585 [2024-12-05 12:14:44.554225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.585 [2024-12-05 12:14:44.554246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.585 [2024-12-05 12:14:44.554253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:10.585 [2024-12-05 12:14:44.559456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.585 [2024-12-05 12:14:44.559476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.585 [2024-12-05 12:14:44.559484] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:10.585 [2024-12-05 12:14:44.564777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.585 [2024-12-05 12:14:44.564798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.585 [2024-12-05 12:14:44.564806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:10.585 [2024-12-05 12:14:44.569901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.585 [2024-12-05 12:14:44.569922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.585 [2024-12-05 12:14:44.569930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:10.585 [2024-12-05 12:14:44.575022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.585 [2024-12-05 12:14:44.575043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.585 [2024-12-05 12:14:44.575051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:10.585 [2024-12-05 12:14:44.580296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.585 [2024-12-05 12:14:44.580317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:10.585 [2024-12-05 12:14:44.580324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:10.585 [2024-12-05 12:14:44.585487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.586 [2024-12-05 12:14:44.585508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.586 [2024-12-05 12:14:44.585515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:10.586 [2024-12-05 12:14:44.590687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.586 [2024-12-05 12:14:44.590707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.586 [2024-12-05 12:14:44.590715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:10.586 [2024-12-05 12:14:44.595882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.586 [2024-12-05 12:14:44.595903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.586 [2024-12-05 12:14:44.595911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:10.586 [2024-12-05 12:14:44.601008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.586 [2024-12-05 12:14:44.601030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.586 [2024-12-05 12:14:44.601038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:10.586 [2024-12-05 12:14:44.606179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.586 [2024-12-05 12:14:44.606200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.586 [2024-12-05 12:14:44.606208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:10.586 [2024-12-05 12:14:44.611357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.586 [2024-12-05 12:14:44.611386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.586 [2024-12-05 12:14:44.611394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:10.586 [2024-12-05 12:14:44.616627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.586 [2024-12-05 12:14:44.616649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.586 [2024-12-05 12:14:44.616657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:10.586 [2024-12-05 12:14:44.621812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.586 [2024-12-05 12:14:44.621835] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.586 [2024-12-05 12:14:44.621843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:10.586 [2024-12-05 12:14:44.627065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.586 [2024-12-05 12:14:44.627087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.586 [2024-12-05 12:14:44.627095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:10.586 [2024-12-05 12:14:44.632694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.586 [2024-12-05 12:14:44.632717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.586 [2024-12-05 12:14:44.632729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:10.586 [2024-12-05 12:14:44.638339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.586 [2024-12-05 12:14:44.638363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.586 [2024-12-05 12:14:44.638378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:10.586 [2024-12-05 12:14:44.643549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 
00:31:10.586 [2024-12-05 12:14:44.643570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.586 [2024-12-05 12:14:44.643578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:10.586 [2024-12-05 12:14:44.648862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.586 [2024-12-05 12:14:44.648884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.586 [2024-12-05 12:14:44.648892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:10.586 [2024-12-05 12:14:44.654147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.586 [2024-12-05 12:14:44.654170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.586 [2024-12-05 12:14:44.654177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:10.586 [2024-12-05 12:14:44.659421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.586 [2024-12-05 12:14:44.659442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.586 [2024-12-05 12:14:44.659450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:10.586 [2024-12-05 12:14:44.664658] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.586 [2024-12-05 12:14:44.664679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.586 [2024-12-05 12:14:44.664687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:10.586 [2024-12-05 12:14:44.669980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.586 [2024-12-05 12:14:44.670002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.586 [2024-12-05 12:14:44.670009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:10.586 [2024-12-05 12:14:44.675292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.586 [2024-12-05 12:14:44.675313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.586 [2024-12-05 12:14:44.675321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:10.586 [2024-12-05 12:14:44.680549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.586 [2024-12-05 12:14:44.680574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.586 [2024-12-05 12:14:44.680582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:31:10.586 [2024-12-05 12:14:44.685781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.586 [2024-12-05 12:14:44.685801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.586 [2024-12-05 12:14:44.685809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:10.586 [2024-12-05 12:14:44.691024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.586 [2024-12-05 12:14:44.691044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.586 [2024-12-05 12:14:44.691051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:10.586 [2024-12-05 12:14:44.696222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.586 [2024-12-05 12:14:44.696243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.586 [2024-12-05 12:14:44.696251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:10.586 [2024-12-05 12:14:44.701420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.586 [2024-12-05 12:14:44.701441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.586 [2024-12-05 12:14:44.701449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:10.586 [2024-12-05 12:14:44.706614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.586 [2024-12-05 12:14:44.706635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.586 [2024-12-05 12:14:44.706643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:10.586 [2024-12-05 12:14:44.711861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.586 [2024-12-05 12:14:44.711881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.586 [2024-12-05 12:14:44.711889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:10.586 [2024-12-05 12:14:44.717146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.586 [2024-12-05 12:14:44.717167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.586 [2024-12-05 12:14:44.717175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:10.586 [2024-12-05 12:14:44.722403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.586 [2024-12-05 12:14:44.722424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.586 [2024-12-05 12:14:44.722434] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:10.586 [2024-12-05 12:14:44.727760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.586 [2024-12-05 12:14:44.727784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.586 [2024-12-05 12:14:44.727794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:10.586 [2024-12-05 12:14:44.733068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.586 [2024-12-05 12:14:44.733089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.586 [2024-12-05 12:14:44.733097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:10.586 [2024-12-05 12:14:44.737599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.586 [2024-12-05 12:14:44.737620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.586 [2024-12-05 12:14:44.737628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:10.586 [2024-12-05 12:14:44.740579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.586 [2024-12-05 12:14:44.740599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:10.586 [2024-12-05 12:14:44.740608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:10.586 [2024-12-05 12:14:44.745781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.586 [2024-12-05 12:14:44.745801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.586 [2024-12-05 12:14:44.745809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:10.586 [2024-12-05 12:14:44.750992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.586 [2024-12-05 12:14:44.751011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.586 [2024-12-05 12:14:44.751019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:10.586 [2024-12-05 12:14:44.756222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.586 [2024-12-05 12:14:44.756242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.587 [2024-12-05 12:14:44.756250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:10.587 [2024-12-05 12:14:44.761400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.587 [2024-12-05 12:14:44.761421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.587 [2024-12-05 12:14:44.761429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:10.587 [2024-12-05 12:14:44.766554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.587 [2024-12-05 12:14:44.766574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.587 [2024-12-05 12:14:44.766585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:10.587 [2024-12-05 12:14:44.771747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.587 [2024-12-05 12:14:44.771767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.587 [2024-12-05 12:14:44.771775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:10.587 [2024-12-05 12:14:44.776974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.587 [2024-12-05 12:14:44.776994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.587 [2024-12-05 12:14:44.777002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:10.847 [2024-12-05 12:14:44.782966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.847 [2024-12-05 12:14:44.782986] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.847 [2024-12-05 12:14:44.782994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:10.847 [2024-12-05 12:14:44.787618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.847 [2024-12-05 12:14:44.787638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.847 [2024-12-05 12:14:44.787646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:10.847 [2024-12-05 12:14:44.792820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.847 [2024-12-05 12:14:44.792839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.847 [2024-12-05 12:14:44.792846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:10.847 [2024-12-05 12:14:44.797997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.847 [2024-12-05 12:14:44.798016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.847 [2024-12-05 12:14:44.798024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:10.847 [2024-12-05 12:14:44.803185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 
00:31:10.847 [2024-12-05 12:14:44.803205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.847 [2024-12-05 12:14:44.803213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:10.847 [2024-12-05 12:14:44.808486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.847 [2024-12-05 12:14:44.808506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.847 [2024-12-05 12:14:44.808516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:10.847 [2024-12-05 12:14:44.813705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.847 [2024-12-05 12:14:44.813725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.847 [2024-12-05 12:14:44.813733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:10.847 [2024-12-05 12:14:44.818919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.847 [2024-12-05 12:14:44.818940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.847 [2024-12-05 12:14:44.818947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:10.847 [2024-12-05 12:14:44.823178] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.847 [2024-12-05 12:14:44.823199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.847 [2024-12-05 12:14:44.823207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:10.847 [2024-12-05 12:14:44.827862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.847 [2024-12-05 12:14:44.827884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.847 [2024-12-05 12:14:44.827893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:10.847 [2024-12-05 12:14:44.832656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.847 [2024-12-05 12:14:44.832678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.847 [2024-12-05 12:14:44.832685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:10.847 [2024-12-05 12:14:44.837508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.847 [2024-12-05 12:14:44.837530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.847 [2024-12-05 12:14:44.837538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:31:10.847 [2024-12-05 12:14:44.842326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.847 [2024-12-05 12:14:44.842348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.847 [2024-12-05 12:14:44.842355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:10.847 [2024-12-05 12:14:44.847206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.847 [2024-12-05 12:14:44.847227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.847 [2024-12-05 12:14:44.847235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:10.848 [2024-12-05 12:14:44.852268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.848 [2024-12-05 12:14:44.852289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.848 [2024-12-05 12:14:44.852301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:10.848 [2024-12-05 12:14:44.856951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.848 [2024-12-05 12:14:44.856973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.848 [2024-12-05 12:14:44.856981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:10.848 [2024-12-05 12:14:44.861949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.848 [2024-12-05 12:14:44.861971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.848 [2024-12-05 12:14:44.861979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:10.848 [2024-12-05 12:14:44.865432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.848 [2024-12-05 12:14:44.865452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.848 [2024-12-05 12:14:44.865461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:10.848 [2024-12-05 12:14:44.869737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.848 [2024-12-05 12:14:44.869758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.848 [2024-12-05 12:14:44.869766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:10.848 [2024-12-05 12:14:44.875010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:10.848 [2024-12-05 12:14:44.875032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.848 [2024-12-05 12:14:44.875040] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:10.848 [2024-12-05 12:14:44.880402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:10.848 [2024-12-05 12:14:44.880423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:10.848 [2024-12-05 12:14:44.880432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:10.848 [2024-12-05 12:14:44.885678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:10.848 [2024-12-05 12:14:44.885700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:10.848 [2024-12-05 12:14:44.885708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... the same three-record pattern — nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310), followed by a READ command notice and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion — repeats on qid:1 with cids 0, 2, 6, 8, 9, 12 and 14 at varying LBAs (len:32), from [2024-12-05 12:14:44.890915] through [2024-12-05 12:14:45.309293] ...]
00:31:11.367 [2024-12-05 12:14:45.314485]
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.367 [2024-12-05 12:14:45.314505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.367 [2024-12-05 12:14:45.314512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:11.367 [2024-12-05 12:14:45.319874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.367 [2024-12-05 12:14:45.319895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.367 [2024-12-05 12:14:45.319903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:11.367 [2024-12-05 12:14:45.325035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.367 [2024-12-05 12:14:45.325056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.367 [2024-12-05 12:14:45.325064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:11.367 [2024-12-05 12:14:45.330272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.367 [2024-12-05 12:14:45.330298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.367 [2024-12-05 12:14:45.330306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 
m:0 dnr:0 00:31:11.367 [2024-12-05 12:14:45.335476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.367 [2024-12-05 12:14:45.335495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.367 [2024-12-05 12:14:45.335503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:11.367 [2024-12-05 12:14:45.340585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.367 [2024-12-05 12:14:45.340606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.367 [2024-12-05 12:14:45.340613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:11.367 [2024-12-05 12:14:45.345785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.367 [2024-12-05 12:14:45.345806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.367 [2024-12-05 12:14:45.345813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:11.367 [2024-12-05 12:14:45.350999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.367 [2024-12-05 12:14:45.351019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.367 [2024-12-05 12:14:45.351027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:11.367 [2024-12-05 12:14:45.356242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.367 [2024-12-05 12:14:45.356263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.367 [2024-12-05 12:14:45.356275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:11.367 5897.00 IOPS, 737.12 MiB/s [2024-12-05T11:14:45.563Z] [2024-12-05 12:14:45.362567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.367 [2024-12-05 12:14:45.362588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.367 [2024-12-05 12:14:45.362597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:11.367 [2024-12-05 12:14:45.367785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.367 [2024-12-05 12:14:45.367806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.367 [2024-12-05 12:14:45.367814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:11.367 [2024-12-05 12:14:45.373030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.367 [2024-12-05 12:14:45.373052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:11.367 [2024-12-05 12:14:45.373060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:11.367 [2024-12-05 12:14:45.378212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.367 [2024-12-05 12:14:45.378232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.367 [2024-12-05 12:14:45.378240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:11.367 [2024-12-05 12:14:45.383445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.367 [2024-12-05 12:14:45.383467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.367 [2024-12-05 12:14:45.383474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:11.367 [2024-12-05 12:14:45.388655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.367 [2024-12-05 12:14:45.388675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.367 [2024-12-05 12:14:45.388683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:11.367 [2024-12-05 12:14:45.393879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.367 [2024-12-05 12:14:45.393900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.367 [2024-12-05 12:14:45.393908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:11.367 [2024-12-05 12:14:45.399112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.367 [2024-12-05 12:14:45.399133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.367 [2024-12-05 12:14:45.399141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:11.367 [2024-12-05 12:14:45.404306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.367 [2024-12-05 12:14:45.404330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.367 [2024-12-05 12:14:45.404338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:11.367 [2024-12-05 12:14:45.409563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.367 [2024-12-05 12:14:45.409586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.367 [2024-12-05 12:14:45.409594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:11.367 [2024-12-05 12:14:45.414812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.367 [2024-12-05 12:14:45.414833] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.367 [2024-12-05 12:14:45.414841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:11.367 [2024-12-05 12:14:45.420023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.367 [2024-12-05 12:14:45.420044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.368 [2024-12-05 12:14:45.420051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:11.368 [2024-12-05 12:14:45.425278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.368 [2024-12-05 12:14:45.425298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.368 [2024-12-05 12:14:45.425306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:11.368 [2024-12-05 12:14:45.430522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.368 [2024-12-05 12:14:45.430544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.368 [2024-12-05 12:14:45.430552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:11.368 [2024-12-05 12:14:45.435735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 
00:31:11.368 [2024-12-05 12:14:45.435756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.368 [2024-12-05 12:14:45.435764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:11.368 [2024-12-05 12:14:45.441723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.368 [2024-12-05 12:14:45.441745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.368 [2024-12-05 12:14:45.441752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:11.368 [2024-12-05 12:14:45.447066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.368 [2024-12-05 12:14:45.447087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.368 [2024-12-05 12:14:45.447095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:11.368 [2024-12-05 12:14:45.452300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.368 [2024-12-05 12:14:45.452320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.368 [2024-12-05 12:14:45.452328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:11.368 [2024-12-05 12:14:45.457474] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.368 [2024-12-05 12:14:45.457493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.368 [2024-12-05 12:14:45.457501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:11.368 [2024-12-05 12:14:45.462701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.368 [2024-12-05 12:14:45.462721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.368 [2024-12-05 12:14:45.462728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:11.368 [2024-12-05 12:14:45.467990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.368 [2024-12-05 12:14:45.468011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.368 [2024-12-05 12:14:45.468018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:11.368 [2024-12-05 12:14:45.473262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.368 [2024-12-05 12:14:45.473282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.368 [2024-12-05 12:14:45.473290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:31:11.368 [2024-12-05 12:14:45.479373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.368 [2024-12-05 12:14:45.479394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.368 [2024-12-05 12:14:45.479402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:11.368 [2024-12-05 12:14:45.486863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.368 [2024-12-05 12:14:45.486884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.368 [2024-12-05 12:14:45.486892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:11.368 [2024-12-05 12:14:45.493800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.368 [2024-12-05 12:14:45.493822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.368 [2024-12-05 12:14:45.493830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:11.368 [2024-12-05 12:14:45.501208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.368 [2024-12-05 12:14:45.501230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.368 [2024-12-05 12:14:45.501241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:11.368 [2024-12-05 12:14:45.508737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.368 [2024-12-05 12:14:45.508758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.368 [2024-12-05 12:14:45.508766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:11.368 [2024-12-05 12:14:45.515039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.368 [2024-12-05 12:14:45.515060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.368 [2024-12-05 12:14:45.515067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:11.368 [2024-12-05 12:14:45.521153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.368 [2024-12-05 12:14:45.521174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.368 [2024-12-05 12:14:45.521183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:11.368 [2024-12-05 12:14:45.528449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.368 [2024-12-05 12:14:45.528470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.368 [2024-12-05 12:14:45.528480] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:11.368 [2024-12-05 12:14:45.535213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.368 [2024-12-05 12:14:45.535235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.368 [2024-12-05 12:14:45.535243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:11.368 [2024-12-05 12:14:45.542958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.368 [2024-12-05 12:14:45.542980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.368 [2024-12-05 12:14:45.542988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:11.368 [2024-12-05 12:14:45.549947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.368 [2024-12-05 12:14:45.549968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.368 [2024-12-05 12:14:45.549976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:11.368 [2024-12-05 12:14:45.556471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.368 [2024-12-05 12:14:45.556493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:11.368 [2024-12-05 12:14:45.556501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:11.368 [2024-12-05 12:14:45.562055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.368 [2024-12-05 12:14:45.562219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.368 [2024-12-05 12:14:45.562228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:11.626 [2024-12-05 12:14:45.567672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.626 [2024-12-05 12:14:45.567693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.626 [2024-12-05 12:14:45.567701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:11.626 [2024-12-05 12:14:45.573127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.626 [2024-12-05 12:14:45.573148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.626 [2024-12-05 12:14:45.573156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:11.626 [2024-12-05 12:14:45.578295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.626 [2024-12-05 12:14:45.578316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.626 [2024-12-05 12:14:45.578324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:11.626 [2024-12-05 12:14:45.583679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.626 [2024-12-05 12:14:45.583699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.626 [2024-12-05 12:14:45.583707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:11.626 [2024-12-05 12:14:45.589072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.626 [2024-12-05 12:14:45.589093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.626 [2024-12-05 12:14:45.589100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:11.626 [2024-12-05 12:14:45.594638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.626 [2024-12-05 12:14:45.594658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.626 [2024-12-05 12:14:45.594666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:11.626 [2024-12-05 12:14:45.600149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.626 [2024-12-05 12:14:45.600169] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.626 [2024-12-05 12:14:45.600177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:11.626 [2024-12-05 12:14:45.605575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.626 [2024-12-05 12:14:45.605595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.626 [2024-12-05 12:14:45.605603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:11.626 [2024-12-05 12:14:45.610890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.626 [2024-12-05 12:14:45.610911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.626 [2024-12-05 12:14:45.610918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:11.626 [2024-12-05 12:14:45.616336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.626 [2024-12-05 12:14:45.616357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.626 [2024-12-05 12:14:45.616365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:11.626 [2024-12-05 12:14:45.621121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.626 [2024-12-05 12:14:45.621142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.626 [2024-12-05 12:14:45.621150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:11.626 [2024-12-05 12:14:45.626510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.626 [2024-12-05 12:14:45.626531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.626 [2024-12-05 12:14:45.626539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:11.627 [2024-12-05 12:14:45.631792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.627 [2024-12-05 12:14:45.631823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.627 [2024-12-05 12:14:45.631831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:11.627 [2024-12-05 12:14:45.637086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.627 [2024-12-05 12:14:45.637107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.627 [2024-12-05 12:14:45.637115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:11.627 [2024-12-05 12:14:45.642277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.627 [2024-12-05 12:14:45.642298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.627 [2024-12-05 12:14:45.642306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:11.627 [2024-12-05 12:14:45.647547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.627 [2024-12-05 12:14:45.647567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.627 [2024-12-05 12:14:45.647575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:11.627 [2024-12-05 12:14:45.652938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.627 [2024-12-05 12:14:45.652958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.627 [2024-12-05 12:14:45.652970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:11.627 [2024-12-05 12:14:45.658424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.627 [2024-12-05 12:14:45.658445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.627 [2024-12-05 12:14:45.658453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:11.627 [2024-12-05 12:14:45.663841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.627 [2024-12-05 12:14:45.663861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.627 [2024-12-05 12:14:45.663869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:11.627 [2024-12-05 12:14:45.669138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.627 [2024-12-05 12:14:45.669159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.627 [2024-12-05 12:14:45.669166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:11.627 [2024-12-05 12:14:45.674550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.627 [2024-12-05 12:14:45.674571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.627 [2024-12-05 12:14:45.674579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:11.627 [2024-12-05 12:14:45.680073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.627 [2024-12-05 12:14:45.680095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.627 [2024-12-05 12:14:45.680103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:11.627 [2024-12-05 12:14:45.685576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.627 [2024-12-05 12:14:45.685597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.627 [2024-12-05 12:14:45.685605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:11.627 [2024-12-05 12:14:45.690953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.627 [2024-12-05 12:14:45.690973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.627 [2024-12-05 12:14:45.690981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:11.627 [2024-12-05 12:14:45.696177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.627 [2024-12-05 12:14:45.696198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.627 [2024-12-05 12:14:45.696206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:11.627 [2024-12-05 12:14:45.701405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.627 [2024-12-05 12:14:45.701432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.627 [2024-12-05 12:14:45.701442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:11.627 [2024-12-05 12:14:45.706793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.627 [2024-12-05 12:14:45.706815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.627 [2024-12-05 12:14:45.706825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:11.627 [2024-12-05 12:14:45.712259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.627 [2024-12-05 12:14:45.712280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.627 [2024-12-05 12:14:45.712288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:11.627 [2024-12-05 12:14:45.717786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.627 [2024-12-05 12:14:45.717806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.627 [2024-12-05 12:14:45.717814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:11.627 [2024-12-05 12:14:45.723019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.627 [2024-12-05 12:14:45.723040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.627 [2024-12-05 12:14:45.723048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:11.627 [2024-12-05 12:14:45.728277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.627 [2024-12-05 12:14:45.728297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.627 [2024-12-05 12:14:45.728305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:11.628 [2024-12-05 12:14:45.733605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.628 [2024-12-05 12:14:45.733626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.628 [2024-12-05 12:14:45.733634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:11.628 [2024-12-05 12:14:45.739104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.628 [2024-12-05 12:14:45.739126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.628 [2024-12-05 12:14:45.739134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:11.628 [2024-12-05 12:14:45.744531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.628 [2024-12-05 12:14:45.744552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.628 [2024-12-05 12:14:45.744560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:11.628 [2024-12-05 12:14:45.749846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.628 [2024-12-05 12:14:45.749867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.628 [2024-12-05 12:14:45.749875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:11.628 [2024-12-05 12:14:45.755177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.628 [2024-12-05 12:14:45.755197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.628 [2024-12-05 12:14:45.755205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:11.628 [2024-12-05 12:14:45.760780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.628 [2024-12-05 12:14:45.760800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.628 [2024-12-05 12:14:45.760808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:11.628 [2024-12-05 12:14:45.766041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.628 [2024-12-05 12:14:45.766061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.628 [2024-12-05 12:14:45.766068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:11.628 [2024-12-05 12:14:45.771494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.628 [2024-12-05 12:14:45.771514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.628 [2024-12-05 12:14:45.771523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:11.628 [2024-12-05 12:14:45.776975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.628 [2024-12-05 12:14:45.776996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.628 [2024-12-05 12:14:45.777004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:11.628 [2024-12-05 12:14:45.782414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.628 [2024-12-05 12:14:45.782434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.628 [2024-12-05 12:14:45.782442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:11.628 [2024-12-05 12:14:45.787758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.628 [2024-12-05 12:14:45.787779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.628 [2024-12-05 12:14:45.787786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:11.628 [2024-12-05 12:14:45.793037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.628 [2024-12-05 12:14:45.793061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.628 [2024-12-05 12:14:45.793069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:11.628 [2024-12-05 12:14:45.798614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.628 [2024-12-05 12:14:45.798635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.628 [2024-12-05 12:14:45.798643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:11.628 [2024-12-05 12:14:45.803909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.628 [2024-12-05 12:14:45.803930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.628 [2024-12-05 12:14:45.803937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:11.628 [2024-12-05 12:14:45.809237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.628 [2024-12-05 12:14:45.809257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.628 [2024-12-05 12:14:45.809265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:11.628 [2024-12-05 12:14:45.814594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.628 [2024-12-05 12:14:45.814614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.628 [2024-12-05 12:14:45.814622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:11.628 [2024-12-05 12:14:45.820076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.628 [2024-12-05 12:14:45.820098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.628 [2024-12-05 12:14:45.820106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:11.886 [2024-12-05 12:14:45.825820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.886 [2024-12-05 12:14:45.825841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.886 [2024-12-05 12:14:45.825848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:11.886 [2024-12-05 12:14:45.831184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.886 [2024-12-05 12:14:45.831204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.886 [2024-12-05 12:14:45.831211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:11.886 [2024-12-05 12:14:45.836650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.886 [2024-12-05 12:14:45.836670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.886 [2024-12-05 12:14:45.836678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:11.886 [2024-12-05 12:14:45.842104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.886 [2024-12-05 12:14:45.842124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.886 [2024-12-05 12:14:45.842132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:11.886 [2024-12-05 12:14:45.847451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.886 [2024-12-05 12:14:45.847471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.886 [2024-12-05 12:14:45.847479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:11.886 [2024-12-05 12:14:45.852758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.886 [2024-12-05 12:14:45.852778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.886 [2024-12-05 12:14:45.852787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:11.886 [2024-12-05 12:14:45.858052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.886 [2024-12-05 12:14:45.858072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.886 [2024-12-05 12:14:45.858080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:11.886 [2024-12-05 12:14:45.863409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.886 [2024-12-05 12:14:45.863429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.886 [2024-12-05 12:14:45.863437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:11.887 [2024-12-05 12:14:45.868823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.887 [2024-12-05 12:14:45.868843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.887 [2024-12-05 12:14:45.868851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:11.887 [2024-12-05 12:14:45.874078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.887 [2024-12-05 12:14:45.874098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.887 [2024-12-05 12:14:45.874106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:11.887 [2024-12-05 12:14:45.879323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.887 [2024-12-05 12:14:45.879344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.887 [2024-12-05 12:14:45.879352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:11.887 [2024-12-05 12:14:45.884805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.887 [2024-12-05 12:14:45.884825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.887 [2024-12-05 12:14:45.884837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:11.887 [2024-12-05 12:14:45.890178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.887 [2024-12-05 12:14:45.890199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.887 [2024-12-05 12:14:45.890206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:11.887 [2024-12-05 12:14:45.895521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.887 [2024-12-05 12:14:45.895542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.887 [2024-12-05 12:14:45.895549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:11.887 [2024-12-05 12:14:45.900929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.887 [2024-12-05 12:14:45.900949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.887 [2024-12-05 12:14:45.900957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:11.887 [2024-12-05 12:14:45.906400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.887 [2024-12-05 12:14:45.906421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.887 [2024-12-05 12:14:45.906429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:11.887 [2024-12-05 12:14:45.911922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.887 [2024-12-05 12:14:45.911942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.887 [2024-12-05 12:14:45.911950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:11.887 [2024-12-05 12:14:45.917237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.887 [2024-12-05 12:14:45.917259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.887 [2024-12-05 12:14:45.917266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:11.887 [2024-12-05 12:14:45.922421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.887 [2024-12-05 12:14:45.922442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.887 [2024-12-05 12:14:45.922450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:11.887 [2024-12-05 12:14:45.927789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.887 [2024-12-05 12:14:45.927813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.887 [2024-12-05 12:14:45.927821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:11.887 [2024-12-05 12:14:45.933152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.887 [2024-12-05 12:14:45.933176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.887 [2024-12-05 12:14:45.933184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:11.887 [2024-12-05 12:14:45.938518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.887 [2024-12-05 12:14:45.938538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.887 [2024-12-05 12:14:45.938546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:11.887 [2024-12-05 12:14:45.943955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.887 [2024-12-05 12:14:45.943976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.887 [2024-12-05 12:14:45.943983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:11.887 [2024-12-05 12:14:45.949540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.887 [2024-12-05 12:14:45.949560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.887 [2024-12-05 12:14:45.949568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:11.887 [2024-12-05 12:14:45.954925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.887 [2024-12-05 12:14:45.954945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.887 [2024-12-05 12:14:45.954953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:11.887 [2024-12-05 12:14:45.960259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.887 [2024-12-05 12:14:45.960280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.887 [2024-12-05 12:14:45.960288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:11.887 [2024-12-05 12:14:45.965618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.887 [2024-12-05 12:14:45.965641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.887 [2024-12-05 12:14:45.965649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:11.887 [2024-12-05 12:14:45.970939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.887 [2024-12-05 12:14:45.970961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.887 [2024-12-05 12:14:45.970969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:11.887 [2024-12-05 12:14:45.976287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.887 [2024-12-05 12:14:45.976308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.887 [2024-12-05 12:14:45.976316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:11.887 [2024-12-05 12:14:45.981602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.887 [2024-12-05 12:14:45.981623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.887 [2024-12-05 12:14:45.981632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:11.887 [2024-12-05 12:14:45.986989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.887 [2024-12-05 12:14:45.987009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.887 [2024-12-05 12:14:45.987017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:11.887 [2024-12-05 12:14:45.992344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.887 [2024-12-05 12:14:45.992365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.887 [2024-12-05 12:14:45.992381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:11.887 [2024-12-05 12:14:45.997596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.887 [2024-12-05 12:14:45.997617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.887 [2024-12-05 12:14:45.997625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:11.887 [2024-12-05 12:14:46.002761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.887 [2024-12-05 12:14:46.002782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.887 [2024-12-05 12:14:46.002790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:11.887 [2024-12-05 12:14:46.007911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.887 [2024-12-05 12:14:46.007932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.888 [2024-12-05 12:14:46.007940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:11.888 [2024-12-05 12:14:46.013262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310)
00:31:11.888 [2024-12-05 12:14:46.013283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.888 [2024-12-05 12:14:46.013292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT
TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:11.888 [2024-12-05 12:14:46.018544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.888 [2024-12-05 12:14:46.018567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.888 [2024-12-05 12:14:46.018574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:11.888 [2024-12-05 12:14:46.023823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.888 [2024-12-05 12:14:46.023844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.888 [2024-12-05 12:14:46.023855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:11.888 [2024-12-05 12:14:46.029206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.888 [2024-12-05 12:14:46.029227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.888 [2024-12-05 12:14:46.029235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:11.888 [2024-12-05 12:14:46.034624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.888 [2024-12-05 12:14:46.034646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.888 [2024-12-05 12:14:46.034654] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:11.888 [2024-12-05 12:14:46.040347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.888 [2024-12-05 12:14:46.040374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.888 [2024-12-05 12:14:46.040383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:11.888 [2024-12-05 12:14:46.045958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.888 [2024-12-05 12:14:46.045980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.888 [2024-12-05 12:14:46.045988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:11.888 [2024-12-05 12:14:46.051462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.888 [2024-12-05 12:14:46.051484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.888 [2024-12-05 12:14:46.051492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:11.888 [2024-12-05 12:14:46.057670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.888 [2024-12-05 12:14:46.057692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:11.888 [2024-12-05 12:14:46.057699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:11.888 [2024-12-05 12:14:46.063078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.888 [2024-12-05 12:14:46.063100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.888 [2024-12-05 12:14:46.063108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:11.888 [2024-12-05 12:14:46.068515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.888 [2024-12-05 12:14:46.068539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.888 [2024-12-05 12:14:46.068548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:11.888 [2024-12-05 12:14:46.074118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.888 [2024-12-05 12:14:46.074144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.888 [2024-12-05 12:14:46.074152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:11.888 [2024-12-05 12:14:46.079458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:11.888 [2024-12-05 12:14:46.079480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 
lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.888 [2024-12-05 12:14:46.079488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:12.147 [2024-12-05 12:14:46.084885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.147 [2024-12-05 12:14:46.084907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.147 [2024-12-05 12:14:46.084915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:12.147 [2024-12-05 12:14:46.090250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.147 [2024-12-05 12:14:46.090271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.147 [2024-12-05 12:14:46.090278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:12.147 [2024-12-05 12:14:46.095667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.147 [2024-12-05 12:14:46.095689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.147 [2024-12-05 12:14:46.095696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:12.147 [2024-12-05 12:14:46.101116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.147 [2024-12-05 12:14:46.101137] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.147 [2024-12-05 12:14:46.101145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:12.147 [2024-12-05 12:14:46.106466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.147 [2024-12-05 12:14:46.106487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.147 [2024-12-05 12:14:46.106495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:12.147 [2024-12-05 12:14:46.111729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.147 [2024-12-05 12:14:46.111749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.147 [2024-12-05 12:14:46.111757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:12.147 [2024-12-05 12:14:46.117120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.147 [2024-12-05 12:14:46.117140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.147 [2024-12-05 12:14:46.117149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:12.147 [2024-12-05 12:14:46.122526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 
00:31:12.147 [2024-12-05 12:14:46.122547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.147 [2024-12-05 12:14:46.122554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:12.147 [2024-12-05 12:14:46.127763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.147 [2024-12-05 12:14:46.127783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.147 [2024-12-05 12:14:46.127791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:12.147 [2024-12-05 12:14:46.133167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.147 [2024-12-05 12:14:46.133189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.147 [2024-12-05 12:14:46.133196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:12.147 [2024-12-05 12:14:46.138585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.147 [2024-12-05 12:14:46.138606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.147 [2024-12-05 12:14:46.138614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:12.147 [2024-12-05 12:14:46.143999] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.147 [2024-12-05 12:14:46.144020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.147 [2024-12-05 12:14:46.144028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:12.147 [2024-12-05 12:14:46.149434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.147 [2024-12-05 12:14:46.149455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.147 [2024-12-05 12:14:46.149463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:12.147 [2024-12-05 12:14:46.154641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.147 [2024-12-05 12:14:46.154661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.147 [2024-12-05 12:14:46.154669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:12.147 [2024-12-05 12:14:46.159790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.147 [2024-12-05 12:14:46.159811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.147 [2024-12-05 12:14:46.159819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:31:12.147 [2024-12-05 12:14:46.164987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.147 [2024-12-05 12:14:46.165008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.147 [2024-12-05 12:14:46.165020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:12.147 [2024-12-05 12:14:46.170130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.147 [2024-12-05 12:14:46.170150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.147 [2024-12-05 12:14:46.170158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:12.147 [2024-12-05 12:14:46.175415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.147 [2024-12-05 12:14:46.175436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.147 [2024-12-05 12:14:46.175443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:12.148 [2024-12-05 12:14:46.180551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.148 [2024-12-05 12:14:46.180573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.148 [2024-12-05 12:14:46.180581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:12.148 [2024-12-05 12:14:46.185744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.148 [2024-12-05 12:14:46.185764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.148 [2024-12-05 12:14:46.185772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:12.148 [2024-12-05 12:14:46.190749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.148 [2024-12-05 12:14:46.190770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.148 [2024-12-05 12:14:46.190778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:12.148 [2024-12-05 12:14:46.195849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.148 [2024-12-05 12:14:46.195873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.148 [2024-12-05 12:14:46.195882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:12.148 [2024-12-05 12:14:46.201056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.148 [2024-12-05 12:14:46.201077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.148 [2024-12-05 12:14:46.201085] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:12.148 [2024-12-05 12:14:46.206235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.148 [2024-12-05 12:14:46.206255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.148 [2024-12-05 12:14:46.206262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:12.148 [2024-12-05 12:14:46.211424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.148 [2024-12-05 12:14:46.211449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.148 [2024-12-05 12:14:46.211457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:12.148 [2024-12-05 12:14:46.216647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.148 [2024-12-05 12:14:46.216668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.148 [2024-12-05 12:14:46.216675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:12.148 [2024-12-05 12:14:46.221811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.148 [2024-12-05 12:14:46.221832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:12.148 [2024-12-05 12:14:46.221839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:12.148 [2024-12-05 12:14:46.227124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.148 [2024-12-05 12:14:46.227145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.148 [2024-12-05 12:14:46.227153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:12.148 [2024-12-05 12:14:46.232476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.148 [2024-12-05 12:14:46.232498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.148 [2024-12-05 12:14:46.232506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:12.148 [2024-12-05 12:14:46.237812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.148 [2024-12-05 12:14:46.237833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.148 [2024-12-05 12:14:46.237841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:12.148 [2024-12-05 12:14:46.243250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.148 [2024-12-05 12:14:46.243271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 
lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.148 [2024-12-05 12:14:46.243279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:12.148 [2024-12-05 12:14:46.248610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.148 [2024-12-05 12:14:46.248630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.148 [2024-12-05 12:14:46.248638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:12.148 [2024-12-05 12:14:46.254164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.148 [2024-12-05 12:14:46.254184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.148 [2024-12-05 12:14:46.254192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:12.148 [2024-12-05 12:14:46.259341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.148 [2024-12-05 12:14:46.259362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.148 [2024-12-05 12:14:46.259377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:12.148 [2024-12-05 12:14:46.264441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.148 [2024-12-05 12:14:46.264463] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.148 [2024-12-05 12:14:46.264470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:12.148 [2024-12-05 12:14:46.269440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.148 [2024-12-05 12:14:46.269461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.148 [2024-12-05 12:14:46.269468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:12.148 [2024-12-05 12:14:46.275209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.148 [2024-12-05 12:14:46.275230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.148 [2024-12-05 12:14:46.275238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:12.148 [2024-12-05 12:14:46.282124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.148 [2024-12-05 12:14:46.282146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.148 [2024-12-05 12:14:46.282154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:12.148 [2024-12-05 12:14:46.289418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 
00:31:12.148 [2024-12-05 12:14:46.289440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.148 [2024-12-05 12:14:46.289449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:12.148 [2024-12-05 12:14:46.295722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.148 [2024-12-05 12:14:46.295747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.148 [2024-12-05 12:14:46.295755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:12.148 [2024-12-05 12:14:46.301963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.148 [2024-12-05 12:14:46.301985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.148 [2024-12-05 12:14:46.301992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:12.148 [2024-12-05 12:14:46.308889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.148 [2024-12-05 12:14:46.308914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.148 [2024-12-05 12:14:46.308923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:12.148 [2024-12-05 12:14:46.315614] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.148 [2024-12-05 12:14:46.315635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.148 [2024-12-05 12:14:46.315644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:12.148 [2024-12-05 12:14:46.322149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.148 [2024-12-05 12:14:46.322172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.148 [2024-12-05 12:14:46.322180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:12.148 [2024-12-05 12:14:46.328645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.148 [2024-12-05 12:14:46.328666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.148 [2024-12-05 12:14:46.328673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:12.148 [2024-12-05 12:14:46.334177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.149 [2024-12-05 12:14:46.334198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.149 [2024-12-05 12:14:46.334206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 
m:0 dnr:0 00:31:12.149 [2024-12-05 12:14:46.339548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.149 [2024-12-05 12:14:46.339570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.149 [2024-12-05 12:14:46.339577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:12.407 [2024-12-05 12:14:46.344925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.407 [2024-12-05 12:14:46.344946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.407 [2024-12-05 12:14:46.344954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:12.407 [2024-12-05 12:14:46.350248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.407 [2024-12-05 12:14:46.350269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.407 [2024-12-05 12:14:46.350277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:12.407 [2024-12-05 12:14:46.355717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.407 [2024-12-05 12:14:46.355738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.407 [2024-12-05 12:14:46.355746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:12.407 5765.50 IOPS, 720.69 MiB/s [2024-12-05T11:14:46.603Z] [2024-12-05 12:14:46.362177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d310) 00:31:12.407 [2024-12-05 12:14:46.362198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.407 [2024-12-05 12:14:46.362206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:12.407 00:31:12.407 Latency(us) 00:31:12.407 [2024-12-05T11:14:46.603Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:12.407 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:31:12.407 nvme0n1 : 2.00 5766.29 720.79 0.00 0.00 2771.50 475.92 8550.89 00:31:12.407 [2024-12-05T11:14:46.603Z] =================================================================================================================== 00:31:12.407 [2024-12-05T11:14:46.603Z] Total : 5766.29 720.79 0.00 0.00 2771.50 475.92 8550.89 00:31:12.407 { 00:31:12.407 "results": [ 00:31:12.407 { 00:31:12.407 "job": "nvme0n1", 00:31:12.407 "core_mask": "0x2", 00:31:12.407 "workload": "randread", 00:31:12.407 "status": "finished", 00:31:12.407 "queue_depth": 16, 00:31:12.407 "io_size": 131072, 00:31:12.407 "runtime": 2.002502, 00:31:12.407 "iops": 5766.2863757439445, 00:31:12.407 "mibps": 720.7857969679931, 00:31:12.407 "io_failed": 0, 00:31:12.407 "io_timeout": 0, 00:31:12.407 "avg_latency_us": 2771.498543674506, 00:31:12.407 "min_latency_us": 475.9161904761905, 00:31:12.407 "max_latency_us": 8550.887619047619 00:31:12.407 } 00:31:12.407 ], 00:31:12.407 "core_count": 1 00:31:12.407 } 00:31:12.407 12:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:12.407 
12:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:12.407 12:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:12.407 | .driver_specific 00:31:12.407 | .nvme_error 00:31:12.407 | .status_code 00:31:12.407 | .command_transient_transport_error' 00:31:12.407 12:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:12.407 12:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 373 > 0 )) 00:31:12.407 12:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 231211 00:31:12.407 12:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 231211 ']' 00:31:12.407 12:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 231211 00:31:12.407 12:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:31:12.407 12:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:12.666 12:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 231211 00:31:12.666 12:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:12.666 12:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:12.666 12:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 231211' 00:31:12.666 killing process with pid 231211 00:31:12.666 12:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@973 -- # kill 231211 00:31:12.666 Received shutdown signal, test time was about 2.000000 seconds 00:31:12.666 00:31:12.666 Latency(us) 00:31:12.666 [2024-12-05T11:14:46.862Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:12.666 [2024-12-05T11:14:46.862Z] =================================================================================================================== 00:31:12.666 [2024-12-05T11:14:46.862Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:12.666 12:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 231211 00:31:12.666 12:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:31:12.666 12:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:31:12.666 12:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:31:12.666 12:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:31:12.666 12:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:31:12.666 12:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=231792 00:31:12.666 12:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 231792 /var/tmp/bperf.sock 00:31:12.666 12:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:31:12.666 12:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 231792 ']' 00:31:12.666 12:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:12.666 12:14:46 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:12.666 12:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:12.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:12.666 12:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:12.666 12:14:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:12.666 [2024-12-05 12:14:46.853715] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:31:12.666 [2024-12-05 12:14:46.853767] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid231792 ] 00:31:12.924 [2024-12-05 12:14:46.927230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:12.924 [2024-12-05 12:14:46.964510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:12.924 12:14:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:12.924 12:14:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:31:12.924 12:14:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:12.924 12:14:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:13.183 12:14:47 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:13.183 12:14:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.183 12:14:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:13.183 12:14:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.183 12:14:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:13.183 12:14:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:13.442 nvme0n1 00:31:13.701 12:14:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:31:13.701 12:14:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.701 12:14:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:13.701 12:14:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.701 12:14:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:13.701 12:14:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:13.701 Running I/O for 2 seconds... 
00:31:13.701 [2024-12-05 12:14:47.745871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee3060 00:31:13.701 [2024-12-05 12:14:47.746796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.701 [2024-12-05 12:14:47.746826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:13.701 [2024-12-05 12:14:47.754532] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef6cc8 00:31:13.701 [2024-12-05 12:14:47.755422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.701 [2024-12-05 12:14:47.755445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:13.701 [2024-12-05 12:14:47.763985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef46d0 00:31:13.701 [2024-12-05 12:14:47.764940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.701 [2024-12-05 12:14:47.764960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:13.701 [2024-12-05 12:14:47.773690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016eed0b0 00:31:13.701 [2024-12-05 12:14:47.774850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:18565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.701 [2024-12-05 12:14:47.774869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:83 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:13.701 [2024-12-05 12:14:47.782319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef4f40 00:31:13.701 [2024-12-05 12:14:47.783169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.701 [2024-12-05 12:14:47.783188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:13.701 [2024-12-05 12:14:47.791414] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016eeb760 00:31:13.701 [2024-12-05 12:14:47.792203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.701 [2024-12-05 12:14:47.792222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:13.701 [2024-12-05 12:14:47.800225] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016efd208 00:31:13.701 [2024-12-05 12:14:47.800701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.701 [2024-12-05 12:14:47.800720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:13.701 [2024-12-05 12:14:47.811585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016eef270 00:31:13.701 [2024-12-05 12:14:47.812978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:21807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.701 [2024-12-05 12:14:47.812996] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:13.701 [2024-12-05 12:14:47.818079] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef7100 00:31:13.701 [2024-12-05 12:14:47.818867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.701 [2024-12-05 12:14:47.818886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:13.701 [2024-12-05 12:14:47.829147] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee5ec8 00:31:13.701 [2024-12-05 12:14:47.830297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.701 [2024-12-05 12:14:47.830316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:13.701 [2024-12-05 12:14:47.836503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016eddc00 00:31:13.701 [2024-12-05 12:14:47.837187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.702 [2024-12-05 12:14:47.837206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:13.702 [2024-12-05 12:14:47.845532] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee0ea0 00:31:13.702 [2024-12-05 12:14:47.846225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:1780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.702 [2024-12-05 12:14:47.846244] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:13.702 [2024-12-05 12:14:47.854554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016eea680 00:31:13.702 [2024-12-05 12:14:47.855255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.702 [2024-12-05 12:14:47.855273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:13.702 [2024-12-05 12:14:47.863654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef1ca0 00:31:13.702 [2024-12-05 12:14:47.864323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.702 [2024-12-05 12:14:47.864341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:13.702 [2024-12-05 12:14:47.872946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef7da8 00:31:13.702 [2024-12-05 12:14:47.873679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.702 [2024-12-05 12:14:47.873698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:13.702 [2024-12-05 12:14:47.882013] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee5220 00:31:13.702 [2024-12-05 12:14:47.882804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:13.702 [2024-12-05 12:14:47.882826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:13.702 [2024-12-05 12:14:47.891217] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee6fa8 00:31:13.702 [2024-12-05 12:14:47.891817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.702 [2024-12-05 12:14:47.891835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:13.961 [2024-12-05 12:14:47.900915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016eebfd0 00:31:13.961 [2024-12-05 12:14:47.901626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.961 [2024-12-05 12:14:47.901645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:13.961 [2024-12-05 12:14:47.909179] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef92c0 00:31:13.961 [2024-12-05 12:14:47.910007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:13653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.961 [2024-12-05 12:14:47.910025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:13.962 [2024-12-05 12:14:47.918121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016eebb98 00:31:13.962 [2024-12-05 12:14:47.918878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13978 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.962 [2024-12-05 12:14:47.918897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:13.962 [2024-12-05 12:14:47.927212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016eecc78 00:31:13.962 [2024-12-05 12:14:47.928011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:8664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.962 [2024-12-05 12:14:47.928030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:13.962 [2024-12-05 12:14:47.936178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee8d30 00:31:13.962 [2024-12-05 12:14:47.936988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.962 [2024-12-05 12:14:47.937006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:13.962 [2024-12-05 12:14:47.945235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016efc998 00:31:13.962 [2024-12-05 12:14:47.946067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.962 [2024-12-05 12:14:47.946085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:13.962 [2024-12-05 12:14:47.954226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016efb8b8 00:31:13.962 [2024-12-05 12:14:47.954996] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.962 [2024-12-05 12:14:47.955014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:13.962 [2024-12-05 12:14:47.963203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016edfdc0 00:31:13.962 [2024-12-05 12:14:47.963976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.962 [2024-12-05 12:14:47.964001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:13.962 [2024-12-05 12:14:47.972219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee5a90 00:31:13.962 [2024-12-05 12:14:47.973028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.962 [2024-12-05 12:14:47.973046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:13.962 [2024-12-05 12:14:47.981189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef8618 00:31:13.962 [2024-12-05 12:14:47.982002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.962 [2024-12-05 12:14:47.982019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:13.962 [2024-12-05 12:14:47.990186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016efdeb0 00:31:13.962 [2024-12-05 12:14:47.990957] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:18785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.962 [2024-12-05 12:14:47.990976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:13.962 [2024-12-05 12:14:47.999130] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee99d8 00:31:13.962 [2024-12-05 12:14:47.999932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:13259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.962 [2024-12-05 12:14:47.999949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:13.962 [2024-12-05 12:14:48.008348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016eeff18 00:31:13.962 [2024-12-05 12:14:48.009149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.962 [2024-12-05 12:14:48.009167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:13.962 [2024-12-05 12:14:48.017500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef0ff8 00:31:13.962 [2024-12-05 12:14:48.018307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.962 [2024-12-05 12:14:48.018325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:13.962 [2024-12-05 12:14:48.026586] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with 
pdu=0x200016ef20d8 00:31:13.962 [2024-12-05 12:14:48.027391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.962 [2024-12-05 12:14:48.027409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:13.962 [2024-12-05 12:14:48.035604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef8e88 00:31:13.962 [2024-12-05 12:14:48.036390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.962 [2024-12-05 12:14:48.036408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:13.962 [2024-12-05 12:14:48.044897] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016efeb58 00:31:13.962 [2024-12-05 12:14:48.045763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:8837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.962 [2024-12-05 12:14:48.045782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:13.962 [2024-12-05 12:14:48.053931] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef6458 00:31:13.962 [2024-12-05 12:14:48.054847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:6130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.962 [2024-12-05 12:14:48.054866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:13.962 [2024-12-05 12:14:48.062908] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef7538 00:31:13.962 [2024-12-05 12:14:48.063821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:11317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.962 [2024-12-05 12:14:48.063840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:13.962 [2024-12-05 12:14:48.071904] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee88f8 00:31:13.962 [2024-12-05 12:14:48.072840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.962 [2024-12-05 12:14:48.072858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:13.962 [2024-12-05 12:14:48.080864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee7818 00:31:13.962 [2024-12-05 12:14:48.081754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.962 [2024-12-05 12:14:48.081771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:13.962 [2024-12-05 12:14:48.089836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016efd208 00:31:13.962 [2024-12-05 12:14:48.090732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:11826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.962 [2024-12-05 12:14:48.090750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:13.962 [2024-12-05 
12:14:48.098827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016efc128 00:31:13.962 [2024-12-05 12:14:48.099747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.962 [2024-12-05 12:14:48.099766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:13.962 [2024-12-05 12:14:48.107804] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee0630 00:31:13.962 [2024-12-05 12:14:48.108746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:6674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.962 [2024-12-05 12:14:48.108764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:13.962 [2024-12-05 12:14:48.117115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee6b70 00:31:13.962 [2024-12-05 12:14:48.118001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:19197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.962 [2024-12-05 12:14:48.118019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:13.962 [2024-12-05 12:14:48.126034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef7da8 00:31:13.962 [2024-12-05 12:14:48.126923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.962 [2024-12-05 12:14:48.126941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 
sqhd:0056 p:0 m:0 dnr:0 00:31:13.962 [2024-12-05 12:14:48.135551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee4de8 00:31:13.962 [2024-12-05 12:14:48.136718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.962 [2024-12-05 12:14:48.136736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:13.962 [2024-12-05 12:14:48.144727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016eddc00 00:31:13.962 [2024-12-05 12:14:48.145882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.962 [2024-12-05 12:14:48.145900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:13.962 [2024-12-05 12:14:48.153987] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016eea248 00:31:13.962 [2024-12-05 12:14:48.155146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.963 [2024-12-05 12:14:48.155164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:14.222 [2024-12-05 12:14:48.163683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef1430 00:31:14.222 [2024-12-05 12:14:48.165063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.222 [2024-12-05 12:14:48.165084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:14.222 [2024-12-05 12:14:48.173102] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef3e60 00:31:14.222 [2024-12-05 12:14:48.174644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:24333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.222 [2024-12-05 12:14:48.174662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:14.222 [2024-12-05 12:14:48.179428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016edfdc0 00:31:14.222 [2024-12-05 12:14:48.180084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:20131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.222 [2024-12-05 12:14:48.180102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:14.222 [2024-12-05 12:14:48.188536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016eec408 00:31:14.222 [2024-12-05 12:14:48.189198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.222 [2024-12-05 12:14:48.189216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:14.222 [2024-12-05 12:14:48.197549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef0788 00:31:14.222 [2024-12-05 12:14:48.198232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:8760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.222 [2024-12-05 12:14:48.198254] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:14.222 [2024-12-05 12:14:48.206566] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef1868 00:31:14.222 [2024-12-05 12:14:48.207237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:3257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.222 [2024-12-05 12:14:48.207256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:14.222 [2024-12-05 12:14:48.215875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee95a0 00:31:14.222 [2024-12-05 12:14:48.216692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.223 [2024-12-05 12:14:48.216710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:14.223 [2024-12-05 12:14:48.225306] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016eec840 00:31:14.223 [2024-12-05 12:14:48.226233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.223 [2024-12-05 12:14:48.226252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:14.223 [2024-12-05 12:14:48.233675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016eec408 00:31:14.223 [2024-12-05 12:14:48.234423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.223 
[2024-12-05 12:14:48.234441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:14.223 [2024-12-05 12:14:48.242325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016eed4e8 00:31:14.223 [2024-12-05 12:14:48.243066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.223 [2024-12-05 12:14:48.243084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:14.223 [2024-12-05 12:14:48.252255] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016eef270 00:31:14.223 [2024-12-05 12:14:48.253167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.223 [2024-12-05 12:14:48.253186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:14.223 [2024-12-05 12:14:48.261481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016eee190 00:31:14.223 [2024-12-05 12:14:48.262383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.223 [2024-12-05 12:14:48.262401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:14.223 [2024-12-05 12:14:48.270655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee5658 00:31:14.223 [2024-12-05 12:14:48.271600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16761 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:31:14.223 [2024-12-05 12:14:48.271619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:14.223 [2024-12-05 12:14:48.279722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee6fa8 00:31:14.223 [2024-12-05 12:14:48.280641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.223 [2024-12-05 12:14:48.280659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:14.223 [2024-12-05 12:14:48.288720] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef8e88 00:31:14.223 [2024-12-05 12:14:48.289600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.223 [2024-12-05 12:14:48.289619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:14.223 [2024-12-05 12:14:48.297629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef20d8 00:31:14.223 [2024-12-05 12:14:48.298518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.223 [2024-12-05 12:14:48.298536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:14.223 [2024-12-05 12:14:48.306618] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef0ff8 00:31:14.223 [2024-12-05 12:14:48.307545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:102 nsid:1 lba:5336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.223 [2024-12-05 12:14:48.307564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:14.223 [2024-12-05 12:14:48.315595] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016eebb98 00:31:14.223 [2024-12-05 12:14:48.316522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:11349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.223 [2024-12-05 12:14:48.316540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:14.223 [2024-12-05 12:14:48.324827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016eecc78 00:31:14.223 [2024-12-05 12:14:48.325709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.223 [2024-12-05 12:14:48.325728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:14.223 [2024-12-05 12:14:48.333867] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016efb048 00:31:14.223 [2024-12-05 12:14:48.334757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.223 [2024-12-05 12:14:48.334775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:14.223 [2024-12-05 12:14:48.342263] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef57b0 00:31:14.223 [2024-12-05 12:14:48.343122] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:12425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.223 [2024-12-05 12:14:48.343140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:14.223 [2024-12-05 12:14:48.352218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee9e10 00:31:14.223 [2024-12-05 12:14:48.353277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:24452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.223 [2024-12-05 12:14:48.353295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:14.223 [2024-12-05 12:14:48.361204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016efda78 00:31:14.223 [2024-12-05 12:14:48.362207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.223 [2024-12-05 12:14:48.362227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:14.223 [2024-12-05 12:14:48.370173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef81e0 00:31:14.223 [2024-12-05 12:14:48.371208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.223 [2024-12-05 12:14:48.371227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:14.223 [2024-12-05 12:14:48.379154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016eeff18 
00:31:14.223 [2024-12-05 12:14:48.380197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:11027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.223 [2024-12-05 12:14:48.380215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:14.223 [2024-12-05 12:14:48.388157] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee0a68 00:31:14.223 [2024-12-05 12:14:48.389231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.223 [2024-12-05 12:14:48.389250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:14.223 [2024-12-05 12:14:48.397076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016efb480 00:31:14.223 [2024-12-05 12:14:48.398089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.223 [2024-12-05 12:14:48.398108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:14.223 [2024-12-05 12:14:48.406059] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef3a28 00:31:14.223 [2024-12-05 12:14:48.407067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:20915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.223 [2024-12-05 12:14:48.407086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:14.223 [2024-12-05 12:14:48.415066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1eb2180) with pdu=0x200016efc128 00:31:14.223 [2024-12-05 12:14:48.416092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:14322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.223 [2024-12-05 12:14:48.416111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:14.483 [2024-12-05 12:14:48.424518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016efeb58 00:31:14.483 [2024-12-05 12:14:48.425639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.483 [2024-12-05 12:14:48.425656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:14.483 [2024-12-05 12:14:48.433049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016edece0 00:31:14.483 [2024-12-05 12:14:48.434154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.483 [2024-12-05 12:14:48.434175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:14.483 [2024-12-05 12:14:48.442409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee0ea0 00:31:14.483 [2024-12-05 12:14:48.443651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:7588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.483 [2024-12-05 12:14:48.443669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:14.483 [2024-12-05 12:14:48.450739] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef20d8 00:31:14.483 [2024-12-05 12:14:48.451667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.483 [2024-12-05 12:14:48.451685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:14.483 [2024-12-05 12:14:48.459631] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef0ff8 00:31:14.483 [2024-12-05 12:14:48.460568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.483 [2024-12-05 12:14:48.460586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:14.483 [2024-12-05 12:14:48.468858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef7da8 00:31:14.483 [2024-12-05 12:14:48.469547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.483 [2024-12-05 12:14:48.469565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:14.483 [2024-12-05 12:14:48.476993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef3a28 00:31:14.483 [2024-12-05 12:14:48.477803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.483 [2024-12-05 12:14:48.477821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 
00:31:14.483 [2024-12-05 12:14:48.486015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016efdeb0 00:31:14.483 [2024-12-05 12:14:48.486820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.483 [2024-12-05 12:14:48.486838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:14.483 [2024-12-05 12:14:48.495099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee9e10 00:31:14.483 [2024-12-05 12:14:48.495995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.483 [2024-12-05 12:14:48.496014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:14.483 [2024-12-05 12:14:48.504353] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016eed4e8 00:31:14.483 [2024-12-05 12:14:48.505294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:20444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.483 [2024-12-05 12:14:48.505312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:14.483 [2024-12-05 12:14:48.513704] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016efeb58 00:31:14.483 [2024-12-05 12:14:48.514633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.483 [2024-12-05 12:14:48.514651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:106 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:14.483 [2024-12-05 12:14:48.522794] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016efb048 00:31:14.483 [2024-12-05 12:14:48.523727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.483 [2024-12-05 12:14:48.523745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:14.483 [2024-12-05 12:14:48.531959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef20d8 00:31:14.483 [2024-12-05 12:14:48.532882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.483 [2024-12-05 12:14:48.532900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:14.483 [2024-12-05 12:14:48.541028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef8e88 00:31:14.483 [2024-12-05 12:14:48.541944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.483 [2024-12-05 12:14:48.541962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:14.483 [2024-12-05 12:14:48.551137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016eea248 00:31:14.483 [2024-12-05 12:14:48.552547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.483 [2024-12-05 12:14:48.552565] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:14.483 [2024-12-05 12:14:48.560244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee0ea0 00:31:14.483 [2024-12-05 12:14:48.561628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.483 [2024-12-05 12:14:48.561646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:14.483 [2024-12-05 12:14:48.566895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016eedd58 00:31:14.483 [2024-12-05 12:14:48.567687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.483 [2024-12-05 12:14:48.567705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:14.483 [2024-12-05 12:14:48.576004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016edf988 00:31:14.483 [2024-12-05 12:14:48.576803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:18667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.483 [2024-12-05 12:14:48.576821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:14.483 [2024-12-05 12:14:48.585416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef6020 00:31:14.483 [2024-12-05 12:14:48.586216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.483 [2024-12-05 12:14:48.586234] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:14.483 [2024-12-05 12:14:48.594347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016efc128 00:31:14.483 [2024-12-05 12:14:48.595128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.483 [2024-12-05 12:14:48.595147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:14.483 [2024-12-05 12:14:48.603310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee7c50 00:31:14.483 [2024-12-05 12:14:48.604091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.483 [2024-12-05 12:14:48.604109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:14.483 [2024-12-05 12:14:48.612291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee9e10 00:31:14.483 [2024-12-05 12:14:48.613099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.483 [2024-12-05 12:14:48.613117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:14.483 [2024-12-05 12:14:48.622499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef0788 00:31:14.483 [2024-12-05 12:14:48.623754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:14.483 [2024-12-05 12:14:48.623772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:14.483 [2024-12-05 12:14:48.631942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016efeb58 00:31:14.483 [2024-12-05 12:14:48.633361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.483 [2024-12-05 12:14:48.633386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:14.483 [2024-12-05 12:14:48.640293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef57b0 00:31:14.483 [2024-12-05 12:14:48.641273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.483 [2024-12-05 12:14:48.641291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:14.483 [2024-12-05 12:14:48.648580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef0350 00:31:14.483 [2024-12-05 12:14:48.649620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:12084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.483 [2024-12-05 12:14:48.649638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:14.483 [2024-12-05 12:14:48.657981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016efe720 00:31:14.483 [2024-12-05 12:14:48.659148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8953 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.483 [2024-12-05 12:14:48.659166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:14.483 [2024-12-05 12:14:48.667048] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee8d30 00:31:14.484 [2024-12-05 12:14:48.668225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:18427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.484 [2024-12-05 12:14:48.668245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:14.484 [2024-12-05 12:14:48.675280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef46d0 00:31:14.484 [2024-12-05 12:14:48.676199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:9578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.484 [2024-12-05 12:14:48.676218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:14.742 [2024-12-05 12:14:48.684414] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee95a0 00:31:14.742 [2024-12-05 12:14:48.685265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.742 [2024-12-05 12:14:48.685283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:14.742 [2024-12-05 12:14:48.693841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016eee190 00:31:14.742 [2024-12-05 12:14:48.694903] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.742 [2024-12-05 12:14:48.694920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:14.742 [2024-12-05 12:14:48.703243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef6020 00:31:14.742 [2024-12-05 12:14:48.704425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.742 [2024-12-05 12:14:48.704444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:14.742 [2024-12-05 12:14:48.712319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef2d80 00:31:14.743 [2024-12-05 12:14:48.713473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.743 [2024-12-05 12:14:48.713491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:14.743 [2024-12-05 12:14:48.721538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee4140 00:31:14.743 [2024-12-05 12:14:48.722716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.743 [2024-12-05 12:14:48.722735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:14.743 [2024-12-05 12:14:48.730141] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef4298 00:31:14.743 [2024-12-05 12:14:48.731253] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.743 [2024-12-05 12:14:48.731272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:14.743 [2024-12-05 12:14:48.738663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee8088 00:31:14.743 28252.00 IOPS, 110.36 MiB/s [2024-12-05T11:14:48.939Z] [2024-12-05 12:14:48.739652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.743 [2024-12-05 12:14:48.739668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:14.743 [2024-12-05 12:14:48.748042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee9168 00:31:14.743 [2024-12-05 12:14:48.749116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.743 [2024-12-05 12:14:48.749135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:14.743 [2024-12-05 12:14:48.757551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016efd208 00:31:14.743 [2024-12-05 12:14:48.758786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:2541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.743 [2024-12-05 12:14:48.758804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:14.743 [2024-12-05 12:14:48.765500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1eb2180) with pdu=0x200016ee4140 00:31:14.743 [2024-12-05 12:14:48.766230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.743 [2024-12-05 12:14:48.766249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:14.743 [2024-12-05 12:14:48.775013] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016eecc78 00:31:14.743 [2024-12-05 12:14:48.776182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:8051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.743 [2024-12-05 12:14:48.776200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:14.743 [2024-12-05 12:14:48.785964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef8e88 00:31:14.743 [2024-12-05 12:14:48.787519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:21333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.743 [2024-12-05 12:14:48.787537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:14.743 [2024-12-05 12:14:48.792340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef81e0 00:31:14.743 [2024-12-05 12:14:48.793048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.743 [2024-12-05 12:14:48.793066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:14.743 [2024-12-05 12:14:48.801862] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee0ea0 00:31:14.743 [2024-12-05 12:14:48.802845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.743 [2024-12-05 12:14:48.802863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:14.743 [2024-12-05 12:14:48.811045] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee6b70 00:31:14.743 [2024-12-05 12:14:48.811537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.743 [2024-12-05 12:14:48.811555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:14.743 [2024-12-05 12:14:48.822480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee1b48 00:31:14.743 [2024-12-05 12:14:48.824056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.743 [2024-12-05 12:14:48.824074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:14.743 [2024-12-05 12:14:48.829026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016edece0 00:31:14.743 [2024-12-05 12:14:48.829892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.743 [2024-12-05 12:14:48.829910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:31:14.743 [2024-12-05 12:14:48.840025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016efac10 00:31:14.743 [2024-12-05 12:14:48.841237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:11949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.743 [2024-12-05 12:14:48.841256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:14.743 [2024-12-05 12:14:48.848575] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef0788 00:31:14.743 [2024-12-05 12:14:48.849784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.743 [2024-12-05 12:14:48.849802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:14.743 [2024-12-05 12:14:48.857685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016edf988 00:31:14.743 [2024-12-05 12:14:48.858922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.743 [2024-12-05 12:14:48.858940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:14.743 [2024-12-05 12:14:48.866512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef7970 00:31:14.743 [2024-12-05 12:14:48.867262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.743 [2024-12-05 12:14:48.867280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:34 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:14.743 [2024-12-05 12:14:48.875115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee6738 00:31:14.743 [2024-12-05 12:14:48.876440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:18595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.743 [2024-12-05 12:14:48.876458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:14.743 [2024-12-05 12:14:48.884233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016edf118 00:31:14.743 [2024-12-05 12:14:48.885248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.743 [2024-12-05 12:14:48.885266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:14.743 [2024-12-05 12:14:48.893744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef7da8 00:31:14.743 [2024-12-05 12:14:48.894746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.743 [2024-12-05 12:14:48.894764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:14.743 [2024-12-05 12:14:48.903002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef6458 00:31:14.743 [2024-12-05 12:14:48.904253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.743 [2024-12-05 12:14:48.904277] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:14.743 [2024-12-05 12:14:48.911578] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016eee5c8 00:31:14.743 [2024-12-05 12:14:48.912711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:18361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.743 [2024-12-05 12:14:48.912730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:14.743 [2024-12-05 12:14:48.920178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016efef90 00:31:14.743 [2024-12-05 12:14:48.921130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.743 [2024-12-05 12:14:48.921149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:14.743 [2024-12-05 12:14:48.929757] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016eed0b0 00:31:14.743 [2024-12-05 12:14:48.931013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.743 [2024-12-05 12:14:48.931032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:14.743 [2024-12-05 12:14:48.938387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee0a68 00:31:14.743 [2024-12-05 12:14:48.939347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.743 [2024-12-05 
12:14:48.939373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:15.003 [2024-12-05 12:14:48.949434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016eec408 00:31:15.003 [2024-12-05 12:14:48.950908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.003 [2024-12-05 12:14:48.950926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:15.003 [2024-12-05 12:14:48.956103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef4f40 00:31:15.003 [2024-12-05 12:14:48.956864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.003 [2024-12-05 12:14:48.956882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:15.003 [2024-12-05 12:14:48.967059] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee1f80 00:31:15.003 [2024-12-05 12:14:48.968279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.003 [2024-12-05 12:14:48.968297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:15.003 [2024-12-05 12:14:48.976031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee9168 00:31:15.003 [2024-12-05 12:14:48.976952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12417 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:31:15.003 [2024-12-05 12:14:48.976970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:15.003 [2024-12-05 12:14:48.984175] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef0788 00:31:15.003 [2024-12-05 12:14:48.985126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.003 [2024-12-05 12:14:48.985144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:15.003 [2024-12-05 12:14:48.994982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef0788 00:31:15.003 [2024-12-05 12:14:48.996517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.003 [2024-12-05 12:14:48.996534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:15.003 [2024-12-05 12:14:49.001525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee8088 00:31:15.003 [2024-12-05 12:14:49.002283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:10606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.003 [2024-12-05 12:14:49.002300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:15.003 [2024-12-05 12:14:49.010612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef96f8 00:31:15.003 [2024-12-05 12:14:49.011308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:33 nsid:1 lba:18016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.003 [2024-12-05 12:14:49.011325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:15.003 [2024-12-05 12:14:49.021098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef1430 00:31:15.003 [2024-12-05 12:14:49.022042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.003 [2024-12-05 12:14:49.022060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:15.003 [2024-12-05 12:14:49.030212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef96f8 00:31:15.003 [2024-12-05 12:14:49.031159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.003 [2024-12-05 12:14:49.031177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:15.003 [2024-12-05 12:14:49.039113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef96f8 00:31:15.003 [2024-12-05 12:14:49.040057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.003 [2024-12-05 12:14:49.040075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:15.003 [2024-12-05 12:14:49.048050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef92c0 00:31:15.003 [2024-12-05 12:14:49.048993] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.003 [2024-12-05 12:14:49.049012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:15.003 [2024-12-05 12:14:49.056631] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016efc560 00:31:15.003 [2024-12-05 12:14:49.057624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:13748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.003 [2024-12-05 12:14:49.057642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:15.003 [2024-12-05 12:14:49.065588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef4298 00:31:15.003 [2024-12-05 12:14:49.066445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.003 [2024-12-05 12:14:49.066464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:15.003 [2024-12-05 12:14:49.075144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016efb8b8 00:31:15.003 [2024-12-05 12:14:49.076264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.003 [2024-12-05 12:14:49.076283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:15.003 [2024-12-05 12:14:49.084519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016efa7d8 
00:31:15.003 [2024-12-05 12:14:49.085802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:3389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.003 [2024-12-05 12:14:49.085820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:15.003 [2024-12-05 12:14:49.093027] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee5658 00:31:15.003 [2024-12-05 12:14:49.093979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.003 [2024-12-05 12:14:49.093998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:15.003 [2024-12-05 12:14:49.102059] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef20d8 00:31:15.003 [2024-12-05 12:14:49.103014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.003 [2024-12-05 12:14:49.103032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:15.004 [2024-12-05 12:14:49.112717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef20d8 00:31:15.004 [2024-12-05 12:14:49.114104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:14612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.004 [2024-12-05 12:14:49.114121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:15.004 [2024-12-05 12:14:49.120555] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1eb2180) with pdu=0x200016eebfd0 00:31:15.004 [2024-12-05 12:14:49.121517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:9806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.004 [2024-12-05 12:14:49.121537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:15.004 [2024-12-05 12:14:49.129533] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016eec840 00:31:15.004 [2024-12-05 12:14:49.130485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.004 [2024-12-05 12:14:49.130504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:15.004 [2024-12-05 12:14:49.137774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016edece0 00:31:15.004 [2024-12-05 12:14:49.139042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.004 [2024-12-05 12:14:49.139063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:15.004 [2024-12-05 12:14:49.146909] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016eed4e8 00:31:15.004 [2024-12-05 12:14:49.147889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.004 [2024-12-05 12:14:49.147908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:15.004 [2024-12-05 12:14:49.155784] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef9b30 00:31:15.004 [2024-12-05 12:14:49.156656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.004 [2024-12-05 12:14:49.156675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:15.004 [2024-12-05 12:14:49.165979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee5220 00:31:15.004 [2024-12-05 12:14:49.167264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.004 [2024-12-05 12:14:49.167283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:15.004 [2024-12-05 12:14:49.174998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016eff3c8 00:31:15.004 [2024-12-05 12:14:49.176283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:24457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.004 [2024-12-05 12:14:49.176301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:15.004 [2024-12-05 12:14:49.182759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef2948 00:31:15.004 [2024-12-05 12:14:49.183563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.004 [2024-12-05 12:14:49.183581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 
00:31:15.004 [2024-12-05 12:14:49.192015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016efc998 00:31:15.004 [2024-12-05 12:14:49.193059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.004 [2024-12-05 12:14:49.193078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:15.263 [2024-12-05 12:14:49.202513] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef2d80 00:31:15.263 [2024-12-05 12:14:49.203892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.263 [2024-12-05 12:14:49.203910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:15.263 [2024-12-05 12:14:49.210358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef81e0 00:31:15.263 [2024-12-05 12:14:49.211229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:3126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.263 [2024-12-05 12:14:49.211247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:15.263 [2024-12-05 12:14:49.219670] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee6300 00:31:15.263 [2024-12-05 12:14:49.220938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:6444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.263 [2024-12-05 12:14:49.220956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:41 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:15.263 [2024-12-05 12:14:49.228964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef2d80 00:31:15.263 [2024-12-05 12:14:49.229760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.263 [2024-12-05 12:14:49.229779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:15.263 [2024-12-05 12:14:49.237582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016edece0 00:31:15.263 [2024-12-05 12:14:49.238884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:18738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.263 [2024-12-05 12:14:49.238903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:15.263 [2024-12-05 12:14:49.245285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016efac10 00:31:15.263 [2024-12-05 12:14:49.246044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.263 [2024-12-05 12:14:49.246062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:15.263 [2024-12-05 12:14:49.256142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016eddc00 00:31:15.263 [2024-12-05 12:14:49.257197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:25143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.263 [2024-12-05 12:14:49.257217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:15.263 [2024-12-05 12:14:49.265228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef0bc0 00:31:15.263 [2024-12-05 12:14:49.266296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.263 [2024-12-05 12:14:49.266316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:15.263 [2024-12-05 12:14:49.274008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef20d8 00:31:15.263 [2024-12-05 12:14:49.275069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.263 [2024-12-05 12:14:49.275089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:15.263 [2024-12-05 12:14:49.283149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016eddc00 00:31:15.263 [2024-12-05 12:14:49.284163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:6788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.263 [2024-12-05 12:14:49.284182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:15.263 [2024-12-05 12:14:49.292701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef7da8 00:31:15.263 [2024-12-05 12:14:49.293798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:14585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.263 [2024-12-05 12:14:49.293817] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:15.263 [2024-12-05 12:14:49.302097] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef0788 00:31:15.263 [2024-12-05 12:14:49.303346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.263 [2024-12-05 12:14:49.303365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:15.263 [2024-12-05 12:14:49.311056] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef81e0 00:31:15.263 [2024-12-05 12:14:49.312343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.263 [2024-12-05 12:14:49.312362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:15.263 [2024-12-05 12:14:49.320662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee7c50 00:31:15.263 [2024-12-05 12:14:49.322024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:15114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.263 [2024-12-05 12:14:49.322042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:15.263 [2024-12-05 12:14:49.328487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016efac10 00:31:15.264 [2024-12-05 12:14:49.329396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20234 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:31:15.264 [2024-12-05 12:14:49.329414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:15.264 [2024-12-05 12:14:49.336826] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016eebb98 00:31:15.264 [2024-12-05 12:14:49.337811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.264 [2024-12-05 12:14:49.337829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:15.264 [2024-12-05 12:14:49.346220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef35f0 00:31:15.264 [2024-12-05 12:14:49.346763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.264 [2024-12-05 12:14:49.346782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:15.264 [2024-12-05 12:14:49.355781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef31b8 00:31:15.264 [2024-12-05 12:14:49.356442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:8522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.264 [2024-12-05 12:14:49.356460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:15.264 [2024-12-05 12:14:49.364465] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee27f0 00:31:15.264 [2024-12-05 12:14:49.365701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 
lba:4718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.264 [2024-12-05 12:14:49.365720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:15.264 [2024-12-05 12:14:49.374091] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016eec408 00:31:15.264 [2024-12-05 12:14:49.375465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.264 [2024-12-05 12:14:49.375486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:15.264 [2024-12-05 12:14:49.382493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee6fa8 00:31:15.264 [2024-12-05 12:14:49.383181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:18129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.264 [2024-12-05 12:14:49.383200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:15.264 [2024-12-05 12:14:49.391834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016eebfd0 00:31:15.264 [2024-12-05 12:14:49.392365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.264 [2024-12-05 12:14:49.392390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:15.264 [2024-12-05 12:14:49.400704] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016efef90 00:31:15.264 [2024-12-05 12:14:49.401472] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:16282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.264 [2024-12-05 12:14:49.401491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:15.264 [2024-12-05 12:14:49.409781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016eed4e8 00:31:15.264 [2024-12-05 12:14:49.410670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:11826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.264 [2024-12-05 12:14:49.410688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:15.264 [2024-12-05 12:14:49.419409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef96f8 00:31:15.264 [2024-12-05 12:14:49.420428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.264 [2024-12-05 12:14:49.420445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:15.264 [2024-12-05 12:14:49.430563] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef5be8 00:31:15.264 [2024-12-05 12:14:49.432100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.264 [2024-12-05 12:14:49.432118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:15.264 [2024-12-05 12:14:49.437041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016edf550 00:31:15.264 
[2024-12-05 12:14:49.437732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:3955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.264 [2024-12-05 12:14:49.437750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:15.264 [2024-12-05 12:14:49.446629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef6cc8 00:31:15.264 [2024-12-05 12:14:49.447337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.264 [2024-12-05 12:14:49.447356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:15.264 [2024-12-05 12:14:49.456902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee1b48 00:31:15.264 [2024-12-05 12:14:49.458161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.264 [2024-12-05 12:14:49.458179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:15.523 [2024-12-05 12:14:49.466489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee84c0 00:31:15.523 [2024-12-05 12:14:49.467859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.523 [2024-12-05 12:14:49.467877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:15.523 [2024-12-05 12:14:49.475943] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) 
with pdu=0x200016ee5ec8 00:31:15.523 [2024-12-05 12:14:49.477470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.523 [2024-12-05 12:14:49.477487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:15.523 [2024-12-05 12:14:49.482476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef7100 00:31:15.523 [2024-12-05 12:14:49.483276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.523 [2024-12-05 12:14:49.483294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:15.523 [2024-12-05 12:14:49.493661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016eed0b0 00:31:15.523 [2024-12-05 12:14:49.494929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.523 [2024-12-05 12:14:49.494948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:15.523 [2024-12-05 12:14:49.501595] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef8a50 00:31:15.523 [2024-12-05 12:14:49.502159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.523 [2024-12-05 12:14:49.502177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:15.523 [2024-12-05 12:14:49.510169] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee4578 00:31:15.523 [2024-12-05 12:14:49.510656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.523 [2024-12-05 12:14:49.510674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:15.523 [2024-12-05 12:14:49.521583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016eec408 00:31:15.523 [2024-12-05 12:14:49.523130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.523 [2024-12-05 12:14:49.523149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:15.523 [2024-12-05 12:14:49.528097] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016efeb58 00:31:15.523 [2024-12-05 12:14:49.528745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.523 [2024-12-05 12:14:49.528763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:15.523 [2024-12-05 12:14:49.537858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef6890 00:31:15.523 [2024-12-05 12:14:49.538767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.523 [2024-12-05 12:14:49.538786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:15.523 [2024-12-05 
12:14:49.547197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee6b70 00:31:15.523 [2024-12-05 12:14:49.548114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:19612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.523 [2024-12-05 12:14:49.548132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:15.523 [2024-12-05 12:14:49.556141] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef7538 00:31:15.523 [2024-12-05 12:14:49.556607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:20744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.523 [2024-12-05 12:14:49.556626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:15.523 [2024-12-05 12:14:49.565332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016eef6a8 00:31:15.523 [2024-12-05 12:14:49.566025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.523 [2024-12-05 12:14:49.566043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:15.523 [2024-12-05 12:14:49.573732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016eeff18 00:31:15.523 [2024-12-05 12:14:49.574517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.523 [2024-12-05 12:14:49.574535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 
sqhd:0008 p:0 m:0 dnr:0 00:31:15.523 [2024-12-05 12:14:49.583087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee3498 00:31:15.523 [2024-12-05 12:14:49.584004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:18149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.523 [2024-12-05 12:14:49.584022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:15.523 [2024-12-05 12:14:49.592497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee6300 00:31:15.523 [2024-12-05 12:14:49.593533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.523 [2024-12-05 12:14:49.593551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:15.524 [2024-12-05 12:14:49.601840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee95a0 00:31:15.524 [2024-12-05 12:14:49.602961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.524 [2024-12-05 12:14:49.602979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:15.524 [2024-12-05 12:14:49.610959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016eeff18 00:31:15.524 [2024-12-05 12:14:49.612117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.524 [2024-12-05 12:14:49.612141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:15.524 [2024-12-05 12:14:49.618758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016edece0 00:31:15.524 [2024-12-05 12:14:49.619334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.524 [2024-12-05 12:14:49.619351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:15.524 [2024-12-05 12:14:49.627592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016edf550 00:31:15.524 [2024-12-05 12:14:49.628215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.524 [2024-12-05 12:14:49.628234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:15.524 [2024-12-05 12:14:49.636725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee27f0 00:31:15.524 [2024-12-05 12:14:49.637329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.524 [2024-12-05 12:14:49.637347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:15.524 [2024-12-05 12:14:49.646197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016efd640 00:31:15.524 [2024-12-05 12:14:49.647121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.524 [2024-12-05 12:14:49.647139] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:15.524 [2024-12-05 12:14:49.655279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef5378 00:31:15.524 [2024-12-05 12:14:49.655776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:15400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.524 [2024-12-05 12:14:49.655795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:15.524 [2024-12-05 12:14:49.666036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee73e0 00:31:15.524 [2024-12-05 12:14:49.667420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.524 [2024-12-05 12:14:49.667438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:15.524 [2024-12-05 12:14:49.674395] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016eef6a8 00:31:15.524 [2024-12-05 12:14:49.675339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:3115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.524 [2024-12-05 12:14:49.675357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:15.524 [2024-12-05 12:14:49.682582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016efa7d8 00:31:15.524 [2024-12-05 12:14:49.683852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.524 
[2024-12-05 12:14:49.683870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:15.524 [2024-12-05 12:14:49.690283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef6020 00:31:15.524 [2024-12-05 12:14:49.690974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.524 [2024-12-05 12:14:49.690993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:15.524 [2024-12-05 12:14:49.701140] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee5ec8 00:31:15.524 [2024-12-05 12:14:49.702160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.524 [2024-12-05 12:14:49.702178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:15.524 [2024-12-05 12:14:49.710527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016efb048 00:31:15.524 [2024-12-05 12:14:49.711628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:12702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.524 [2024-12-05 12:14:49.711646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:15.524 [2024-12-05 12:14:49.718990] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee12d8 00:31:15.524 [2024-12-05 12:14:49.720029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:11321 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.524 [2024-12-05 12:14:49.720048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:15.782 [2024-12-05 12:14:49.727535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016efef90 00:31:15.782 [2024-12-05 12:14:49.728507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:24238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.782 [2024-12-05 12:14:49.728525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:15.782 [2024-12-05 12:14:49.736320] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ef8e88 00:31:15.782 [2024-12-05 12:14:49.736935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.782 [2024-12-05 12:14:49.736953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:15.782 28214.00 IOPS, 110.21 MiB/s [2024-12-05T11:14:49.978Z] [2024-12-05 12:14:49.745518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb2180) with pdu=0x200016ee6b70 00:31:15.782 [2024-12-05 12:14:49.746281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:23144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:15.782 [2024-12-05 12:14:49.746299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:15.782 00:31:15.782 Latency(us) 00:31:15.782 [2024-12-05T11:14:49.978Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:15.782 Job: nvme0n1 (Core Mask 0x2, 
workload: randwrite, depth: 128, IO size: 4096) 00:31:15.782 nvme0n1 : 2.01 28225.57 110.26 0.00 0.00 4528.58 1825.65 12358.22 00:31:15.782 [2024-12-05T11:14:49.978Z] =================================================================================================================== 00:31:15.782 [2024-12-05T11:14:49.978Z] Total : 28225.57 110.26 0.00 0.00 4528.58 1825.65 12358.22 00:31:15.782 { 00:31:15.782 "results": [ 00:31:15.782 { 00:31:15.782 "job": "nvme0n1", 00:31:15.782 "core_mask": "0x2", 00:31:15.782 "workload": "randwrite", 00:31:15.782 "status": "finished", 00:31:15.782 "queue_depth": 128, 00:31:15.782 "io_size": 4096, 00:31:15.782 "runtime": 2.005947, 00:31:15.782 "iops": 28225.571263846952, 00:31:15.782 "mibps": 110.25613774940216, 00:31:15.782 "io_failed": 0, 00:31:15.782 "io_timeout": 0, 00:31:15.782 "avg_latency_us": 4528.583229960664, 00:31:15.782 "min_latency_us": 1825.6457142857143, 00:31:15.782 "max_latency_us": 12358.217142857144 00:31:15.782 } 00:31:15.782 ], 00:31:15.782 "core_count": 1 00:31:15.782 } 00:31:15.782 12:14:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:15.782 12:14:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:15.782 12:14:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:15.782 | .driver_specific 00:31:15.782 | .nvme_error 00:31:15.782 | .status_code 00:31:15.782 | .command_transient_transport_error' 00:31:15.782 12:14:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:15.782 12:14:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 222 > 0 )) 00:31:15.782 12:14:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 231792 
00:31:15.782 12:14:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 231792 ']' 00:31:15.782 12:14:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 231792 00:31:15.782 12:14:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:31:15.782 12:14:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:16.041 12:14:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 231792 00:31:16.041 12:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:16.041 12:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:16.041 12:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 231792' 00:31:16.041 killing process with pid 231792 00:31:16.041 12:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 231792 00:31:16.041 Received shutdown signal, test time was about 2.000000 seconds 00:31:16.041 00:31:16.042 Latency(us) 00:31:16.042 [2024-12-05T11:14:50.238Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:16.042 [2024-12-05T11:14:50.238Z] =================================================================================================================== 00:31:16.042 [2024-12-05T11:14:50.238Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:16.042 12:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 231792 00:31:16.042 12:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:31:16.042 12:14:50 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:31:16.042 12:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:31:16.042 12:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:31:16.042 12:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:31:16.042 12:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=232266 00:31:16.042 12:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 232266 /var/tmp/bperf.sock 00:31:16.042 12:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:31:16.042 12:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 232266 ']' 00:31:16.042 12:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:16.042 12:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:16.042 12:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:16.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:16.042 12:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:16.042 12:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:16.042 [2024-12-05 12:14:50.236473] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:31:16.042 [2024-12-05 12:14:50.236522] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid232266 ] 00:31:16.042 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:16.042 Zero copy mechanism will not be used. 00:31:16.301 [2024-12-05 12:14:50.312197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:16.301 [2024-12-05 12:14:50.350068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:16.301 12:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:16.301 12:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:31:16.301 12:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:16.301 12:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:16.560 12:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:16.560 12:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.560 12:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:16.560 12:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.560 12:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:16.560 12:14:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:16.818 nvme0n1 00:31:16.819 12:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:31:16.819 12:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.819 12:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:17.079 12:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.079 12:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:17.079 12:14:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:17.079 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:17.079 Zero copy mechanism will not be used. 00:31:17.079 Running I/O for 2 seconds... 
00:31:17.079 [2024-12-05 12:14:51.106927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.079 [2024-12-05 12:14:51.107008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.079 [2024-12-05 12:14:51.107037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.079 [2024-12-05 12:14:51.112468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.079 [2024-12-05 12:14:51.112525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.079 [2024-12-05 12:14:51.112547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.079 [2024-12-05 12:14:51.117158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.079 [2024-12-05 12:14:51.117225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.079 [2024-12-05 12:14:51.117245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.079 [2024-12-05 12:14:51.121829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.079 [2024-12-05 12:14:51.121879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.079 [2024-12-05 12:14:51.121898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.079 [2024-12-05 12:14:51.126442] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.079 [2024-12-05 12:14:51.126495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.079 [2024-12-05 12:14:51.126516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.079 [2024-12-05 12:14:51.131098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.079 [2024-12-05 12:14:51.131155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.079 [2024-12-05 12:14:51.131178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.079 [2024-12-05 12:14:51.135591] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.079 [2024-12-05 12:14:51.135639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.079 [2024-12-05 12:14:51.135658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.079 [2024-12-05 12:14:51.140207] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.079 [2024-12-05 12:14:51.140270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.079 [2024-12-05 12:14:51.140288] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.079 [2024-12-05 12:14:51.144742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.079 [2024-12-05 12:14:51.144826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.079 [2024-12-05 12:14:51.144846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.079 [2024-12-05 12:14:51.149194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.079 [2024-12-05 12:14:51.149257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.079 [2024-12-05 12:14:51.149278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.079 [2024-12-05 12:14:51.153672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.079 [2024-12-05 12:14:51.153739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.079 [2024-12-05 12:14:51.153757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.079 [2024-12-05 12:14:51.158194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.079 [2024-12-05 12:14:51.158254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:17.079 [2024-12-05 12:14:51.158273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.079 [2024-12-05 12:14:51.162647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.079 [2024-12-05 12:14:51.162730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.079 [2024-12-05 12:14:51.162749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.079 [2024-12-05 12:14:51.167054] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.079 [2024-12-05 12:14:51.167124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.079 [2024-12-05 12:14:51.167143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.079 [2024-12-05 12:14:51.171561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.079 [2024-12-05 12:14:51.171632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.079 [2024-12-05 12:14:51.171660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.079 [2024-12-05 12:14:51.176122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.079 [2024-12-05 12:14:51.176187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12128 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.079 [2024-12-05 12:14:51.176204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.079 [2024-12-05 12:14:51.180683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.079 [2024-12-05 12:14:51.180769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.079 [2024-12-05 12:14:51.180787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.079 [2024-12-05 12:14:51.185273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.079 [2024-12-05 12:14:51.185345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.080 [2024-12-05 12:14:51.185363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.080 [2024-12-05 12:14:51.190573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.080 [2024-12-05 12:14:51.190653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.080 [2024-12-05 12:14:51.190671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.080 [2024-12-05 12:14:51.195487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.080 [2024-12-05 12:14:51.195594] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.080 [2024-12-05 12:14:51.195612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.080 [2024-12-05 12:14:51.200187] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.080 [2024-12-05 12:14:51.200234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.080 [2024-12-05 12:14:51.200253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.080 [2024-12-05 12:14:51.204801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.080 [2024-12-05 12:14:51.204869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.080 [2024-12-05 12:14:51.204887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.080 [2024-12-05 12:14:51.209560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.080 [2024-12-05 12:14:51.209639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.080 [2024-12-05 12:14:51.209657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.080 [2024-12-05 12:14:51.214638] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 
00:31:17.080 [2024-12-05 12:14:51.214707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.080 [2024-12-05 12:14:51.214725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.080 [2024-12-05 12:14:51.219182] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.080 [2024-12-05 12:14:51.219248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.080 [2024-12-05 12:14:51.219266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.080 [2024-12-05 12:14:51.223762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.080 [2024-12-05 12:14:51.223827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.080 [2024-12-05 12:14:51.223845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.080 [2024-12-05 12:14:51.228349] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.080 [2024-12-05 12:14:51.228412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.080 [2024-12-05 12:14:51.228430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.080 [2024-12-05 12:14:51.232867] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.080 [2024-12-05 12:14:51.232968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.080 [2024-12-05 12:14:51.232986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.080 [2024-12-05 12:14:51.237530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.080 [2024-12-05 12:14:51.237578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.080 [2024-12-05 12:14:51.237596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.080 [2024-12-05 12:14:51.242129] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.080 [2024-12-05 12:14:51.242223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.080 [2024-12-05 12:14:51.242245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.080 [2024-12-05 12:14:51.246670] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.080 [2024-12-05 12:14:51.246726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.080 [2024-12-05 12:14:51.246744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.080 [2024-12-05 12:14:51.251236] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.080 [2024-12-05 12:14:51.251309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.080 [2024-12-05 12:14:51.251327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.080 [2024-12-05 12:14:51.255772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.080 [2024-12-05 12:14:51.255870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.080 [2024-12-05 12:14:51.255888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.080 [2024-12-05 12:14:51.260296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.080 [2024-12-05 12:14:51.260349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.080 [2024-12-05 12:14:51.260372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.080 [2024-12-05 12:14:51.264955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.080 [2024-12-05 12:14:51.265009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.080 [2024-12-05 12:14:51.265026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 
dnr:0 00:31:17.080 [2024-12-05 12:14:51.269521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.080 [2024-12-05 12:14:51.269583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.080 [2024-12-05 12:14:51.269604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.080 [2024-12-05 12:14:51.274036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.080 [2024-12-05 12:14:51.274088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.080 [2024-12-05 12:14:51.274106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.341 [2024-12-05 12:14:51.278667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.341 [2024-12-05 12:14:51.278744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.341 [2024-12-05 12:14:51.278762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.341 [2024-12-05 12:14:51.283304] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.341 [2024-12-05 12:14:51.283384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.341 [2024-12-05 12:14:51.283401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.341 [2024-12-05 12:14:51.288345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.341 [2024-12-05 12:14:51.288464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.341 [2024-12-05 12:14:51.288482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.341 [2024-12-05 12:14:51.293352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.341 [2024-12-05 12:14:51.293412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.341 [2024-12-05 12:14:51.293430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.341 [2024-12-05 12:14:51.298206] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.341 [2024-12-05 12:14:51.298254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.341 [2024-12-05 12:14:51.298271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.341 [2024-12-05 12:14:51.303546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.341 [2024-12-05 12:14:51.303646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.341 [2024-12-05 12:14:51.303664] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.341 [2024-12-05 12:14:51.308722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.341 [2024-12-05 12:14:51.308788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.341 [2024-12-05 12:14:51.308806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.341 [2024-12-05 12:14:51.313605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.341 [2024-12-05 12:14:51.313683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.341 [2024-12-05 12:14:51.313701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.341 [2024-12-05 12:14:51.318455] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.341 [2024-12-05 12:14:51.318526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.341 [2024-12-05 12:14:51.318544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.341 [2024-12-05 12:14:51.323114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.341 [2024-12-05 12:14:51.323184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:17.341 [2024-12-05 12:14:51.323202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.341 [2024-12-05 12:14:51.327550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.341 [2024-12-05 12:14:51.327603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.341 [2024-12-05 12:14:51.327622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.341 [2024-12-05 12:14:51.332088] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.341 [2024-12-05 12:14:51.332138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.341 [2024-12-05 12:14:51.332157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.341 [2024-12-05 12:14:51.336679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.341 [2024-12-05 12:14:51.336728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.341 [2024-12-05 12:14:51.336745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.341 [2024-12-05 12:14:51.341224] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.341 [2024-12-05 12:14:51.341281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5728 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.341 [2024-12-05 12:14:51.341298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.341 [2024-12-05 12:14:51.345989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.341 [2024-12-05 12:14:51.346055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.341 [2024-12-05 12:14:51.346073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.341 [2024-12-05 12:14:51.350508] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.341 [2024-12-05 12:14:51.350557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.341 [2024-12-05 12:14:51.350575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.341 [2024-12-05 12:14:51.354829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.341 [2024-12-05 12:14:51.354883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.341 [2024-12-05 12:14:51.354901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.341 [2024-12-05 12:14:51.359136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.341 [2024-12-05 12:14:51.359199] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.341 [2024-12-05 12:14:51.359217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.341 [2024-12-05 12:14:51.363602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.341 [2024-12-05 12:14:51.363671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.341 [2024-12-05 12:14:51.363689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.341 [2024-12-05 12:14:51.367998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.341 [2024-12-05 12:14:51.368072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.341 [2024-12-05 12:14:51.368091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.341 [2024-12-05 12:14:51.372393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.341 [2024-12-05 12:14:51.372457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.341 [2024-12-05 12:14:51.372475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.341 [2024-12-05 12:14:51.376841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 
00:31:17.341 [2024-12-05 12:14:51.376894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.341 [2024-12-05 12:14:51.376913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.341 [2024-12-05 12:14:51.381261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.341 [2024-12-05 12:14:51.381330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.341 [2024-12-05 12:14:51.381348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.341 [2024-12-05 12:14:51.385688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.341 [2024-12-05 12:14:51.385761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.341 [2024-12-05 12:14:51.385779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.341 [2024-12-05 12:14:51.390202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.342 [2024-12-05 12:14:51.390311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.342 [2024-12-05 12:14:51.390333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.342 [2024-12-05 12:14:51.395076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.342 [2024-12-05 12:14:51.395141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.342 [2024-12-05 12:14:51.395159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.342 [2024-12-05 12:14:51.400561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.342 [2024-12-05 12:14:51.400629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.342 [2024-12-05 12:14:51.400658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.342 [2024-12-05 12:14:51.406635] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.342 [2024-12-05 12:14:51.406776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.342 [2024-12-05 12:14:51.406793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.342 [2024-12-05 12:14:51.413913] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.342 [2024-12-05 12:14:51.414040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.342 [2024-12-05 12:14:51.414059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.342 [2024-12-05 12:14:51.420125] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.342 [2024-12-05 12:14:51.420245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.342 [2024-12-05 12:14:51.420264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.342 [2024-12-05 12:14:51.425759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.342 [2024-12-05 12:14:51.425884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.342 [2024-12-05 12:14:51.425913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.342 [2024-12-05 12:14:51.430419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.342 [2024-12-05 12:14:51.430486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.342 [2024-12-05 12:14:51.430504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.342 [2024-12-05 12:14:51.434949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.342 [2024-12-05 12:14:51.435019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.342 [2024-12-05 12:14:51.435036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 
dnr:0 00:31:17.342 [2024-12-05 12:14:51.439436] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.342 [2024-12-05 12:14:51.439495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.342 [2024-12-05 12:14:51.439512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.342 [2024-12-05 12:14:51.443976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.342 [2024-12-05 12:14:51.444032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.342 [2024-12-05 12:14:51.444050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.342 [2024-12-05 12:14:51.448468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.342 [2024-12-05 12:14:51.448529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.342 [2024-12-05 12:14:51.448546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.342 [2024-12-05 12:14:51.452921] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.342 [2024-12-05 12:14:51.452973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.342 [2024-12-05 12:14:51.452991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.342 [2024-12-05 12:14:51.457364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.342 [2024-12-05 12:14:51.457435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.342 [2024-12-05 12:14:51.457453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.342 [2024-12-05 12:14:51.461791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.342 [2024-12-05 12:14:51.461861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.342 [2024-12-05 12:14:51.461879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.342 [2024-12-05 12:14:51.466233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.342 [2024-12-05 12:14:51.466299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.342 [2024-12-05 12:14:51.466317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.342 [2024-12-05 12:14:51.470683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.342 [2024-12-05 12:14:51.470744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.342 [2024-12-05 12:14:51.470762] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.342 [2024-12-05 12:14:51.475091] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.342 [2024-12-05 12:14:51.475174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.342 [2024-12-05 12:14:51.475192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.342 [2024-12-05 12:14:51.479572] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.342 [2024-12-05 12:14:51.479680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.342 [2024-12-05 12:14:51.479697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.342 [2024-12-05 12:14:51.483975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.342 [2024-12-05 12:14:51.484026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.342 [2024-12-05 12:14:51.484044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.342 [2024-12-05 12:14:51.488415] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.342 [2024-12-05 12:14:51.488467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:17.342 [2024-12-05 12:14:51.488484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.342 [2024-12-05 12:14:51.492848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.342 [2024-12-05 12:14:51.492903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.342 [2024-12-05 12:14:51.492920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.342 [2024-12-05 12:14:51.497252] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.342 [2024-12-05 12:14:51.497311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.342 [2024-12-05 12:14:51.497329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.342 [2024-12-05 12:14:51.501607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.342 [2024-12-05 12:14:51.501674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.342 [2024-12-05 12:14:51.501692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.342 [2024-12-05 12:14:51.506066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.342 [2024-12-05 12:14:51.506117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6752 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.342 [2024-12-05 12:14:51.506134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.342 [2024-12-05 12:14:51.510529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.342 [2024-12-05 12:14:51.510583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.342 [2024-12-05 12:14:51.510600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.342 [2024-12-05 12:14:51.515642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.342 [2024-12-05 12:14:51.515757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.342 [2024-12-05 12:14:51.515778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.342 [2024-12-05 12:14:51.520756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.342 [2024-12-05 12:14:51.520819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.343 [2024-12-05 12:14:51.520837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.343 [2024-12-05 12:14:51.525700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.343 [2024-12-05 12:14:51.525816] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.343 [2024-12-05 12:14:51.525833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.343 [2024-12-05 12:14:51.530409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.343 [2024-12-05 12:14:51.530458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.343 [2024-12-05 12:14:51.530475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.343 [2024-12-05 12:14:51.535060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.343 [2024-12-05 12:14:51.535112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.343 [2024-12-05 12:14:51.535130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.602 [2024-12-05 12:14:51.539707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.602 [2024-12-05 12:14:51.539758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.602 [2024-12-05 12:14:51.539775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.602 [2024-12-05 12:14:51.544248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.602 [2024-12-05 
12:14:51.544310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.602 [2024-12-05 12:14:51.544328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.602 [2024-12-05 12:14:51.548773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.602 [2024-12-05 12:14:51.548860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.602 [2024-12-05 12:14:51.548879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.602 [2024-12-05 12:14:51.553400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.602 [2024-12-05 12:14:51.553472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.602 [2024-12-05 12:14:51.553490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.602 [2024-12-05 12:14:51.558159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.602 [2024-12-05 12:14:51.558214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.603 [2024-12-05 12:14:51.558231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.603 [2024-12-05 12:14:51.563157] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) 
with pdu=0x200016efef90 00:31:17.603 [2024-12-05 12:14:51.563208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.603 [2024-12-05 12:14:51.563227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.603 [2024-12-05 12:14:51.568162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.603 [2024-12-05 12:14:51.568216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.603 [2024-12-05 12:14:51.568233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.603 [2024-12-05 12:14:51.573236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.603 [2024-12-05 12:14:51.573317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.603 [2024-12-05 12:14:51.573335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.603 [2024-12-05 12:14:51.578341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.603 [2024-12-05 12:14:51.578417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.603 [2024-12-05 12:14:51.578435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.603 [2024-12-05 12:14:51.582975] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.603 [2024-12-05 12:14:51.583026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.603 [2024-12-05 12:14:51.583043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.603 [2024-12-05 12:14:51.588270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.603 [2024-12-05 12:14:51.588326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.603 [2024-12-05 12:14:51.588343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.603 [2024-12-05 12:14:51.592892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.603 [2024-12-05 12:14:51.592959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.603 [2024-12-05 12:14:51.592977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.603 [2024-12-05 12:14:51.597428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.603 [2024-12-05 12:14:51.597496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.603 [2024-12-05 12:14:51.597515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.603 [2024-12-05 
12:14:51.602437] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.603 [2024-12-05 12:14:51.602489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.603 [2024-12-05 12:14:51.602507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.603 [2024-12-05 12:14:51.607350] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.603 [2024-12-05 12:14:51.607414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.603 [2024-12-05 12:14:51.607432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.603 [2024-12-05 12:14:51.612146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.603 [2024-12-05 12:14:51.612200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.603 [2024-12-05 12:14:51.612219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.603 [2024-12-05 12:14:51.616908] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.603 [2024-12-05 12:14:51.617002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.603 [2024-12-05 12:14:51.617023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 
sqhd:0009 p:0 m:0 dnr:0 00:31:17.603 [2024-12-05 12:14:51.621968] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.603 [2024-12-05 12:14:51.622021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.603 [2024-12-05 12:14:51.622042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.603 [2024-12-05 12:14:51.626932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.603 [2024-12-05 12:14:51.626999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.603 [2024-12-05 12:14:51.627018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.603 [2024-12-05 12:14:51.632194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.603 [2024-12-05 12:14:51.632259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.603 [2024-12-05 12:14:51.632277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.603 [2024-12-05 12:14:51.637544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.603 [2024-12-05 12:14:51.637598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.603 [2024-12-05 12:14:51.637616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.603 [2024-12-05 12:14:51.642956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.603 [2024-12-05 12:14:51.643008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.603 [2024-12-05 12:14:51.643030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.603 [2024-12-05 12:14:51.647767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.603 [2024-12-05 12:14:51.647836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.603 [2024-12-05 12:14:51.647854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.603 [2024-12-05 12:14:51.652514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.603 [2024-12-05 12:14:51.652624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.603 [2024-12-05 12:14:51.652642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.603 [2024-12-05 12:14:51.657356] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.603 [2024-12-05 12:14:51.657437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.603 [2024-12-05 12:14:51.657455] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.603 [2024-12-05 12:14:51.662567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.603 [2024-12-05 12:14:51.662617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.603 [2024-12-05 12:14:51.662635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.603 [2024-12-05 12:14:51.667520] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.603 [2024-12-05 12:14:51.667571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.603 [2024-12-05 12:14:51.667589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.603 [2024-12-05 12:14:51.672458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.603 [2024-12-05 12:14:51.672524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.603 [2024-12-05 12:14:51.672542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.603 [2024-12-05 12:14:51.676977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.603 [2024-12-05 12:14:51.677054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:17.603 [2024-12-05 12:14:51.677072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.603 [2024-12-05 12:14:51.682106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.603 [2024-12-05 12:14:51.682160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.603 [2024-12-05 12:14:51.682178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.603 [2024-12-05 12:14:51.687281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.603 [2024-12-05 12:14:51.687338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.603 [2024-12-05 12:14:51.687355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.603 [2024-12-05 12:14:51.691914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.603 [2024-12-05 12:14:51.691965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.603 [2024-12-05 12:14:51.691983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.604 [2024-12-05 12:14:51.696582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.604 [2024-12-05 12:14:51.696639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19840 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.604 [2024-12-05 12:14:51.696657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.604 [2024-12-05 12:14:51.701174] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.604 [2024-12-05 12:14:51.701222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.604 [2024-12-05 12:14:51.701240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.604 [2024-12-05 12:14:51.705643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.604 [2024-12-05 12:14:51.705712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.604 [2024-12-05 12:14:51.705730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.604 [2024-12-05 12:14:51.710206] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.604 [2024-12-05 12:14:51.710323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.604 [2024-12-05 12:14:51.710340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.604 [2024-12-05 12:14:51.714727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.604 [2024-12-05 12:14:51.714776] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.604 [2024-12-05 12:14:51.714794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.604 [2024-12-05 12:14:51.719256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.604 [2024-12-05 12:14:51.719317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.604 [2024-12-05 12:14:51.719335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.604 [2024-12-05 12:14:51.723760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.604 [2024-12-05 12:14:51.723808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.604 [2024-12-05 12:14:51.723825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.604 [2024-12-05 12:14:51.728243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.604 [2024-12-05 12:14:51.728311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.604 [2024-12-05 12:14:51.728329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.604 [2024-12-05 12:14:51.732801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.604 [2024-12-05 12:14:51.732866] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.604 [2024-12-05 12:14:51.732884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.604 [2024-12-05 12:14:51.737323] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.604 [2024-12-05 12:14:51.737414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.604 [2024-12-05 12:14:51.737431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.604 [2024-12-05 12:14:51.742317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.604 [2024-12-05 12:14:51.742428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.604 [2024-12-05 12:14:51.742446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.604 [2024-12-05 12:14:51.747604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.604 [2024-12-05 12:14:51.747662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.604 [2024-12-05 12:14:51.747679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.604 [2024-12-05 12:14:51.752171] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with 
pdu=0x200016efef90 00:31:17.604 [2024-12-05 12:14:51.752221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.604 [2024-12-05 12:14:51.752239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.604 [2024-12-05 12:14:51.756893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.604 [2024-12-05 12:14:51.756948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.604 [2024-12-05 12:14:51.756966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.604 [2024-12-05 12:14:51.761347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.604 [2024-12-05 12:14:51.761417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.604 [2024-12-05 12:14:51.761451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.604 [2024-12-05 12:14:51.765958] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.604 [2024-12-05 12:14:51.766030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.604 [2024-12-05 12:14:51.766053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.604 [2024-12-05 12:14:51.770546] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.604 [2024-12-05 12:14:51.770628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.604 [2024-12-05 12:14:51.770645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.604 [2024-12-05 12:14:51.774968] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.604 [2024-12-05 12:14:51.775019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.604 [2024-12-05 12:14:51.775037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.604 [2024-12-05 12:14:51.779515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.604 [2024-12-05 12:14:51.779571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.604 [2024-12-05 12:14:51.779588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.604 [2024-12-05 12:14:51.783942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.604 [2024-12-05 12:14:51.784029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.604 [2024-12-05 12:14:51.784046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.604 [2024-12-05 
12:14:51.788500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.604 [2024-12-05 12:14:51.788556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.604 [2024-12-05 12:14:51.788573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.604 [2024-12-05 12:14:51.792835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.604 [2024-12-05 12:14:51.792889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.604 [2024-12-05 12:14:51.792906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.604 [2024-12-05 12:14:51.797235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.604 [2024-12-05 12:14:51.797289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.604 [2024-12-05 12:14:51.797308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.864 [2024-12-05 12:14:51.801566] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.864 [2024-12-05 12:14:51.801626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.864 [2024-12-05 12:14:51.801644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 
sqhd:0069 p:0 m:0 dnr:0 00:31:17.864 [2024-12-05 12:14:51.805908] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.864 [2024-12-05 12:14:51.805969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.864 [2024-12-05 12:14:51.805987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.864 [2024-12-05 12:14:51.810408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.865 [2024-12-05 12:14:51.810473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.865 [2024-12-05 12:14:51.810491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.865 [2024-12-05 12:14:51.814932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.865 [2024-12-05 12:14:51.815007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.865 [2024-12-05 12:14:51.815025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.865 [2024-12-05 12:14:51.819678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.865 [2024-12-05 12:14:51.819732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.865 [2024-12-05 12:14:51.819750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.865 [2024-12-05 12:14:51.825055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.865 [2024-12-05 12:14:51.825140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.865 [2024-12-05 12:14:51.825158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.865 [2024-12-05 12:14:51.830470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.865 [2024-12-05 12:14:51.830544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.865 [2024-12-05 12:14:51.830562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.865 [2024-12-05 12:14:51.835319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.865 [2024-12-05 12:14:51.835389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.865 [2024-12-05 12:14:51.835407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.865 [2024-12-05 12:14:51.840292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.865 [2024-12-05 12:14:51.840344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.865 [2024-12-05 12:14:51.840362] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.865 [2024-12-05 12:14:51.845482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.865 [2024-12-05 12:14:51.845627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.865 [2024-12-05 12:14:51.845644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.865 [2024-12-05 12:14:51.851341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.865 [2024-12-05 12:14:51.851488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.865 [2024-12-05 12:14:51.851506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.865 [2024-12-05 12:14:51.856650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.865 [2024-12-05 12:14:51.856719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.865 [2024-12-05 12:14:51.856737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.865 [2024-12-05 12:14:51.861782] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.865 [2024-12-05 12:14:51.861852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:17.865 [2024-12-05 12:14:51.861870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.865 [2024-12-05 12:14:51.866510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.865 [2024-12-05 12:14:51.866586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.865 [2024-12-05 12:14:51.866603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.865 [2024-12-05 12:14:51.871132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.865 [2024-12-05 12:14:51.871185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.865 [2024-12-05 12:14:51.871203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.865 [2024-12-05 12:14:51.875719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.865 [2024-12-05 12:14:51.875769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.865 [2024-12-05 12:14:51.875787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.865 [2024-12-05 12:14:51.880317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.865 [2024-12-05 12:14:51.880396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 
lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.865 [2024-12-05 12:14:51.880414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.865 [2024-12-05 12:14:51.885068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.865 [2024-12-05 12:14:51.885126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.865 [2024-12-05 12:14:51.885144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.865 [2024-12-05 12:14:51.889685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.865 [2024-12-05 12:14:51.889735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.865 [2024-12-05 12:14:51.889755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.865 [2024-12-05 12:14:51.894327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.865 [2024-12-05 12:14:51.894383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.865 [2024-12-05 12:14:51.894402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.865 [2024-12-05 12:14:51.899107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.865 [2024-12-05 12:14:51.899216] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.865 [2024-12-05 12:14:51.899234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.865 [2024-12-05 12:14:51.903841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.865 [2024-12-05 12:14:51.903918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.865 [2024-12-05 12:14:51.903936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.865 [2024-12-05 12:14:51.908481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.865 [2024-12-05 12:14:51.908549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.865 [2024-12-05 12:14:51.908566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.865 [2024-12-05 12:14:51.913066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.865 [2024-12-05 12:14:51.913113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.865 [2024-12-05 12:14:51.913131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.865 [2024-12-05 12:14:51.917653] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 
00:31:17.865 [2024-12-05 12:14:51.917727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.865 [2024-12-05 12:14:51.917745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.865 [2024-12-05 12:14:51.922239] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.865 [2024-12-05 12:14:51.922346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.865 [2024-12-05 12:14:51.922364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.865 [2024-12-05 12:14:51.926795] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.865 [2024-12-05 12:14:51.926865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.865 [2024-12-05 12:14:51.926883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.865 [2024-12-05 12:14:51.931409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.865 [2024-12-05 12:14:51.931463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.865 [2024-12-05 12:14:51.931481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.865 [2024-12-05 12:14:51.935941] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.865 [2024-12-05 12:14:51.935990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.865 [2024-12-05 12:14:51.936008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.865 [2024-12-05 12:14:51.940473] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.866 [2024-12-05 12:14:51.940524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.866 [2024-12-05 12:14:51.940542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.866 [2024-12-05 12:14:51.945433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.866 [2024-12-05 12:14:51.945506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.866 [2024-12-05 12:14:51.945524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.866 [2024-12-05 12:14:51.951381] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.866 [2024-12-05 12:14:51.951477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.866 [2024-12-05 12:14:51.951494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.866 [2024-12-05 12:14:51.958044] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.866 [2024-12-05 12:14:51.958216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.866 [2024-12-05 12:14:51.958234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.866 [2024-12-05 12:14:51.965509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.866 [2024-12-05 12:14:51.965694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.866 [2024-12-05 12:14:51.965713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.866 [2024-12-05 12:14:51.972555] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.866 [2024-12-05 12:14:51.972690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.866 [2024-12-05 12:14:51.972708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.866 [2024-12-05 12:14:51.979689] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.866 [2024-12-05 12:14:51.979765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.866 [2024-12-05 12:14:51.979783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 
dnr:0 00:31:17.866 [2024-12-05 12:14:51.986608] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.866 [2024-12-05 12:14:51.986811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.866 [2024-12-05 12:14:51.986829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.866 [2024-12-05 12:14:51.993579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.866 [2024-12-05 12:14:51.993736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.866 [2024-12-05 12:14:51.993754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.866 [2024-12-05 12:14:51.999905] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.866 [2024-12-05 12:14:52.000082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.866 [2024-12-05 12:14:52.000100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.866 [2024-12-05 12:14:52.006425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.866 [2024-12-05 12:14:52.006595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.866 [2024-12-05 12:14:52.006613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.866 [2024-12-05 12:14:52.012960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.866 [2024-12-05 12:14:52.013114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.866 [2024-12-05 12:14:52.013132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.866 [2024-12-05 12:14:52.019520] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.866 [2024-12-05 12:14:52.019618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.866 [2024-12-05 12:14:52.019636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.866 [2024-12-05 12:14:52.026585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.866 [2024-12-05 12:14:52.026705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.866 [2024-12-05 12:14:52.026723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.866 [2024-12-05 12:14:52.033834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.866 [2024-12-05 12:14:52.034024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.866 [2024-12-05 12:14:52.034042] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:17.866 [2024-12-05 12:14:52.040677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.866 [2024-12-05 12:14:52.040764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.866 [2024-12-05 12:14:52.040787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:17.866 [2024-12-05 12:14:52.048077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.866 [2024-12-05 12:14:52.048208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.866 [2024-12-05 12:14:52.048226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.866 [2024-12-05 12:14:52.054135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.866 [2024-12-05 12:14:52.054204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.866 [2024-12-05 12:14:52.054223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:17.866 [2024-12-05 12:14:52.059009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:17.866 [2024-12-05 12:14:52.059080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:17.866 [2024-12-05 12:14:52.059098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.127 [2024-12-05 12:14:52.063816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.127 [2024-12-05 12:14:52.063902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.127 [2024-12-05 12:14:52.063920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.127 [2024-12-05 12:14:52.068575] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.127 [2024-12-05 12:14:52.068654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.127 [2024-12-05 12:14:52.068671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.127 [2024-12-05 12:14:52.073333] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.127 [2024-12-05 12:14:52.073389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.127 [2024-12-05 12:14:52.073407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.127 [2024-12-05 12:14:52.078288] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.127 [2024-12-05 12:14:52.078356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 
lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.127 [2024-12-05 12:14:52.078380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.127 [2024-12-05 12:14:52.083133] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.127 [2024-12-05 12:14:52.083184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.127 [2024-12-05 12:14:52.083202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.127 [2024-12-05 12:14:52.087959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.127 [2024-12-05 12:14:52.088041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.127 [2024-12-05 12:14:52.088059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.127 [2024-12-05 12:14:52.093057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.127 [2024-12-05 12:14:52.093145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.127 [2024-12-05 12:14:52.093162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.127 [2024-12-05 12:14:52.097903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.127 [2024-12-05 12:14:52.097957] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.127 [2024-12-05 12:14:52.097975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.127 6272.00 IOPS, 784.00 MiB/s [2024-12-05T11:14:52.323Z] [2024-12-05 12:14:52.103814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.127 [2024-12-05 12:14:52.103885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.127 [2024-12-05 12:14:52.103904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.127 [2024-12-05 12:14:52.108441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.127 [2024-12-05 12:14:52.108525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.127 [2024-12-05 12:14:52.108544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.127 [2024-12-05 12:14:52.113173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.127 [2024-12-05 12:14:52.113242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.127 [2024-12-05 12:14:52.113263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.127 [2024-12-05 12:14:52.117761] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.127 [2024-12-05 12:14:52.117815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.127 [2024-12-05 12:14:52.117833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.127 [2024-12-05 12:14:52.122228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.127 [2024-12-05 12:14:52.122311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.127 [2024-12-05 12:14:52.122329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.127 [2024-12-05 12:14:52.126667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.127 [2024-12-05 12:14:52.126741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.127 [2024-12-05 12:14:52.126760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.127 [2024-12-05 12:14:52.131098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.127 [2024-12-05 12:14:52.131167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.127 [2024-12-05 12:14:52.131185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.127 [2024-12-05 12:14:52.135587] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.127 [2024-12-05 12:14:52.135658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.127 [2024-12-05 12:14:52.135676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.127 [2024-12-05 12:14:52.140200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.127 [2024-12-05 12:14:52.140283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.127 [2024-12-05 12:14:52.140301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.128 [2024-12-05 12:14:52.144644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.128 [2024-12-05 12:14:52.144736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.128 [2024-12-05 12:14:52.144754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.128 [2024-12-05 12:14:52.149071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.128 [2024-12-05 12:14:52.149145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.128 [2024-12-05 12:14:52.149163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0029 p:0 m:0 
dnr:0 00:31:18.128 [2024-12-05 12:14:52.154484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.128 [2024-12-05 12:14:52.154599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.128 [2024-12-05 12:14:52.154617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.128 [2024-12-05 12:14:52.159379] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.128 [2024-12-05 12:14:52.159468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.128 [2024-12-05 12:14:52.159487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.128 [2024-12-05 12:14:52.164193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.128 [2024-12-05 12:14:52.164310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.128 [2024-12-05 12:14:52.164328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.128 [2024-12-05 12:14:52.169775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.128 [2024-12-05 12:14:52.169950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.128 [2024-12-05 12:14:52.169974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.128 [2024-12-05 12:14:52.175972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.128 [2024-12-05 12:14:52.176131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.128 [2024-12-05 12:14:52.176150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.128 [2024-12-05 12:14:52.182137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.128 [2024-12-05 12:14:52.182319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.128 [2024-12-05 12:14:52.182337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.128 [2024-12-05 12:14:52.188642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.128 [2024-12-05 12:14:52.188731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.128 [2024-12-05 12:14:52.188748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.128 [2024-12-05 12:14:52.195205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.128 [2024-12-05 12:14:52.195398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.128 [2024-12-05 12:14:52.195415] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.128 [2024-12-05 12:14:52.202357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.128 [2024-12-05 12:14:52.202442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.128 [2024-12-05 12:14:52.202461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.128 [2024-12-05 12:14:52.207829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.128 [2024-12-05 12:14:52.207889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.128 [2024-12-05 12:14:52.207907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.128 [2024-12-05 12:14:52.212535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.128 [2024-12-05 12:14:52.212604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.128 [2024-12-05 12:14:52.212622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.128 [2024-12-05 12:14:52.217268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.128 [2024-12-05 12:14:52.217345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:18.128 [2024-12-05 12:14:52.217363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.128 [2024-12-05 12:14:52.222284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.128 [2024-12-05 12:14:52.222391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.128 [2024-12-05 12:14:52.222409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.128 [2024-12-05 12:14:52.228327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.128 [2024-12-05 12:14:52.228508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.128 [2024-12-05 12:14:52.228526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.128 [2024-12-05 12:14:52.234219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.128 [2024-12-05 12:14:52.234307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.128 [2024-12-05 12:14:52.234325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.128 [2024-12-05 12:14:52.240040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.128 [2024-12-05 12:14:52.240219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19296 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.128 [2024-12-05 12:14:52.240236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.128 [2024-12-05 12:14:52.246552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.128 [2024-12-05 12:14:52.246691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.128 [2024-12-05 12:14:52.246710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.128 [2024-12-05 12:14:52.253383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.128 [2024-12-05 12:14:52.253525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.128 [2024-12-05 12:14:52.253544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.128 [2024-12-05 12:14:52.259661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.128 [2024-12-05 12:14:52.259795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.128 [2024-12-05 12:14:52.259814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.128 [2024-12-05 12:14:52.266112] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.128 [2024-12-05 12:14:52.266278] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.128 [2024-12-05 12:14:52.266296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.128 [2024-12-05 12:14:52.272404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.128 [2024-12-05 12:14:52.272498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.128 [2024-12-05 12:14:52.272516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.128 [2024-12-05 12:14:52.278814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.128 [2024-12-05 12:14:52.278991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.128 [2024-12-05 12:14:52.279009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.128 [2024-12-05 12:14:52.285487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.128 [2024-12-05 12:14:52.285622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.128 [2024-12-05 12:14:52.285640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.128 [2024-12-05 12:14:52.292552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 
00:31:18.128 [2024-12-05 12:14:52.292652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.128 [2024-12-05 12:14:52.292670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.128 [2024-12-05 12:14:52.299168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.129 [2024-12-05 12:14:52.299340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.129 [2024-12-05 12:14:52.299358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.129 [2024-12-05 12:14:52.305641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.129 [2024-12-05 12:14:52.305812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.129 [2024-12-05 12:14:52.305830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.129 [2024-12-05 12:14:52.311847] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.129 [2024-12-05 12:14:52.312007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.129 [2024-12-05 12:14:52.312025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.129 [2024-12-05 12:14:52.318277] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.129 [2024-12-05 12:14:52.318453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.129 [2024-12-05 12:14:52.318471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.389 [2024-12-05 12:14:52.326215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.389 [2024-12-05 12:14:52.326343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.389 [2024-12-05 12:14:52.326360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.389 [2024-12-05 12:14:52.333022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.389 [2024-12-05 12:14:52.333120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.389 [2024-12-05 12:14:52.333142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.389 [2024-12-05 12:14:52.338189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.389 [2024-12-05 12:14:52.338261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.389 [2024-12-05 12:14:52.338279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.389 [2024-12-05 12:14:52.343486] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.389 [2024-12-05 12:14:52.343588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.389 [2024-12-05 12:14:52.343606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.389 [2024-12-05 12:14:52.348979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.389 [2024-12-05 12:14:52.349063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.389 [2024-12-05 12:14:52.349081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.389 [2024-12-05 12:14:52.354510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.389 [2024-12-05 12:14:52.354620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.389 [2024-12-05 12:14:52.354638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.389 [2024-12-05 12:14:52.359924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.389 [2024-12-05 12:14:52.360016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.389 [2024-12-05 12:14:52.360034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 
dnr:0 00:31:18.389 [2024-12-05 12:14:52.365522] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.389 [2024-12-05 12:14:52.365598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.389 [2024-12-05 12:14:52.365627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.389 [2024-12-05 12:14:52.370686] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.389 [2024-12-05 12:14:52.370892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.389 [2024-12-05 12:14:52.370910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.389 [2024-12-05 12:14:52.377711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.389 [2024-12-05 12:14:52.377869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.389 [2024-12-05 12:14:52.377887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.389 [2024-12-05 12:14:52.383710] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.389 [2024-12-05 12:14:52.383821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.389 [2024-12-05 12:14:52.383839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.389 [2024-12-05 12:14:52.389003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.389 [2024-12-05 12:14:52.389104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.389 [2024-12-05 12:14:52.389122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.389 [2024-12-05 12:14:52.394178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.389 [2024-12-05 12:14:52.394262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.389 [2024-12-05 12:14:52.394280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.389 [2024-12-05 12:14:52.399658] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.389 [2024-12-05 12:14:52.399775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.389 [2024-12-05 12:14:52.399793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.389 [2024-12-05 12:14:52.405767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.389 [2024-12-05 12:14:52.405932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.389 [2024-12-05 12:14:52.405949] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.389 [2024-12-05 12:14:52.412408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.389 [2024-12-05 12:14:52.412568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.389 [2024-12-05 12:14:52.412585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.389 [2024-12-05 12:14:52.419045] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.389 [2024-12-05 12:14:52.419235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.389 [2024-12-05 12:14:52.419252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.389 [2024-12-05 12:14:52.425873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.389 [2024-12-05 12:14:52.426035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.389 [2024-12-05 12:14:52.426052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.389 [2024-12-05 12:14:52.431104] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.389 [2024-12-05 12:14:52.431158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:18.389 [2024-12-05 12:14:52.431176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.390 [2024-12-05 12:14:52.435900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.390 [2024-12-05 12:14:52.436015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.390 [2024-12-05 12:14:52.436033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.390 [2024-12-05 12:14:52.441205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.390 [2024-12-05 12:14:52.441289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.390 [2024-12-05 12:14:52.441308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.390 [2024-12-05 12:14:52.446775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.390 [2024-12-05 12:14:52.446881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.390 [2024-12-05 12:14:52.446898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.390 [2024-12-05 12:14:52.451847] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.390 [2024-12-05 12:14:52.451958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11808 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.390 [2024-12-05 12:14:52.451975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.390 [2024-12-05 12:14:52.456994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.390 [2024-12-05 12:14:52.457123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.390 [2024-12-05 12:14:52.457141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.390 [2024-12-05 12:14:52.462106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.390 [2024-12-05 12:14:52.462171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.390 [2024-12-05 12:14:52.462189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.390 [2024-12-05 12:14:52.466401] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.390 [2024-12-05 12:14:52.466465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.390 [2024-12-05 12:14:52.466483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.390 [2024-12-05 12:14:52.470675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.390 [2024-12-05 12:14:52.470738] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.390 [2024-12-05 12:14:52.470756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.390 [2024-12-05 12:14:52.475259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.390 [2024-12-05 12:14:52.475321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.390 [2024-12-05 12:14:52.475348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.390 [2024-12-05 12:14:52.479760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.390 [2024-12-05 12:14:52.479811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.390 [2024-12-05 12:14:52.479829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.390 [2024-12-05 12:14:52.484471] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.390 [2024-12-05 12:14:52.484537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.390 [2024-12-05 12:14:52.484556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.390 [2024-12-05 12:14:52.489349] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.390 [2024-12-05 12:14:52.489437] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.390 [2024-12-05 12:14:52.489456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.390 [2024-12-05 12:14:52.494018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.390 [2024-12-05 12:14:52.494081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.390 [2024-12-05 12:14:52.494100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.390 [2024-12-05 12:14:52.498620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.390 [2024-12-05 12:14:52.498668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.390 [2024-12-05 12:14:52.498686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.390 [2024-12-05 12:14:52.503281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.390 [2024-12-05 12:14:52.503330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.390 [2024-12-05 12:14:52.503348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.390 [2024-12-05 12:14:52.507780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with 
pdu=0x200016efef90 00:31:18.390 [2024-12-05 12:14:52.507841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.390 [2024-12-05 12:14:52.507860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.390 [2024-12-05 12:14:52.512125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.390 [2024-12-05 12:14:52.512179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.390 [2024-12-05 12:14:52.512197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.390 [2024-12-05 12:14:52.516664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.390 [2024-12-05 12:14:52.516744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.390 [2024-12-05 12:14:52.516762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.390 [2024-12-05 12:14:52.521004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.390 [2024-12-05 12:14:52.521075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.390 [2024-12-05 12:14:52.521093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.390 [2024-12-05 12:14:52.525338] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.390 [2024-12-05 12:14:52.525424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.390 [2024-12-05 12:14:52.525443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.390 [2024-12-05 12:14:52.529770] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.390 [2024-12-05 12:14:52.529827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.390 [2024-12-05 12:14:52.529845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.390 [2024-12-05 12:14:52.534024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.390 [2024-12-05 12:14:52.534078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.390 [2024-12-05 12:14:52.534096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.390 [2024-12-05 12:14:52.538496] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.390 [2024-12-05 12:14:52.538552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.390 [2024-12-05 12:14:52.538570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.390 [2024-12-05 
12:14:52.542956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.390 [2024-12-05 12:14:52.543025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.390 [2024-12-05 12:14:52.543043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.390 [2024-12-05 12:14:52.548453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.390 [2024-12-05 12:14:52.548571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.390 [2024-12-05 12:14:52.548589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.390 [2024-12-05 12:14:52.553468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.390 [2024-12-05 12:14:52.553543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.390 [2024-12-05 12:14:52.553562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.390 [2024-12-05 12:14:52.558150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.390 [2024-12-05 12:14:52.558212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.390 [2024-12-05 12:14:52.558230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 
sqhd:0069 p:0 m:0 dnr:0 00:31:18.390 [2024-12-05 12:14:52.562601] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.391 [2024-12-05 12:14:52.562675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.391 [2024-12-05 12:14:52.562693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.391 [2024-12-05 12:14:52.567169] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.391 [2024-12-05 12:14:52.567248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.391 [2024-12-05 12:14:52.567266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.391 [2024-12-05 12:14:52.571676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.391 [2024-12-05 12:14:52.571730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.391 [2024-12-05 12:14:52.571747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.391 [2024-12-05 12:14:52.576026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.391 [2024-12-05 12:14:52.576090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.391 [2024-12-05 12:14:52.576107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.391 [2024-12-05 12:14:52.580816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.391 [2024-12-05 12:14:52.580889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.391 [2024-12-05 12:14:52.580907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.650 [2024-12-05 12:14:52.585563] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.650 [2024-12-05 12:14:52.585615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.650 [2024-12-05 12:14:52.585634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.650 [2024-12-05 12:14:52.591069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.650 [2024-12-05 12:14:52.591134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.650 [2024-12-05 12:14:52.591153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.650 [2024-12-05 12:14:52.595961] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.650 [2024-12-05 12:14:52.596036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.650 [2024-12-05 12:14:52.596057] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.650 [2024-12-05 12:14:52.600913] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.650 [2024-12-05 12:14:52.600980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.650 [2024-12-05 12:14:52.601001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.650 [2024-12-05 12:14:52.605750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.651 [2024-12-05 12:14:52.605807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.651 [2024-12-05 12:14:52.605825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.651 [2024-12-05 12:14:52.610653] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.651 [2024-12-05 12:14:52.610721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.651 [2024-12-05 12:14:52.610739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.651 [2024-12-05 12:14:52.615328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.651 [2024-12-05 12:14:52.615402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:18.651 [2024-12-05 12:14:52.615420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.651 [2024-12-05 12:14:52.620106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.651 [2024-12-05 12:14:52.620168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.651 [2024-12-05 12:14:52.620185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.651 [2024-12-05 12:14:52.624788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.651 [2024-12-05 12:14:52.624853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.651 [2024-12-05 12:14:52.624871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.651 [2024-12-05 12:14:52.629509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.651 [2024-12-05 12:14:52.629603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.651 [2024-12-05 12:14:52.629620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.651 [2024-12-05 12:14:52.634192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.651 [2024-12-05 12:14:52.634249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9216 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.651 [2024-12-05 12:14:52.634267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.651 [2024-12-05 12:14:52.638894] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.651 [2024-12-05 12:14:52.638956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.651 [2024-12-05 12:14:52.638975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.651 [2024-12-05 12:14:52.643662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.651 [2024-12-05 12:14:52.643741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.651 [2024-12-05 12:14:52.643759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.651 [2024-12-05 12:14:52.648672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.651 [2024-12-05 12:14:52.648740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.651 [2024-12-05 12:14:52.648758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.651 [2024-12-05 12:14:52.653610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.651 [2024-12-05 12:14:52.653683] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.651 [2024-12-05 12:14:52.653701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.651 [2024-12-05 12:14:52.658690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.651 [2024-12-05 12:14:52.658774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.651 [2024-12-05 12:14:52.658792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.651 [2024-12-05 12:14:52.663518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.651 [2024-12-05 12:14:52.663588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.651 [2024-12-05 12:14:52.663607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.651 [2024-12-05 12:14:52.668511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.651 [2024-12-05 12:14:52.668562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.651 [2024-12-05 12:14:52.668579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.651 [2024-12-05 12:14:52.673562] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 
00:31:18.651 [2024-12-05 12:14:52.673617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.651 [2024-12-05 12:14:52.673634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.651 [2024-12-05 12:14:52.678490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.651 [2024-12-05 12:14:52.678552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.651 [2024-12-05 12:14:52.678570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.651 [2024-12-05 12:14:52.683538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.651 [2024-12-05 12:14:52.683659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.651 [2024-12-05 12:14:52.683677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.651 [2024-12-05 12:14:52.688491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.651 [2024-12-05 12:14:52.688568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.651 [2024-12-05 12:14:52.688585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.651 [2024-12-05 12:14:52.693578] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.651 [2024-12-05 12:14:52.693628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.651 [2024-12-05 12:14:52.693646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.651 [2024-12-05 12:14:52.698712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.651 [2024-12-05 12:14:52.698861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.651 [2024-12-05 12:14:52.698879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.651 [2024-12-05 12:14:52.704197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.651 [2024-12-05 12:14:52.704251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.651 [2024-12-05 12:14:52.704269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.651 [2024-12-05 12:14:52.709183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.651 [2024-12-05 12:14:52.709288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.651 [2024-12-05 12:14:52.709306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.651 [2024-12-05 12:14:52.714053] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.651 [2024-12-05 12:14:52.714103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.651 [2024-12-05 12:14:52.714120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.651 [2024-12-05 12:14:52.719023] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.651 [2024-12-05 12:14:52.719122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.651 [2024-12-05 12:14:52.719140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.651 [2024-12-05 12:14:52.723821] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.651 [2024-12-05 12:14:52.723877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.651 [2024-12-05 12:14:52.723897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.651 [2024-12-05 12:14:52.728905] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.651 [2024-12-05 12:14:52.728961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.651 [2024-12-05 12:14:52.728979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 
dnr:0 00:31:18.651 [2024-12-05 12:14:52.733814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.651 [2024-12-05 12:14:52.733866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.651 [2024-12-05 12:14:52.733884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.651 [2024-12-05 12:14:52.738765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.651 [2024-12-05 12:14:52.738845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.652 [2024-12-05 12:14:52.738864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.652 [2024-12-05 12:14:52.743553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.652 [2024-12-05 12:14:52.743689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.652 [2024-12-05 12:14:52.743707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.652 [2024-12-05 12:14:52.748380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.652 [2024-12-05 12:14:52.748441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.652 [2024-12-05 12:14:52.748459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.652 [2024-12-05 12:14:52.753154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.652 [2024-12-05 12:14:52.753215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.652 [2024-12-05 12:14:52.753232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.652 [2024-12-05 12:14:52.758146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.652 [2024-12-05 12:14:52.758201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.652 [2024-12-05 12:14:52.758219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.652 [2024-12-05 12:14:52.762921] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.652 [2024-12-05 12:14:52.763035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.652 [2024-12-05 12:14:52.763053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.652 [2024-12-05 12:14:52.767929] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.652 [2024-12-05 12:14:52.767994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.652 [2024-12-05 12:14:52.768011] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.652 [2024-12-05 12:14:52.772799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.652 [2024-12-05 12:14:52.772852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.652 [2024-12-05 12:14:52.772870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.652 [2024-12-05 12:14:52.777428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.652 [2024-12-05 12:14:52.777481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.652 [2024-12-05 12:14:52.777498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.652 [2024-12-05 12:14:52.782230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.652 [2024-12-05 12:14:52.782302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.652 [2024-12-05 12:14:52.782320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.652 [2024-12-05 12:14:52.787089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.652 [2024-12-05 12:14:52.787160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:18.652 [2024-12-05 12:14:52.787178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.652 [2024-12-05 12:14:52.791883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.652 [2024-12-05 12:14:52.791984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.652 [2024-12-05 12:14:52.792002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.652 [2024-12-05 12:14:52.796708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.652 [2024-12-05 12:14:52.796779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.652 [2024-12-05 12:14:52.796797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.652 [2024-12-05 12:14:52.801316] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.652 [2024-12-05 12:14:52.801374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.652 [2024-12-05 12:14:52.801392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.652 [2024-12-05 12:14:52.806030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.652 [2024-12-05 12:14:52.806090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:576 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.652 [2024-12-05 12:14:52.806107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.652 [2024-12-05 12:14:52.810712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.652 [2024-12-05 12:14:52.810794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.652 [2024-12-05 12:14:52.810811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.652 [2024-12-05 12:14:52.815395] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.652 [2024-12-05 12:14:52.815451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.652 [2024-12-05 12:14:52.815469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.652 [2024-12-05 12:14:52.820933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.652 [2024-12-05 12:14:52.821029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.652 [2024-12-05 12:14:52.821047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.652 [2024-12-05 12:14:52.826113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.652 [2024-12-05 12:14:52.826186] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.652 [2024-12-05 12:14:52.826203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.652 [2024-12-05 12:14:52.831809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.652 [2024-12-05 12:14:52.831876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.652 [2024-12-05 12:14:52.831894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.652 [2024-12-05 12:14:52.837703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.652 [2024-12-05 12:14:52.837820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.652 [2024-12-05 12:14:52.837837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.652 [2024-12-05 12:14:52.844756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.652 [2024-12-05 12:14:52.844892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.652 [2024-12-05 12:14:52.844910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.912 [2024-12-05 12:14:52.851711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 
00:31:18.912 [2024-12-05 12:14:52.851822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.912 [2024-12-05 12:14:52.851840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.912 [2024-12-05 12:14:52.858527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.912 [2024-12-05 12:14:52.858595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.912 [2024-12-05 12:14:52.858616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.912 [2024-12-05 12:14:52.864693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.912 [2024-12-05 12:14:52.864762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.912 [2024-12-05 12:14:52.864780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.912 [2024-12-05 12:14:52.870279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.912 [2024-12-05 12:14:52.870412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.912 [2024-12-05 12:14:52.870430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.913 [2024-12-05 12:14:52.876073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.913 [2024-12-05 12:14:52.876204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.913 [2024-12-05 12:14:52.876222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.913 [2024-12-05 12:14:52.882707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.913 [2024-12-05 12:14:52.882775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.913 [2024-12-05 12:14:52.882793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.913 [2024-12-05 12:14:52.888705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.913 [2024-12-05 12:14:52.888789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.913 [2024-12-05 12:14:52.888808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.913 [2024-12-05 12:14:52.894681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.913 [2024-12-05 12:14:52.894768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.913 [2024-12-05 12:14:52.894786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.913 [2024-12-05 12:14:52.900639] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.913 [2024-12-05 12:14:52.900711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.913 [2024-12-05 12:14:52.900729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.913 [2024-12-05 12:14:52.906586] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.913 [2024-12-05 12:14:52.906714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.913 [2024-12-05 12:14:52.906732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.913 [2024-12-05 12:14:52.912904] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.913 [2024-12-05 12:14:52.913005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.913 [2024-12-05 12:14:52.913023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.913 [2024-12-05 12:14:52.917584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.913 [2024-12-05 12:14:52.917670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.913 [2024-12-05 12:14:52.917689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 
dnr:0 00:31:18.913 [2024-12-05 12:14:52.922085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.913 [2024-12-05 12:14:52.922153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.913 [2024-12-05 12:14:52.922171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.913 [2024-12-05 12:14:52.926428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.913 [2024-12-05 12:14:52.926536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.913 [2024-12-05 12:14:52.926554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.913 [2024-12-05 12:14:52.930809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.913 [2024-12-05 12:14:52.930893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.913 [2024-12-05 12:14:52.930910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.913 [2024-12-05 12:14:52.935687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.913 [2024-12-05 12:14:52.935796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.913 [2024-12-05 12:14:52.935814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.913 [2024-12-05 12:14:52.940317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.913 [2024-12-05 12:14:52.940392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.913 [2024-12-05 12:14:52.940410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.913 [2024-12-05 12:14:52.945294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.913 [2024-12-05 12:14:52.945361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.913 [2024-12-05 12:14:52.945384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.913 [2024-12-05 12:14:52.949885] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.913 [2024-12-05 12:14:52.949968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.913 [2024-12-05 12:14:52.949986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.913 [2024-12-05 12:14:52.954162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.913 [2024-12-05 12:14:52.954234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.913 [2024-12-05 12:14:52.954252] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.913 [2024-12-05 12:14:52.958836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.913 [2024-12-05 12:14:52.958888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.913 [2024-12-05 12:14:52.958906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.913 [2024-12-05 12:14:52.963673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.913 [2024-12-05 12:14:52.963739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.913 [2024-12-05 12:14:52.963757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.913 [2024-12-05 12:14:52.968690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.913 [2024-12-05 12:14:52.968759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.913 [2024-12-05 12:14:52.968778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.913 [2024-12-05 12:14:52.973686] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.913 [2024-12-05 12:14:52.973746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:18.913 [2024-12-05 12:14:52.973764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.913 [2024-12-05 12:14:52.978258] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.913 [2024-12-05 12:14:52.978311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.913 [2024-12-05 12:14:52.978328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.913 [2024-12-05 12:14:52.982718] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.913 [2024-12-05 12:14:52.982800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.913 [2024-12-05 12:14:52.982818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.913 [2024-12-05 12:14:52.986835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.913 [2024-12-05 12:14:52.986891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.913 [2024-12-05 12:14:52.986909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.913 [2024-12-05 12:14:52.991272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.913 [2024-12-05 12:14:52.991326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 
lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.913 [2024-12-05 12:14:52.991347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.913 [2024-12-05 12:14:52.996189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.913 [2024-12-05 12:14:52.996260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.913 [2024-12-05 12:14:52.996278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.913 [2024-12-05 12:14:53.000692] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.913 [2024-12-05 12:14:53.000746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.914 [2024-12-05 12:14:53.000763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.914 [2024-12-05 12:14:53.004982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.914 [2024-12-05 12:14:53.005046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.914 [2024-12-05 12:14:53.005063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.914 [2024-12-05 12:14:53.009025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.914 [2024-12-05 12:14:53.009122] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.914 [2024-12-05 12:14:53.009140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.914 [2024-12-05 12:14:53.013221] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.914 [2024-12-05 12:14:53.013283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.914 [2024-12-05 12:14:53.013300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.914 [2024-12-05 12:14:53.017447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.914 [2024-12-05 12:14:53.017531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.914 [2024-12-05 12:14:53.017549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.914 [2024-12-05 12:14:53.021544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.914 [2024-12-05 12:14:53.021638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.914 [2024-12-05 12:14:53.021656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.914 [2024-12-05 12:14:53.025728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 
00:31:18.914 [2024-12-05 12:14:53.025800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.914 [2024-12-05 12:14:53.025817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.914 [2024-12-05 12:14:53.029793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.914 [2024-12-05 12:14:53.029854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.914 [2024-12-05 12:14:53.029871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.914 [2024-12-05 12:14:53.033866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.914 [2024-12-05 12:14:53.033945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.914 [2024-12-05 12:14:53.033963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.914 [2024-12-05 12:14:53.038093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.914 [2024-12-05 12:14:53.038160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.914 [2024-12-05 12:14:53.038178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.914 [2024-12-05 12:14:53.042820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.914 [2024-12-05 12:14:53.042869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.914 [2024-12-05 12:14:53.042887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.914 [2024-12-05 12:14:53.047430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.914 [2024-12-05 12:14:53.047479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.914 [2024-12-05 12:14:53.047496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.914 [2024-12-05 12:14:53.051599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.914 [2024-12-05 12:14:53.051670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.914 [2024-12-05 12:14:53.051688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.914 [2024-12-05 12:14:53.055746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.914 [2024-12-05 12:14:53.055804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.914 [2024-12-05 12:14:53.055822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.914 [2024-12-05 12:14:53.059899] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.914 [2024-12-05 12:14:53.059990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.914 [2024-12-05 12:14:53.060008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.914 [2024-12-05 12:14:53.064032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.914 [2024-12-05 12:14:53.064137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.914 [2024-12-05 12:14:53.064154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.914 [2024-12-05 12:14:53.068168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.914 [2024-12-05 12:14:53.068234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.914 [2024-12-05 12:14:53.068252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.914 [2024-12-05 12:14:53.072191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.914 [2024-12-05 12:14:53.072260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.914 [2024-12-05 12:14:53.072277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 
dnr:0 00:31:18.914 [2024-12-05 12:14:53.076180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.914 [2024-12-05 12:14:53.076254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.914 [2024-12-05 12:14:53.076271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.914 [2024-12-05 12:14:53.080252] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.914 [2024-12-05 12:14:53.080301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.914 [2024-12-05 12:14:53.080318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.914 [2024-12-05 12:14:53.085177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.914 [2024-12-05 12:14:53.085264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.914 [2024-12-05 12:14:53.085281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.914 [2024-12-05 12:14:53.089727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.914 [2024-12-05 12:14:53.089806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.914 [2024-12-05 12:14:53.089827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.914 [2024-12-05 12:14:53.093758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.914 [2024-12-05 12:14:53.093844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.914 [2024-12-05 12:14:53.093865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.914 [2024-12-05 12:14:53.097799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.914 [2024-12-05 12:14:53.097874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.914 [2024-12-05 12:14:53.097892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.914 6184.00 IOPS, 773.00 MiB/s [2024-12-05T11:14:53.110Z] [2024-12-05 12:14:53.103061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eb24c0) with pdu=0x200016efef90 00:31:18.914 [2024-12-05 12:14:53.103126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.914 [2024-12-05 12:14:53.103148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.914 00:31:18.914 Latency(us) 00:31:18.914 [2024-12-05T11:14:53.110Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:18.914 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:31:18.914 nvme0n1 : 2.00 6181.70 772.71 0.00 0.00 2584.09 1919.27 14105.84 00:31:18.914 
[2024-12-05T11:14:53.110Z] =================================================================================================================== 00:31:18.914 [2024-12-05T11:14:53.110Z] Total : 6181.70 772.71 0.00 0.00 2584.09 1919.27 14105.84 00:31:18.914 { 00:31:18.914 "results": [ 00:31:18.914 { 00:31:18.914 "job": "nvme0n1", 00:31:18.914 "core_mask": "0x2", 00:31:18.914 "workload": "randwrite", 00:31:18.914 "status": "finished", 00:31:18.914 "queue_depth": 16, 00:31:18.914 "io_size": 131072, 00:31:18.914 "runtime": 2.003331, 00:31:18.914 "iops": 6181.704371369484, 00:31:18.914 "mibps": 772.7130464211855, 00:31:18.915 "io_failed": 0, 00:31:18.915 "io_timeout": 0, 00:31:18.915 "avg_latency_us": 2584.0851679586563, 00:31:18.915 "min_latency_us": 1919.2685714285715, 00:31:18.915 "max_latency_us": 14105.843809523809 00:31:18.915 } 00:31:18.915 ], 00:31:18.915 "core_count": 1 00:31:18.915 } 00:31:19.174 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:19.174 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:19.174 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:19.174 | .driver_specific 00:31:19.174 | .nvme_error 00:31:19.174 | .status_code 00:31:19.174 | .command_transient_transport_error' 00:31:19.174 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:19.174 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 400 > 0 )) 00:31:19.174 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 232266 00:31:19.174 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 232266 ']' 00:31:19.174 
12:14:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 232266 00:31:19.174 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:31:19.174 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:19.174 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 232266 00:31:19.433 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:19.433 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:19.433 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 232266' 00:31:19.433 killing process with pid 232266 00:31:19.433 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 232266 00:31:19.433 Received shutdown signal, test time was about 2.000000 seconds 00:31:19.433 00:31:19.433 Latency(us) 00:31:19.433 [2024-12-05T11:14:53.629Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:19.433 [2024-12-05T11:14:53.629Z] =================================================================================================================== 00:31:19.433 [2024-12-05T11:14:53.629Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:19.433 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 232266 00:31:19.433 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 230606 00:31:19.433 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 230606 ']' 00:31:19.433 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@958 -- # kill -0 230606 00:31:19.433 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:31:19.433 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:19.433 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 230606 00:31:19.433 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:19.433 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:19.433 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 230606' 00:31:19.433 killing process with pid 230606 00:31:19.433 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 230606 00:31:19.433 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 230606 00:31:19.692 00:31:19.693 real 0m14.031s 00:31:19.693 user 0m26.800s 00:31:19.693 sys 0m4.659s 00:31:19.693 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:19.693 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:19.693 ************************************ 00:31:19.693 END TEST nvmf_digest_error 00:31:19.693 ************************************ 00:31:19.693 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:31:19.693 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:31:19.693 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # nvmfcleanup 00:31:19.693 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@99 -- # sync 
00:31:19.693 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:31:19.693 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@102 -- # set +e 00:31:19.693 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@103 -- # for i in {1..20} 00:31:19.693 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:31:19.693 rmmod nvme_tcp 00:31:19.693 rmmod nvme_fabrics 00:31:19.693 rmmod nvme_keyring 00:31:19.693 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:31:19.693 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@106 -- # set -e 00:31:19.693 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@107 -- # return 0 00:31:19.693 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # '[' -n 230606 ']' 00:31:19.693 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@337 -- # killprocess 230606 00:31:19.693 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 230606 ']' 00:31:19.693 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 230606 00:31:19.693 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (230606) - No such process 00:31:19.693 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 230606 is not found' 00:31:19.693 Process with pid 230606 is not found 00:31:19.693 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:31:19.693 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # nvmf_fini 00:31:19.693 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@264 -- # local dev 00:31:19.693 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@267 -- # remove_target_ns 00:31:19.693 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 
00:31:19.693 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:31:19.693 12:14:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_target_ns 00:31:22.228 12:14:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@268 -- # delete_main_bridge 00:31:22.228 12:14:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:31:22.228 12:14:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@130 -- # return 0 00:31:22.228 12:14:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:31:22.228 12:14:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:31:22.228 12:14:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:31:22.228 12:14:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:31:22.228 12:14:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:31:22.228 12:14:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:31:22.228 12:14:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:31:22.228 12:14:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:31:22.228 12:14:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:31:22.228 12:14:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:31:22.228 12:14:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:31:22.228 12:14:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:31:22.228 12:14:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:31:22.228 12:14:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:31:22.228 12:14:55 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:31:22.229 12:14:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:31:22.229 12:14:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:31:22.229 12:14:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@41 -- # _dev=0 00:31:22.229 12:14:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@41 -- # dev_map=() 00:31:22.229 12:14:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@284 -- # iptr 00:31:22.229 12:14:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@542 -- # iptables-save 00:31:22.229 12:14:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:31:22.229 12:14:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@542 -- # iptables-restore 00:31:22.229 00:31:22.229 real 0m36.701s 00:31:22.229 user 0m55.612s 00:31:22.229 sys 0m14.000s 00:31:22.229 12:14:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:22.229 12:14:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:31:22.229 ************************************ 00:31:22.229 END TEST nvmf_digest 00:31:22.229 ************************************ 00:31:22.229 12:14:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:22.229 12:14:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:22.229 12:14:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:22.229 12:14:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.229 ************************************ 00:31:22.229 START TEST nvmf_host_discovery 00:31:22.229 ************************************ 00:31:22.229 12:14:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:22.229 * Looking for test storage... 00:31:22.229 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:22.229 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:22.229 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:31:22.229 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:22.229 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:22.229 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:22.229 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:22.229 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:22.229 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:31:22.229 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:31:22.229 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:31:22.229 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:31:22.229 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:31:22.229 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:31:22.229 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:31:22.229 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:22.229 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:31:22.229 12:14:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:31:22.229 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:22.229 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:22.229 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:31:22.229 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:31:22.229 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:22.229 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:31:22.229 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:31:22.229 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:31:22.229 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:31:22.229 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:22.229 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:31:22.229 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:31:22.229 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:22.229 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:22.229 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:31:22.229 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:22.229 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:22.229 --rc lcov_branch_coverage=1 
--rc lcov_function_coverage=1 00:31:22.229 --rc genhtml_branch_coverage=1 00:31:22.229 --rc genhtml_function_coverage=1 00:31:22.229 --rc genhtml_legend=1 00:31:22.229 --rc geninfo_all_blocks=1 00:31:22.229 --rc geninfo_unexecuted_blocks=1 00:31:22.229 00:31:22.229 ' 00:31:22.229 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:22.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.229 --rc genhtml_branch_coverage=1 00:31:22.229 --rc genhtml_function_coverage=1 00:31:22.229 --rc genhtml_legend=1 00:31:22.229 --rc geninfo_all_blocks=1 00:31:22.229 --rc geninfo_unexecuted_blocks=1 00:31:22.229 00:31:22.229 ' 00:31:22.229 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:22.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.229 --rc genhtml_branch_coverage=1 00:31:22.229 --rc genhtml_function_coverage=1 00:31:22.229 --rc genhtml_legend=1 00:31:22.229 --rc geninfo_all_blocks=1 00:31:22.229 --rc geninfo_unexecuted_blocks=1 00:31:22.229 00:31:22.229 ' 00:31:22.229 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:22.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.229 --rc genhtml_branch_coverage=1 00:31:22.229 --rc genhtml_function_coverage=1 00:31:22.229 --rc genhtml_legend=1 00:31:22.229 --rc geninfo_all_blocks=1 00:31:22.229 --rc geninfo_unexecuted_blocks=1 00:31:22.229 00:31:22.229 ' 00:31:22.229 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:22.229 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:31:22.229 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:22.229 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:31:22.229 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh 
]] 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@50 -- # : 0 00:31:22.230 
12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:31:22.230 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@54 -- # have_pci_nics=0 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # DISCOVERY_PORT=8009 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@12 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@15 -- # NQN=nqn.2016-06.io.spdk:cnode 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@18 -- # HOST_SOCK=/tmp/host.sock 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # nvmftestinit 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@294 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # prepare_net_devs 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # local -g is_hw=no 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # remove_target_ns 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # xtrace_disable 00:31:22.230 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.799 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:28.799 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@131 -- # pci_devs=() 00:31:28.799 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@131 -- # local -a pci_devs 00:31:28.799 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@132 -- # pci_net_devs=() 00:31:28.799 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:31:28.799 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@133 -- # pci_drivers=() 00:31:28.799 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@133 -- # local -A pci_drivers 00:31:28.799 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@135 -- # net_devs=() 00:31:28.799 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@135 -- # local -ga net_devs 00:31:28.799 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@136 -- # e810=() 00:31:28.799 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@136 -- # local -ga e810 00:31:28.799 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@137 -- # x722=() 00:31:28.799 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@137 -- # local -ga x722 00:31:28.799 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@138 -- # mlx=() 00:31:28.799 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@138 -- # local -ga mlx 00:31:28.799 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:28.799 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:28.799 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:28.799 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:28.799 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:28.799 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:28.799 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:28.799 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:28.799 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:28.799 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:28.799 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:28.800 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:31:28.800 12:15:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:28.800 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # [[ up == up ]] 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:28.800 
Found net devices under 0000:86:00.0: cvl_0_0 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # [[ up == up ]] 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:28.800 Found net devices under 0000:86:00.1: cvl_0_1 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # is_hw=yes 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@257 -- # create_target_ns 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@27 -- # local -gA dev_map 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@28 -- # local -g _dev 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 
00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@44 -- # ips=() 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 
00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@11 -- # local val=167772161 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:31:28.800 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:31:28.800 10.0.0.1 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@11 -- # local val=167772162 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:31:28.800 10.0.0.2 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@76 -- # set_up cvl_0_1 
NVMF_TARGET_NS_CMD 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@38 -- # ping_ips 1 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@96 -- # local 
pairs=1 pair 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@107 -- # local dev=initiator0 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:31:28.800 12:15:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:31:28.800 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:28.800 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.377 ms 00:31:28.800 00:31:28.800 --- 10.0.0.1 ping statistics --- 00:31:28.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.800 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # get_net_dev target0 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/setup.sh@107 -- # local dev=target0 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:31:28.800 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:31:28.801 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
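The addresses exercised by the pings in this trace (10.0.0.1 on cvl_0_0, 10.0.0.2 on cvl_0_1 inside the nvmf_ns_spdk namespace) are derived from the integer pool value 167772161 (0x0a000001) by the `val_to_ip` helper visible at `nvmf/setup.sh@11-13` above. A minimal re-implementation sketch of that conversion (the real script receives the octets pre-split; the bit-shifting here is an assumption about an equivalent formulation):

```shell
# Sketch of the val_to_ip conversion traced at nvmf/setup.sh@11-13:
# split a 32-bit integer into dotted-quad octets with printf.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
    $(( (val >>  8) & 0xff )) $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1 (initiator side, cvl_0_0)
val_to_ip 167772162   # 10.0.0.2 (target side, cvl_0_1 in nvmf_ns_spdk)
```

This is why the loop at `setup.sh@49` can allocate each initiator/target pair as `("$ip" $((++ip)))`: consecutive integers map to consecutive host addresses in the 10.0.0.0/24 pool.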
00:31:28.801 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:31:28.801 00:31:28.801 --- 10.0.0.2 ping statistics --- 00:31:28.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.801 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # (( pair++ )) 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # return 0 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@107 -- # local dev=initiator0 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@107 -- # local dev=initiator1 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # [[ 
-n '' ]] 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # return 1 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # dev= 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@169 -- # return 0 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # get_net_dev target0 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@107 -- # local dev=target0 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # get_net_dev target1 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@107 -- # local dev=target1 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # return 1 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # dev= 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@169 -- # return 0 00:31:28.801 12:15:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmfappstart -m 0x2 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # nvmfpid=236602 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # waitforlisten 236602 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 236602 ']' 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 
00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:28.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:28.801 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.801 [2024-12-05 12:15:02.337292] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:31:28.801 [2024-12-05 12:15:02.337342] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:28.801 [2024-12-05 12:15:02.417698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:28.801 [2024-12-05 12:15:02.456787] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:28.801 [2024-12-05 12:15:02.456819] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:28.801 [2024-12-05 12:15:02.456826] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:28.801 [2024-12-05 12:15:02.456832] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:28.801 [2024-12-05 12:15:02.456837] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
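The `waitforlisten 236602` call above blocks until the just-launched `nvmf_tgt` process is accepting RPCs on /var/tmp/spdk.sock (hence the "Waiting for process to start up and listen on UNIX domain socket..." message). A hypothetical sketch of that kind of readiness poll, assuming a simple existence check on the socket path with a bounded retry count (the real `waitforlisten` in autotest_common.sh also verifies the pid is alive and issues a test RPC):

```shell
# Hypothetical readiness poll: wait until a UNIX domain socket appears,
# retrying with a short sleep, up to a bounded number of attempts.
wait_for_socket() {
  local sock=$1 retries=${2:-100}
  while (( retries-- > 0 )); do
    [ -S "$sock" ] && return 0   # -S: path exists and is a socket
    sleep 0.1
  done
  return 1                       # timed out; caller should fail the test
}
```

Bounding the retries matters in CI: if the target crashes during startup, an unbounded poll would hang the whole pipeline instead of failing this one test.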
00:31:28.801 [2024-12-05 12:15:02.457435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:29.060 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:29.060 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:31:29.060 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:31:29.060 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:29.060 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.060 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:29.060 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:29.060 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.060 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.060 [2024-12-05 12:15:03.195340] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:29.060 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.060 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:31:29.060 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.060 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.060 [2024-12-05 12:15:03.207541] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:29.060 12:15:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.060 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # rpc_cmd bdev_null_create null0 1000 512 00:31:29.060 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.060 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.060 null0 00:31:29.060 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.060 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@31 -- # rpc_cmd bdev_null_create null1 1000 512 00:31:29.060 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.060 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.060 null1 00:31:29.060 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.060 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd bdev_wait_for_examine 00:31:29.060 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.060 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.060 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.060 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@40 -- # hostpid=236675 00:31:29.060 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:31:29.060 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@41 -- # waitforlisten 236675 /tmp/host.sock 00:31:29.060 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 236675 ']' 00:31:29.060 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:31:29.060 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:29.060 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:29.060 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:29.060 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:29.060 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.318 [2024-12-05 12:15:03.285420] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:31:29.318 [2024-12-05 12:15:03.285459] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid236675 ] 00:31:29.318 [2024-12-05 12:15:03.360748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:29.318 [2024-12-05 12:15:03.401868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:29.318 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:29.318 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:31:29.318 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@43 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:29.319 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:31:29.319 12:15:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.319 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # notify_id=0 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@78 -- # get_subsystem_names 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name' 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:31:29.577 12:15:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # get_bdev_list 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@82 -- # get_subsystem_names 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name' 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort 
00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_bdev_list 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # get_subsystem_names 00:31:29.577 
12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name' 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_bdev_list 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.577 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.836 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:31:29.836 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:31:29.836 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:31:29.836 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.836 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:31:29.836 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:31:29.836 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.836 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.836 [2024-12-05 12:15:03.821110] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:29.836 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.836 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_subsystem_names 00:31:29.836 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:29.836 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name' 00:31:29.836 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.836 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.836 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort 00:31:29.836 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs 00:31:29.836 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.836 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:31:29.836 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@93 -- # get_bdev_list 00:31:29.836 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:29.836 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:31:29.837 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:31:29.837 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.837 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:31:29.837 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.837 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.837 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:31:29.837 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@94 -- # is_notification_count_eq 0 00:31:29.837 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # expected_count=0 00:31:29.837 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:29.837 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:29.837 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:31:29.837 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:31:29.837 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:29.837 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:31:29.837 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:29.837 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.837 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.837 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@69 -- # jq '. | length' 00:31:29.837 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.837 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # notification_count=0 00:31:29.837 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@70 -- # notify_id=0 00:31:29.837 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:31:29.837 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:31:29.837 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:31:29.837 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.837 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.837 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.837 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@100 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:29.837 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:29.837 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:31:29.837 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:31:29.837 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:29.837 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:31:29.837 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:29.837 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.837 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.837 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name' 00:31:29.837 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort 00:31:29.837 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs 00:31:29.837 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.837 12:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:31:29.837 12:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:31:30.406 [2024-12-05 12:15:04.573521] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:30.406 [2024-12-05 12:15:04.573540] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:30.406 [2024-12-05 12:15:04.573553] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:30.665 [2024-12-05 12:15:04.660808] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:30.924 [2024-12-05 12:15:04.885895] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:31:30.924 [2024-12-05 12:15:04.886615] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xe02ca0:1 started. 
00:31:30.924 [2024-12-05 12:15:04.887999] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:30.924 [2024-12-05 12:15:04.888015] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:30.924 [2024-12-05 12:15:04.892559] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xe02ca0 was disconnected and freed. delete nvme_qpair. 00:31:30.924 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:31:30.924 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:30.924 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:31:30.924 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:30.924 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name' 00:31:30.924 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.924 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort 00:31:30.924 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.924 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs 00:31:30.924 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.924 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:30.924 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:31:30.924 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@101 -- # 
waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:30.924 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:30.924 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:31:30.924 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:31:30.924 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:31:30.924 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:31:30.924 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:31:30.924 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:30.924 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.924 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:31:30.924 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.924 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:31:30.924 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.924 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:31:30.924 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:31:30.924 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@102 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:30.924 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" 
== "$NVMF_PORT" ]]' 00:31:30.924 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:31:30.924 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:31:30.924 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:31:31.182 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:31:31.182 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:31.182 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:31.182 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.182 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # sort -n 00:31:31.182 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:31.182 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # xargs 00:31:31.182 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.182 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:31:31.182 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:31:31.182 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # is_notification_count_eq 1 00:31:31.182 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # expected_count=1 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:31.183 12:15:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # jq '. | length' 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # notification_count=1 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@70 -- # notify_id=1 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 
00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:31:31.183 [2024-12-05 12:15:05.228253] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xe0fe30:1 started. 
00:31:31.183 [2024-12-05 12:15:05.233267] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xe0fe30 was disconnected and freed. delete nvme_qpair. 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@109 -- # is_notification_count_eq 1 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # expected_count=1 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # jq '. 
| length' 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # notification_count=1 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@70 -- # notify_id=2 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:31.183 [2024-12-05 12:15:05.317154] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:31.183 [2024-12-05 12:15:05.317572] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:31.183 [2024-12-05 12:15:05.317591] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@115 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name' 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@116 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:31:31.183 12:15:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:31.183 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:31:31.441 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.441 [2024-12-05 12:15:05.404835] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:31:31.441 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:31.441 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:31:31.441 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@117 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:31.441 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:31.441 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # local max=10 00:31:31.441 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:31:31.441 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:31.441 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:31:31.441 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:31.441 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:31.441 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.441 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # sort -n 00:31:31.441 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:31.441 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # xargs 00:31:31.441 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.441 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:31:31.441 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:31:31.441 [2024-12-05 12:15:05.509586] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:31:31.441 [2024-12-05 12:15:05.509618] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:31.441 [2024-12-05 12:15:05.509626] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:31:31.441 [2024-12-05 12:15:05.509634] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:32.375 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:31:32.375 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:32.375 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:31:32.375 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:32.375 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:32.375 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.375 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # sort -n 00:31:32.375 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.375 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # xargs 00:31:32.375 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.375 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:31:32.375 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:31:32.375 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # is_notification_count_eq 0 00:31:32.375 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # expected_count=0 00:31:32.375 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:31:32.375 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:32.375 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:31:32.375 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:31:32.375 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:32.375 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:31:32.375 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:32.375 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # jq '. 
| length' 00:31:32.375 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.375 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.375 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.375 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # notification_count=0 00:31:32.375 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@70 -- # notify_id=2 00:31:32.375 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:31:32.375 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:31:32.375 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:32.375 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.375 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.635 [2024-12-05 12:15:06.573529] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:32.635 [2024-12-05 12:15:06.573549] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:32.635 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.635 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@124 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:32.635 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:32.635 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:31:32.635 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:31:32.635 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:32.635 [2024-12-05 12:15:06.580193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.635 [2024-12-05 12:15:06.580210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.635 [2024-12-05 12:15:06.580219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.635 [2024-12-05 12:15:06.580226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.635 [2024-12-05 12:15:06.580233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.635 [2024-12-05 12:15:06.580239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.635 [2024-12-05 12:15:06.580246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.635 [2024-12-05 12:15:06.580253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.635 [2024-12-05 12:15:06.580260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd4de0 is same with the state(6) to be set 00:31:32.635 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:31:32.635 12:15:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:32.635 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name' 00:31:32.635 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.635 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort 00:31:32.635 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.635 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs 00:31:32.635 [2024-12-05 12:15:06.590205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd4de0 (9): Bad file descriptor 00:31:32.636 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.636 [2024-12-05 12:15:06.600242] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:31:32.636 [2024-12-05 12:15:06.600254] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:31:32.636 [2024-12-05 12:15:06.600260] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:32.636 [2024-12-05 12:15:06.600265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:32.636 [2024-12-05 12:15:06.600283] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:31:32.636 [2024-12-05 12:15:06.600478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.636 [2024-12-05 12:15:06.600493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd4de0 with addr=10.0.0.2, port=4420 00:31:32.636 [2024-12-05 12:15:06.600501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd4de0 is same with the state(6) to be set 00:31:32.636 [2024-12-05 12:15:06.600516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd4de0 (9): Bad file descriptor 00:31:32.636 [2024-12-05 12:15:06.600525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:32.636 [2024-12-05 12:15:06.600532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:32.636 [2024-12-05 12:15:06.600540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:32.636 [2024-12-05 12:15:06.600546] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:31:32.636 [2024-12-05 12:15:06.600551] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:31:32.636 [2024-12-05 12:15:06.600555] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:32.636 [2024-12-05 12:15:06.610313] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:31:32.636 [2024-12-05 12:15:06.610324] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:31:32.636 [2024-12-05 12:15:06.610328] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:32.636 [2024-12-05 12:15:06.610332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:32.636 [2024-12-05 12:15:06.610346] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:31:32.636 [2024-12-05 12:15:06.610512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.636 [2024-12-05 12:15:06.610524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd4de0 with addr=10.0.0.2, port=4420 00:31:32.636 [2024-12-05 12:15:06.610531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd4de0 is same with the state(6) to be set 00:31:32.636 [2024-12-05 12:15:06.610541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd4de0 (9): Bad file descriptor 00:31:32.636 [2024-12-05 12:15:06.610557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:32.636 [2024-12-05 12:15:06.610564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:32.636 [2024-12-05 12:15:06.610570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:32.636 [2024-12-05 12:15:06.610576] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:31:32.636 [2024-12-05 12:15:06.610580] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:31:32.636 [2024-12-05 12:15:06.610584] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:31:32.636 [2024-12-05 12:15:06.620378] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:31:32.636 [2024-12-05 12:15:06.620390] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:31:32.636 [2024-12-05 12:15:06.620394] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:32.636 [2024-12-05 12:15:06.620398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:32.636 [2024-12-05 12:15:06.620413] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:31:32.636 [2024-12-05 12:15:06.620529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.636 [2024-12-05 12:15:06.620540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd4de0 with addr=10.0.0.2, port=4420 00:31:32.636 [2024-12-05 12:15:06.620553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd4de0 is same with the state(6) to be set 00:31:32.636 [2024-12-05 12:15:06.620563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd4de0 (9): Bad file descriptor 00:31:32.636 [2024-12-05 12:15:06.620572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:32.636 [2024-12-05 12:15:06.620578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:32.636 [2024-12-05 12:15:06.620585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:32.636 [2024-12-05 12:15:06.620590] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:31:32.636 [2024-12-05 12:15:06.620595] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:31:32.636 [2024-12-05 12:15:06.620599] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:32.636 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:32.636 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:31:32.636 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@125 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:32.636 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:32.636 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:31:32.636 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:31:32.636 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:32.636 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:31:32.636 [2024-12-05 12:15:06.630444] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:31:32.636 [2024-12-05 12:15:06.630454] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:31:32.636 [2024-12-05 12:15:06.630459] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:31:32.636 [2024-12-05 12:15:06.630463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:32.636 [2024-12-05 12:15:06.630475] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:31:32.636 [2024-12-05 12:15:06.630649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.636 [2024-12-05 12:15:06.630661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd4de0 with addr=10.0.0.2, port=4420 00:31:32.636 [2024-12-05 12:15:06.630669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd4de0 is same with the state(6) to be set 00:31:32.636 [2024-12-05 12:15:06.630679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd4de0 (9): Bad file descriptor 00:31:32.636 [2024-12-05 12:15:06.630694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:32.636 [2024-12-05 12:15:06.630701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:32.636 [2024-12-05 12:15:06.630708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:32.636 [2024-12-05 12:15:06.630713] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:31:32.636 [2024-12-05 12:15:06.630718] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:31:32.636 [2024-12-05 12:15:06.630724] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:31:32.636 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:32.636 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:31:32.637 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.637 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:31:32.637 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.637 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:31:32.637 [2024-12-05 12:15:06.640507] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:31:32.637 [2024-12-05 12:15:06.640521] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:31:32.637 [2024-12-05 12:15:06.640525] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:32.637 [2024-12-05 12:15:06.640530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:32.637 [2024-12-05 12:15:06.640544] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:31:32.637 [2024-12-05 12:15:06.640723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.637 [2024-12-05 12:15:06.640736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd4de0 with addr=10.0.0.2, port=4420 00:31:32.637 [2024-12-05 12:15:06.640744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd4de0 is same with the state(6) to be set 00:31:32.637 [2024-12-05 12:15:06.640756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd4de0 (9): Bad file descriptor 00:31:32.637 [2024-12-05 12:15:06.640766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:32.637 [2024-12-05 12:15:06.640772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:32.637 [2024-12-05 12:15:06.640779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:32.637 [2024-12-05 12:15:06.640785] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:31:32.637 [2024-12-05 12:15:06.640789] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:31:32.637 [2024-12-05 12:15:06.640794] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:32.637 [2024-12-05 12:15:06.650575] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:31:32.637 [2024-12-05 12:15:06.650585] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:31:32.637 [2024-12-05 12:15:06.650589] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:32.637 [2024-12-05 12:15:06.650593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:32.637 [2024-12-05 12:15:06.650607] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:31:32.637 [2024-12-05 12:15:06.650811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.637 [2024-12-05 12:15:06.650823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd4de0 with addr=10.0.0.2, port=4420 00:31:32.637 [2024-12-05 12:15:06.650830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd4de0 is same with the state(6) to be set 00:31:32.637 [2024-12-05 12:15:06.650840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd4de0 (9): Bad file descriptor 00:31:32.637 [2024-12-05 12:15:06.650859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:32.637 [2024-12-05 12:15:06.650865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:32.637 [2024-12-05 12:15:06.650872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:32.637 [2024-12-05 12:15:06.650877] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:31:32.637 [2024-12-05 12:15:06.650882] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:31:32.637 [2024-12-05 12:15:06.650886] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:31:32.637 [2024-12-05 12:15:06.660638] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:31:32.637 [2024-12-05 12:15:06.660650] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:31:32.637 [2024-12-05 12:15:06.660655] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:32.637 [2024-12-05 12:15:06.660659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:32.637 [2024-12-05 12:15:06.660673] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:31:32.637 [2024-12-05 12:15:06.660840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.637 [2024-12-05 12:15:06.660853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd4de0 with addr=10.0.0.2, port=4420 00:31:32.637 [2024-12-05 12:15:06.660860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd4de0 is same with the state(6) to be set 00:31:32.637 [2024-12-05 12:15:06.660870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd4de0 (9): Bad file descriptor 00:31:32.637 [2024-12-05 12:15:06.660880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:32.637 [2024-12-05 12:15:06.660886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:32.637 [2024-12-05 12:15:06.660892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:32.637 [2024-12-05 12:15:06.660898] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:31:32.637 [2024-12-05 12:15:06.660902] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:31:32.637 [2024-12-05 12:15:06.660906] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:32.637 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.637 [2024-12-05 12:15:06.670703] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:31:32.637 [2024-12-05 12:15:06.670715] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:31:32.637 [2024-12-05 12:15:06.670720] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:32.637 [2024-12-05 12:15:06.670724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:32.637 [2024-12-05 12:15:06.670736] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:31:32.637 [2024-12-05 12:15:06.670837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.637 [2024-12-05 12:15:06.670849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd4de0 with addr=10.0.0.2, port=4420 00:31:32.637 [2024-12-05 12:15:06.670856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd4de0 is same with the state(6) to be set 00:31:32.637 [2024-12-05 12:15:06.670868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd4de0 (9): Bad file descriptor 00:31:32.637 [2024-12-05 12:15:06.670879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:32.637 [2024-12-05 12:15:06.670885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:32.637 [2024-12-05 12:15:06.670892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:32.637 [2024-12-05 12:15:06.670897] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:31:32.637 [2024-12-05 12:15:06.670902] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:31:32.637 [2024-12-05 12:15:06.670907] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:31:32.637 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:32.637 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:31:32.638 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@126 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:32.638 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:32.638 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:31:32.638 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:31:32.638 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:31:32.638 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:31:32.638 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:32.638 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.638 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.638 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:32.638 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # sort -n 00:31:32.638 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # xargs 00:31:32.638 [2024-12-05 12:15:06.680767] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
00:31:32.638 [2024-12-05 12:15:06.680779] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:31:32.638 [2024-12-05 12:15:06.680783] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:32.638 [2024-12-05 12:15:06.680787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:32.638 [2024-12-05 12:15:06.680799] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:31:32.638 [2024-12-05 12:15:06.680942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.638 [2024-12-05 12:15:06.680953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd4de0 with addr=10.0.0.2, port=4420 00:31:32.638 [2024-12-05 12:15:06.680961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd4de0 is same with the state(6) to be set 00:31:32.638 [2024-12-05 12:15:06.680970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd4de0 (9): Bad file descriptor 00:31:32.638 [2024-12-05 12:15:06.680979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:32.638 [2024-12-05 12:15:06.680988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:32.638 [2024-12-05 12:15:06.680996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:32.638 [2024-12-05 12:15:06.681001] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:31:32.638 [2024-12-05 12:15:06.681005] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:31:32.638 [2024-12-05 12:15:06.681010] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:32.638 [2024-12-05 12:15:06.690829] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:31:32.638 [2024-12-05 12:15:06.690842] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:31:32.638 [2024-12-05 12:15:06.690846] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:32.638 [2024-12-05 12:15:06.690850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:32.638 [2024-12-05 12:15:06.690863] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:31:32.638 [2024-12-05 12:15:06.691022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.638 [2024-12-05 12:15:06.691033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd4de0 with addr=10.0.0.2, port=4420 00:31:32.638 [2024-12-05 12:15:06.691040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd4de0 is same with the state(6) to be set 00:31:32.638 [2024-12-05 12:15:06.691051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd4de0 (9): Bad file descriptor 00:31:32.638 [2024-12-05 12:15:06.691065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:32.638 [2024-12-05 12:15:06.691072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:32.638 [2024-12-05 12:15:06.691079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:31:32.638 [2024-12-05 12:15:06.691085] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:31:32.638 [2024-12-05 12:15:06.691089] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:31:32.638 [2024-12-05 12:15:06.691093] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:32.638 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.638 [2024-12-05 12:15:06.700548] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:31:32.638 [2024-12-05 12:15:06.700563] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:32.638 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:31:32.638 12:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:31:33.575 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:31:33.575 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:31:33.575 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:31:33.575 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:33.575 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:33.575 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.575 12:15:07 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # sort -n 00:31:33.575 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:33.575 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # xargs 00:31:33.575 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # is_notification_count_eq 0 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # expected_count=0 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.834 
12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # jq '. | length' 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # notification_count=0 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@70 -- # notify_id=2 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_subsystem_names 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name' 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:33.834 12:15:07 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@133 -- # is_notification_count_eq 2 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # expected_count=2 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # jq '. | length' 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # notification_count=2 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@70 -- # notify_id=4 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.834 12:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:35.211 [2024-12-05 12:15:09.044508] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:35.211 [2024-12-05 12:15:09.044528] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:35.211 [2024-12-05 12:15:09.044540] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:35.211 [2024-12-05 12:15:09.132799] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 
new subsystem nvme0 00:31:35.211 [2024-12-05 12:15:09.238441] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:31:35.211 [2024-12-05 12:15:09.239048] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xdcd870:1 started. 00:31:35.211 [2024-12-05 12:15:09.240522] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:35.211 [2024-12-05 12:15:09.240547] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:31:35.211 [2024-12-05 12:15:09.243746] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xdcd870 was disconnected and freed. delete nvme_qpair. 
00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:35.211 request: 00:31:35.211 { 00:31:35.211 "name": "nvme", 00:31:35.211 "trtype": "tcp", 00:31:35.211 "traddr": "10.0.0.2", 00:31:35.211 "adrfam": "ipv4", 00:31:35.211 "trsvcid": "8009", 00:31:35.211 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:35.211 "wait_for_attach": true, 00:31:35.211 "method": "bdev_nvme_start_discovery", 00:31:35.211 "req_id": 1 00:31:35.211 } 00:31:35.211 Got JSON-RPC error response 00:31:35.211 response: 00:31:35.211 { 00:31:35.211 "code": -17, 00:31:35.211 "message": "File exists" 00:31:35.211 } 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@140 -- # 
get_discovery_ctrlrs 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # jq -r '.[].name' 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # sort 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # xargs 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@140 -- # [[ nvme == \n\v\m\e ]] 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # get_bdev_list 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:35.211 request: 00:31:35.211 { 00:31:35.211 "name": "nvme_second", 00:31:35.211 "trtype": "tcp", 00:31:35.211 "traddr": "10.0.0.2", 00:31:35.211 "adrfam": "ipv4", 00:31:35.211 "trsvcid": "8009", 00:31:35.211 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:35.211 "wait_for_attach": true, 00:31:35.211 "method": "bdev_nvme_start_discovery", 00:31:35.211 "req_id": 1 00:31:35.211 } 00:31:35.211 Got JSON-RPC error response 00:31:35.211 response: 00:31:35.211 { 00:31:35.211 "code": -17, 00:31:35.211 "message": "File exists" 00:31:35.211 } 
00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # jq -r '.[].name' 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # xargs 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # sort 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:35.211 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.471 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:31:35.471 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@147 -- # get_bdev_list 00:31:35.471 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:35.471 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:31:35.471 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:31:35.471 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:31:35.471 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:35.471 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:31:35.471 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.471 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:35.471 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:35.471 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:31:35.471 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:35.471 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:31:35.471 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:35.471 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:31:35.471 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:35.471 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:35.471 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:31:35.471 12:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:36.406 [2024-12-05 12:15:10.471882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.406 [2024-12-05 12:15:10.471920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe560e0 with addr=10.0.0.2, port=8010 00:31:36.406 [2024-12-05 12:15:10.471939] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:36.406 [2024-12-05 12:15:10.471946] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:36.406 [2024-12-05 12:15:10.471954] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:37.340 [2024-12-05 12:15:11.474304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.341 [2024-12-05 12:15:11.474328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe560e0 with addr=10.0.0.2, port=8010 00:31:37.341 [2024-12-05 12:15:11.474339] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:37.341 [2024-12-05 12:15:11.474349] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:37.341 [2024-12-05 12:15:11.474355] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:38.716 [2024-12-05 12:15:12.476556] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:31:38.716 request: 00:31:38.716 { 00:31:38.716 "name": "nvme_second", 00:31:38.716 "trtype": "tcp", 00:31:38.716 "traddr": "10.0.0.2", 00:31:38.716 "adrfam": "ipv4", 00:31:38.716 "trsvcid": "8010", 00:31:38.716 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:38.716 "wait_for_attach": false, 00:31:38.716 "attach_timeout_ms": 3000, 00:31:38.716 "method": "bdev_nvme_start_discovery", 00:31:38.716 "req_id": 1 
00:31:38.716 } 00:31:38.716 Got JSON-RPC error response 00:31:38.716 response: 00:31:38.716 { 00:31:38.716 "code": -110, 00:31:38.716 "message": "Connection timed out" 00:31:38.716 } 00:31:38.716 12:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:38.716 12:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:31:38.716 12:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:38.716 12:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:38.716 12:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:38.716 12:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:31:38.716 12:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:38.716 12:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # jq -r '.[].name' 00:31:38.716 12:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # sort 00:31:38.716 12:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:38.716 12:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # xargs 00:31:38.716 12:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:38.716 12:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:38.716 12:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:31:38.716 12:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@154 -- # trap - SIGINT SIGTERM EXIT 00:31:38.716 12:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@156 -- # kill 236675 00:31:38.716 12:15:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # nvmftestfini 00:31:38.716 12:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # nvmfcleanup 00:31:38.716 12:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@99 -- # sync 00:31:38.716 12:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:31:38.716 12:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@102 -- # set +e 00:31:38.716 12:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@103 -- # for i in {1..20} 00:31:38.716 12:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:31:38.716 rmmod nvme_tcp 00:31:38.716 rmmod nvme_fabrics 00:31:38.716 rmmod nvme_keyring 00:31:38.716 12:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:31:38.716 12:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@106 -- # set -e 00:31:38.716 12:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@107 -- # return 0 00:31:38.716 12:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # '[' -n 236602 ']' 00:31:38.716 12:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@337 -- # killprocess 236602 00:31:38.716 12:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 236602 ']' 00:31:38.716 12:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 236602 00:31:38.716 12:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:31:38.716 12:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:38.716 12:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 236602 00:31:38.716 12:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:31:38.716 12:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:38.716 12:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 236602' 00:31:38.716 killing process with pid 236602 00:31:38.716 12:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 236602 00:31:38.716 12:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 236602 00:31:38.716 12:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:31:38.716 12:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # nvmf_fini 00:31:38.716 12:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@264 -- # local dev 00:31:38.716 12:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@267 -- # remove_target_ns 00:31:38.716 12:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:31:38.716 12:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:31:38.716 12:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:31:41.251 12:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@268 -- # delete_main_bridge 00:31:41.251 12:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:31:41.251 12:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@130 -- # return 0 00:31:41.251 12:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:31:41.251 12:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:31:41.251 12:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@275 -- # (( 4 
== 3 )) 00:31:41.251 12:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:31:41.251 12:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:31:41.251 12:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:31:41.251 12:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:31:41.251 12:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:31:41.251 12:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:31:41.251 12:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:31:41.251 12:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:31:41.251 12:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:31:41.251 12:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:31:41.251 12:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:31:41.251 12:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:31:41.251 12:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:31:41.251 12:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:31:41.251 12:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@41 -- # _dev=0 00:31:41.251 12:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@41 -- # dev_map=() 00:31:41.251 12:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@284 -- # iptr 00:31:41.251 12:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@542 -- # iptables-save 00:31:41.251 12:15:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:31:41.251 12:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@542 -- # iptables-restore 00:31:41.251 00:31:41.251 real 0m18.896s 00:31:41.251 user 0m23.161s 00:31:41.251 sys 0m6.009s 00:31:41.251 12:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:41.251 12:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:41.251 ************************************ 00:31:41.251 END TEST nvmf_host_discovery 00:31:41.251 ************************************ 00:31:41.251 12:15:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:41.251 12:15:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:41.251 12:15:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:41.251 12:15:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.251 ************************************ 00:31:41.251 START TEST nvmf_discovery_remove_ifc 00:31:41.251 ************************************ 00:31:41.251 12:15:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:41.251 * Looking for test storage... 
00:31:41.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:41.251 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:41.251 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:31:41.251 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:41.251 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:41.251 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:41.251 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:41.251 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:41.251 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:31:41.251 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:31:41.251 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:31:41.251 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:31:41.251 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:31:41.251 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:31:41.251 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:31:41.251 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:41.251 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:31:41.251 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:31:41.251 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:41.251 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:41.251 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:31:41.251 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:31:41.251 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:41.251 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:31:41.251 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:31:41.251 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:31:41.251 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:31:41.251 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:41.251 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:31:41.251 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:31:41.251 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:41.251 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:41.251 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:31:41.251 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:41.251 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:31:41.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:41.251 --rc genhtml_branch_coverage=1 00:31:41.251 --rc genhtml_function_coverage=1 00:31:41.251 --rc genhtml_legend=1 00:31:41.251 --rc geninfo_all_blocks=1 00:31:41.251 --rc geninfo_unexecuted_blocks=1 00:31:41.251 00:31:41.251 ' 00:31:41.251 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:41.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:41.251 --rc genhtml_branch_coverage=1 00:31:41.251 --rc genhtml_function_coverage=1 00:31:41.251 --rc genhtml_legend=1 00:31:41.251 --rc geninfo_all_blocks=1 00:31:41.252 --rc geninfo_unexecuted_blocks=1 00:31:41.252 00:31:41.252 ' 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:41.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:41.252 --rc genhtml_branch_coverage=1 00:31:41.252 --rc genhtml_function_coverage=1 00:31:41.252 --rc genhtml_legend=1 00:31:41.252 --rc geninfo_all_blocks=1 00:31:41.252 --rc geninfo_unexecuted_blocks=1 00:31:41.252 00:31:41.252 ' 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:41.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:41.252 --rc genhtml_branch_coverage=1 00:31:41.252 --rc genhtml_function_coverage=1 00:31:41.252 --rc genhtml_legend=1 00:31:41.252 --rc geninfo_all_blocks=1 00:31:41.252 --rc geninfo_unexecuted_blocks=1 00:31:41.252 00:31:41.252 ' 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:31:41.252 12:15:15 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- nvmf/common.sh@50 -- # : 0 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:31:41.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@54 -- # have_pci_nics=0 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # discovery_port=8009 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@18 -- # nqn=nqn.2016-06.io.spdk:cnode 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # host_nqn=nqn.2021-12.io.spdk:test 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@21 -- # host_sock=/tmp/host.sock 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
nvmftestinit 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # prepare_net_devs 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # local -g is_hw=no 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # remove_target_ns 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # xtrace_disable 00:31:41.252 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:47.824 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:47.824 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@131 -- # pci_devs=() 00:31:47.824 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@131 -- # local -a pci_devs 00:31:47.824 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@132 -- # pci_net_devs=() 00:31:47.824 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@132 -- # local -a pci_net_devs 00:31:47.824 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@133 -- # pci_drivers=() 00:31:47.824 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@133 -- # local -A pci_drivers 00:31:47.824 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@135 -- # net_devs=() 00:31:47.824 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@135 -- # local -ga net_devs 00:31:47.824 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@136 -- # e810=() 00:31:47.824 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@136 -- # local -ga e810 00:31:47.824 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@137 -- # x722=() 00:31:47.824 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@137 -- # local -ga x722 00:31:47.824 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@138 -- # mlx=() 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@138 -- # local -ga mlx 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:47.825 12:15:20 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:47.825 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:47.825 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@232 -- # [[ tcp 
== tcp ]] 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # [[ up == up ]] 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:47.825 Found net devices under 0000:86:00.0: cvl_0_0 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # [[ up == up ]] 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:47.825 Found net devices under 0000:86:00.1: cvl_0_1 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # 
net_devs+=("${pci_net_devs[@]}") 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # is_hw=yes 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@257 -- # create_target_ns 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:47.825 12:15:20 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@27 -- # local -gA dev_map 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@28 -- # local -g _dev 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@44 -- # ips=() 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 
00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:31:47.825 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:31:47.826 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:31:47.826 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:31:47.826 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:31:47.826 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:47.826 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:31:47.826 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@11 -- # local val=167772161 00:31:47.826 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:31:47.826 12:15:20 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:31:47.826 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:31:47.826 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:31:47.826 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:31:47.826 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:31:47.826 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:31:47.826 10.0.0.1 00:31:47.826 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:31:47.826 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:31:47.826 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:47.826 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:47.826 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:31:47.826 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@11 -- # local val=167772162 00:31:47.826 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:31:47.826 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:31:47.826 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:31:47.826 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@208 -- # ip 
netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:31:47.826 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:31:47.826 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:31:47.826 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:31:47.826 10.0.0.2 00:31:47.826 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:31:47.826 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:31:47.826 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:31:47.826 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:31:47.826 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:31:47.826 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:31:47.826 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:31:47.826 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:47.826 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:47.826 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:31:47.826 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@78 -- # [[ phy == veth 
]] 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@38 -- # ping_ips 1 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@165 -- # local 
dev=initiator0 in_ns= ip 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # local dev=initiator0 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@92 -- # eval 'ip 
netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:31:47.826 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:47.826 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.433 ms 00:31:47.826 00:31:47.826 --- 10.0.0.1 ping statistics --- 00:31:47.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:47.826 rtt min/avg/max/mdev = 0.433/0.433/0.433/0.000 ms 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # get_net_dev target0 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # local dev=target0 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:31:47.826 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:31:47.826 12:15:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:31:47.827 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:47.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:31:47.827 00:31:47.827 --- 10.0.0.2 ping statistics --- 00:31:47.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:47.827 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # (( pair++ )) 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # return 0 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # local 
dev=initiator0 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # local dev=initiator1 00:31:47.827 
12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # return 1 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # dev= 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@169 -- # return 0 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # get_net_dev target0 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # local dev=target0 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:31:47.827 12:15:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # get_net_dev target1 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # local dev=target1 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:31:47.827 12:15:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # return 1 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # dev= 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@169 -- # return 0 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@35 -- # nvmfappstart -m 0x2 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # nvmfpid=242388 00:31:47.827 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
-m 0x2 00:31:47.828 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # waitforlisten 242388 00:31:47.828 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 242388 ']' 00:31:47.828 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:47.828 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:47.828 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:47.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:47.828 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:47.828 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:47.828 [2024-12-05 12:15:21.281406] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:31:47.828 [2024-12-05 12:15:21.281449] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:47.828 [2024-12-05 12:15:21.359807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:47.828 [2024-12-05 12:15:21.399890] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:47.828 [2024-12-05 12:15:21.399924] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:47.828 [2024-12-05 12:15:21.399931] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:47.828 [2024-12-05 12:15:21.399937] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:47.828 [2024-12-05 12:15:21.399942] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:47.828 [2024-12-05 12:15:21.400499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:47.828 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:47.828 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:31:47.828 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:31:47.828 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:47.828 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:47.828 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:47.828 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@38 -- # rpc_cmd 00:31:47.828 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.828 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:47.828 [2024-12-05 12:15:21.543147] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:47.828 [2024-12-05 12:15:21.551311] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:47.828 null0 00:31:47.828 [2024-12-05 12:15:21.583303] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:31:47.828 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.828 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@54 -- # hostpid=242414 00:31:47.828 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:31:47.828 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@55 -- # waitforlisten 242414 /tmp/host.sock 00:31:47.828 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 242414 ']' 00:31:47.828 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:31:47.828 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:47.828 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:47.828 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:47.828 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:47.828 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:47.828 [2024-12-05 12:15:21.653735] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:31:47.828 [2024-12-05 12:15:21.653772] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid242414 ] 00:31:47.828 [2024-12-05 12:15:21.726718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:47.828 [2024-12-05 12:15:21.766949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:47.828 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:47.828 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:31:47.828 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@57 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:47.828 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:31:47.828 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.828 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:47.828 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.828 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@61 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:31:47.828 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.828 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:47.828 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.828 12:15:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:31:47.828 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.828 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:48.762 [2024-12-05 12:15:22.909236] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:48.762 [2024-12-05 12:15:22.909255] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:48.762 [2024-12-05 12:15:22.909269] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:49.020 [2024-12-05 12:15:23.035655] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:49.020 [2024-12-05 12:15:23.211595] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:31:49.021 [2024-12-05 12:15:23.212393] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x23ac850:1 started. 
00:31:49.021 [2024-12-05 12:15:23.213731] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:49.021 [2024-12-05 12:15:23.213769] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:49.021 [2024-12-05 12:15:23.213788] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:49.021 [2024-12-05 12:15:23.213800] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:49.021 [2024-12-05 12:15:23.213818] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:49.021 12:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.021 12:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@67 -- # wait_for_bdev nvme0n1 00:31:49.021 12:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:31:49.279 [2024-12-05 12:15:23.218755] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x23ac850 was disconnected and freed. delete nvme_qpair. 
00:31:49.279 12:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:49.279 12:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:31:49.279 12:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:31:49.279 12:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.279 12:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:49.279 12:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:31:49.279 12:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.279 12:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:31:49.279 12:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@70 -- # ip netns exec nvmf_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_1 00:31:49.279 12:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@71 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 down 00:31:49.279 12:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@74 -- # wait_for_bdev '' 00:31:49.279 12:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:31:49.279 12:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:49.279 12:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:31:49.279 12:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.279 12:15:23 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:31:49.279 12:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:49.279 12:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:31:49.279 12:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.279 12:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:31:49.279 12:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:31:50.654 12:15:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:31:50.654 12:15:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:50.654 12:15:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:31:50.654 12:15:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.654 12:15:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:31:50.654 12:15:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:50.654 12:15:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:31:50.654 12:15:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.654 12:15:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:31:50.654 12:15:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:31:51.587 12:15:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # 
get_bdev_list 00:31:51.587 12:15:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:51.587 12:15:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:31:51.587 12:15:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.587 12:15:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:31:51.587 12:15:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:51.587 12:15:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:31:51.587 12:15:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.587 12:15:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:31:51.587 12:15:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:31:52.519 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:31:52.519 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:52.519 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:31:52.519 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.519 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:31:52.519 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:52.519 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:31:52.519 12:15:26 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.519 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:31:52.519 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:31:53.453 12:15:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:31:53.453 12:15:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:53.453 12:15:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:31:53.453 12:15:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.453 12:15:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:31:53.453 12:15:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:53.453 12:15:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:31:53.453 12:15:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.453 12:15:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:31:53.453 12:15:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:31:54.505 12:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:31:54.505 12:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:54.505 12:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:31:54.505 12:15:28 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.505 12:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:31:54.505 12:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:54.505 12:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:31:54.505 12:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.505 [2024-12-05 12:15:28.655247] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:31:54.505 [2024-12-05 12:15:28.655281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.505 [2024-12-05 12:15:28.655293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.505 [2024-12-05 12:15:28.655318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.505 [2024-12-05 12:15:28.655325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.505 [2024-12-05 12:15:28.655333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.505 [2024-12-05 12:15:28.655340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.505 [2024-12-05 12:15:28.655347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.505 
[2024-12-05 12:15:28.655354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.505 [2024-12-05 12:15:28.655362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.505 [2024-12-05 12:15:28.655372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.505 [2024-12-05 12:15:28.655379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2389070 is same with the state(6) to be set 00:31:54.505 [2024-12-05 12:15:28.665269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2389070 (9): Bad file descriptor 00:31:54.505 12:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:31:54.505 12:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:31:54.505 [2024-12-05 12:15:28.675304] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:31:54.505 [2024-12-05 12:15:28.675321] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:31:54.505 [2024-12-05 12:15:28.675327] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:54.505 [2024-12-05 12:15:28.675333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:54.505 [2024-12-05 12:15:28.675351] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:31:55.901 12:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:31:55.901 12:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:55.901 12:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:31:55.901 12:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.901 12:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:31:55.901 12:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:55.901 12:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:31:55.901 [2024-12-05 12:15:29.714722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:55.901 [2024-12-05 12:15:29.714798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2389070 with addr=10.0.0.2, port=4420 00:31:55.901 [2024-12-05 12:15:29.714831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2389070 is same with the state(6) to be set 00:31:55.901 [2024-12-05 12:15:29.714884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2389070 (9): Bad file descriptor 00:31:55.901 [2024-12-05 12:15:29.715833] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:31:55.901 [2024-12-05 12:15:29.715898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:55.901 [2024-12-05 12:15:29.715922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:55.901 [2024-12-05 12:15:29.715945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:55.901 [2024-12-05 12:15:29.715965] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:31:55.901 [2024-12-05 12:15:29.715981] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:31:55.901 [2024-12-05 12:15:29.715994] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:55.901 [2024-12-05 12:15:29.716016] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:55.901 [2024-12-05 12:15:29.716031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:55.901 12:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.901 12:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:31:55.901 12:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:31:56.837 [2024-12-05 12:15:30.718550] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:31:56.837 [2024-12-05 12:15:30.718576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:31:56.837 [2024-12-05 12:15:30.718595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:56.837 [2024-12-05 12:15:30.718602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:56.837 [2024-12-05 12:15:30.718610] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:31:56.837 [2024-12-05 12:15:30.718632] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:31:56.837 [2024-12-05 12:15:30.718637] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:31:56.837 [2024-12-05 12:15:30.718642] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:56.837 [2024-12-05 12:15:30.718664] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:31:56.837 [2024-12-05 12:15:30.718690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:56.837 [2024-12-05 12:15:30.718700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.837 [2024-12-05 12:15:30.718710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:56.837 [2024-12-05 12:15:30.718717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.837 [2024-12-05 12:15:30.718724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:31:56.837 [2024-12-05 12:15:30.718731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.837 [2024-12-05 12:15:30.718738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:56.837 [2024-12-05 12:15:30.718745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.837 [2024-12-05 12:15:30.718753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:56.837 [2024-12-05 12:15:30.718760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.837 [2024-12-05 12:15:30.718767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:31:56.837 [2024-12-05 12:15:30.719089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2378760 (9): Bad file descriptor 00:31:56.837 [2024-12-05 12:15:30.720099] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:31:56.837 [2024-12-05 12:15:30.720110] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:31:56.837 12:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:31:56.837 12:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:56.837 12:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:31:56.837 12:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:31:56.837 12:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:31:56.837 12:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:56.837 12:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:31:56.837 12:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.837 12:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ '' != '' ]] 00:31:56.837 12:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@77 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:31:56.837 12:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@78 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:31:56.837 12:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@81 -- # wait_for_bdev nvme1n1 00:31:56.837 12:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:31:56.837 12:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:56.837 12:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:31:56.837 12:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:31:56.838 12:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.838 12:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:56.838 12:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:31:56.838 12:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:31:56.838 12:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:56.838 12:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:31:57.772 12:15:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:31:57.772 12:15:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:57.772 12:15:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:31:57.772 12:15:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.772 12:15:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:31:57.772 12:15:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:57.772 12:15:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:31:57.772 12:15:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.772 12:15:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:57.772 12:15:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:31:58.707 [2024-12-05 12:15:32.775936] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:58.707 [2024-12-05 12:15:32.775953] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:58.707 [2024-12-05 12:15:32.775965] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:58.707 [2024-12-05 12:15:32.863218] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:31:58.966 12:15:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:31:58.966 12:15:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:58.966 12:15:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:31:58.966 12:15:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.966 12:15:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:31:58.966 12:15:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:58.966 12:15:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:31:58.966 12:15:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.966 12:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:58.966 12:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:31:58.966 [2024-12-05 12:15:33.086325] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:31:58.966 [2024-12-05 12:15:33.086960] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x23849c0:1 started. 
00:31:58.966 [2024-12-05 12:15:33.088002] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:58.966 [2024-12-05 12:15:33.088034] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:58.966 [2024-12-05 12:15:33.088051] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:58.967 [2024-12-05 12:15:33.088064] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:31:58.967 [2024-12-05 12:15:33.088071] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:58.967 [2024-12-05 12:15:33.093978] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x23849c0 was disconnected and freed. delete nvme_qpair. 00:31:59.900 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:31:59.900 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:59.900 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:31:59.900 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.900 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:31:59.900 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:59.900 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:31:59.900 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.900 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:31:59.900 12:15:34 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:59.900 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@85 -- # killprocess 242414 00:31:59.900 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 242414 ']' 00:31:59.900 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 242414 00:31:59.900 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:31:59.900 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:59.900 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 242414 00:32:00.160 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:00.160 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:00.160 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 242414' 00:32:00.160 killing process with pid 242414 00:32:00.160 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 242414 00:32:00.160 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 242414 00:32:00.160 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # nvmftestfini 00:32:00.160 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # nvmfcleanup 00:32:00.160 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@99 -- # sync 00:32:00.160 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:32:00.160 12:15:34 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@102 -- # set +e 00:32:00.160 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@103 -- # for i in {1..20} 00:32:00.160 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:32:00.160 rmmod nvme_tcp 00:32:00.160 rmmod nvme_fabrics 00:32:00.160 rmmod nvme_keyring 00:32:00.160 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:32:00.160 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@106 -- # set -e 00:32:00.160 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@107 -- # return 0 00:32:00.160 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # '[' -n 242388 ']' 00:32:00.160 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@337 -- # killprocess 242388 00:32:00.160 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 242388 ']' 00:32:00.160 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 242388 00:32:00.160 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:32:00.160 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:00.160 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 242388 00:32:00.419 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:00.419 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:00.419 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 242388' 00:32:00.419 killing process 
with pid 242388 00:32:00.419 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 242388 00:32:00.419 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 242388 00:32:00.419 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:32:00.419 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # nvmf_fini 00:32:00.419 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@264 -- # local dev 00:32:00.419 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@267 -- # remove_target_ns 00:32:00.419 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:32:00.419 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:32:00.419 12:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@268 -- # delete_main_bridge 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@130 -- # return 0 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@221 -- # local 
dev=cvl_0_0 in_ns= 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@41 -- # _dev=0 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@41 -- # dev_map=() 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@284 -- # iptr 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@542 -- # iptables-save 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:32:02.957 12:15:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@542 -- # iptables-restore 00:32:02.957 00:32:02.957 real 0m21.672s 00:32:02.957 user 0m26.875s 00:32:02.957 sys 0m5.962s 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:02.957 ************************************ 00:32:02.957 END TEST nvmf_discovery_remove_ifc 00:32:02.957 ************************************ 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@34 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.957 ************************************ 00:32:02.957 START TEST nvmf_multicontroller 00:32:02.957 ************************************ 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:32:02.957 * Looking for test storage... 
00:32:02.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:02.957 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:02.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.958 --rc genhtml_branch_coverage=1 00:32:02.958 --rc genhtml_function_coverage=1 
00:32:02.958 --rc genhtml_legend=1 00:32:02.958 --rc geninfo_all_blocks=1 00:32:02.958 --rc geninfo_unexecuted_blocks=1 00:32:02.958 00:32:02.958 ' 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:02.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.958 --rc genhtml_branch_coverage=1 00:32:02.958 --rc genhtml_function_coverage=1 00:32:02.958 --rc genhtml_legend=1 00:32:02.958 --rc geninfo_all_blocks=1 00:32:02.958 --rc geninfo_unexecuted_blocks=1 00:32:02.958 00:32:02.958 ' 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:02.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.958 --rc genhtml_branch_coverage=1 00:32:02.958 --rc genhtml_function_coverage=1 00:32:02.958 --rc genhtml_legend=1 00:32:02.958 --rc geninfo_all_blocks=1 00:32:02.958 --rc geninfo_unexecuted_blocks=1 00:32:02.958 00:32:02.958 ' 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:02.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.958 --rc genhtml_branch_coverage=1 00:32:02.958 --rc genhtml_function_coverage=1 00:32:02.958 --rc genhtml_legend=1 00:32:02.958 --rc geninfo_all_blocks=1 00:32:02.958 --rc geninfo_unexecuted_blocks=1 00:32:02.958 00:32:02.958 ' 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@50 -- # : 0 
00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:32:02.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@54 -- # have_pci_nics=0 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # nvmftestinit 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:32:02.958 12:15:36 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # prepare_net_devs 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # local -g is_hw=no 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # remove_target_ns 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_target_ns 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # xtrace_disable 00:32:02.958 12:15:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@131 -- # pci_devs=() 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@131 -- # local -a pci_devs 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@132 -- # pci_net_devs=() 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@133 -- # pci_drivers=() 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@133 -- # 
local -A pci_drivers 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@135 -- # net_devs=() 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@135 -- # local -ga net_devs 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@136 -- # e810=() 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@136 -- # local -ga e810 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@137 -- # x722=() 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@137 -- # local -ga x722 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@138 -- # mlx=() 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@138 -- # local -ga mlx 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@156 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:09.530 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:32:09.530 12:15:42 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:09.530 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # [[ up == up ]] 00:32:09.530 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:09.531 
12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:09.531 Found net devices under 0000:86:00.0: cvl_0_0 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # [[ up == up ]] 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:09.531 Found net devices under 0000:86:00.1: cvl_0_1 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # is_hw=yes 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:32:09.531 12:15:42 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@257 -- # create_target_ns 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@27 -- # local -gA dev_map 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@28 
-- # local -g _dev 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@44 -- # ips=() 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:32:09.531 12:15:42 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@11 -- # local val=167772161 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:32:09.531 10.0.0.1 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 
00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@11 -- # local val=167772162 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:32:09.531 10.0.0.2 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 
up' 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@38 -- # ping_ips 1 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:32:09.531 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@107 -- # local dev=initiator0 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # 
ip=10.0.0.1 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:32:09.532 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:09.532 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.456 ms 00:32:09.532 00:32:09.532 --- 10.0.0.1 ping statistics --- 00:32:09.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:09.532 rtt min/avg/max/mdev = 0.456/0.456/0.456/0.000 ms 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # get_net_dev target0 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@107 -- # local dev=target0 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:32:09.532 12:15:42 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:32:09.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:09.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:32:09.532 00:32:09.532 --- 10.0.0.2 ping statistics --- 00:32:09.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:09.532 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # (( pair++ )) 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # return 0 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@107 -- # local dev=initiator0 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@107 -- # local dev=initiator1 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # return 1 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # dev= 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@169 -- # return 0 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:09.532 12:15:42 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # get_net_dev target0 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@107 -- # local dev=target0 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:32:09.532 12:15:42 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # get_net_dev target1 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@107 -- # local dev=target1 00:32:09.532 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:32:09.533 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:32:09.533 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # return 1 00:32:09.533 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # dev= 00:32:09.533 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@169 -- # return 0 00:32:09.533 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:32:09.533 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:09.533 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:32:09.533 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:32:09.533 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:09.533 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:32:09.533 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:32:09.533 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@20 -- # nvmfappstart -m 0xE 00:32:09.533 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:32:09.533 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:32:09.533 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:09.533 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # nvmfpid=248112 00:32:09.533 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:09.533 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # waitforlisten 248112 00:32:09.533 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 248112 ']' 00:32:09.533 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:09.533 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:09.533 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:09.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:09.533 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:09.533 12:15:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:09.533 [2024-12-05 12:15:42.965912] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
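The 10.0.0.1/10.0.0.2 addresses carried into NVMF_FIRST_INITIATOR_IP and NVMF_FIRST_TARGET_IP above are produced by setup.sh's val_to_ip, which expands an integer from the ip_pool (0x0a000001 = 167772161) into a dotted quad via printf. A minimal standalone sketch of that conversion:

```shell
# Expand a 32-bit integer into a dotted-quad IPv4 address,
# mirroring the printf '%u.%u.%u.%u' step in nvmf/setup.sh.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $((val >> 24 & 0xff)) $((val >> 16 & 0xff)) \
    $((val >> 8 & 0xff)) $((val & 0xff))
}

val_to_ip 167772161   # 10.0.0.1 (0x0a000001)
val_to_ip 167772162   # 10.0.0.2
```

This is why consecutive interface pairs in the log get consecutive addresses: the pool integer is simply incremented by 2 per initiator/target pair.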
00:32:09.533 [2024-12-05 12:15:42.965965] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:09.533 [2024-12-05 12:15:43.027313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:09.533 [2024-12-05 12:15:43.070601] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:09.533 [2024-12-05 12:15:43.070634] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:09.533 [2024-12-05 12:15:43.070641] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:09.533 [2024-12-05 12:15:43.070647] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:09.533 [2024-12-05 12:15:43.070652] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
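The "Total cores available: 3" notice above follows directly from the -m 0xE core mask passed to nvmf_tgt: each set bit selects one CPU core for a reactor. A small sketch enumerating the set bits of the mask:

```shell
# Enumerate the CPU cores selected by the -m 0xE core mask
# (bit i set => core i gets a reactor thread).
mask=0xE
cores=()
for ((i = 0; i < 32; i++)); do
  (( (mask >> i) & 1 )) && cores+=("$i")
done
echo "cores: ${cores[*]}"   # cores: 1 2 3
```

0xE is binary 1110, so cores 1, 2, and 3 are selected, matching the three "Reactor started on core" notices in the log.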
00:32:09.533 [2024-12-05 12:15:43.071974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:09.533 [2024-12-05 12:15:43.072079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:09.533 [2024-12-05 12:15:43.072079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:09.533 [2024-12-05 12:15:43.213691] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 
00:32:09.533 Malloc0 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:09.533 [2024-12-05 12:15:43.287754] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:09.533 
12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:09.533 [2024-12-05 12:15:43.295683] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:09.533 Malloc1 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@32 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.533 12:15:43 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@39 -- # bdevperf_pid=248134 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@42 -- # waitforlisten 248134 /var/tmp/bdevperf.sock 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 248134 ']' 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
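For reference, the target-side configuration that the rpc_cmd calls above performed (two malloc-backed subsystems, each listening on both 4420 and 4421 at 10.0.0.2) corresponds roughly to the following scripts/rpc.py sequence. This is a non-runnable sketch: it assumes a live nvmf_tgt listening on the default /var/tmp/spdk.sock and is only a consolidation of the commands visible in the log.

```shell
# Configuration sketch only -- requires a running SPDK nvmf_tgt
# at the default RPC socket /var/tmp/spdk.sock.
RPC="scripts/rpc.py"

$RPC nvmf_create_transport -t tcp -o -u 8192
for i in 1 2; do
  $RPC bdev_malloc_create 64 512 -b "Malloc$((i - 1))"
  $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
  $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$((i - 1))"
  $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4421
done
```

The bdevperf process launched above then talks to its own RPC socket (/var/tmp/bdevperf.sock) to attach controllers against these listeners.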
00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:09.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:32:09.533 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@45 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:32:09.534 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.534 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:09.792 NVMe0n1 00:32:09.792 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.792 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@49 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:09.792 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.792 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@49 -- # grep -c NVMe 00:32:09.792 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:09.792 12:15:43 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.792 1 00:32:09.792 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@55 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:09.793 request: 00:32:09.793 { 00:32:09.793 "name": "NVMe0", 00:32:09.793 "trtype": "tcp", 00:32:09.793 "traddr": "10.0.0.2", 00:32:09.793 "adrfam": "ipv4", 00:32:09.793 "trsvcid": "4420", 00:32:09.793 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:32:09.793 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:32:09.793 "hostaddr": "10.0.0.1", 00:32:09.793 "prchk_reftag": false, 00:32:09.793 "prchk_guard": false, 00:32:09.793 "hdgst": false, 00:32:09.793 "ddgst": false, 00:32:09.793 "allow_unrecognized_csi": false, 00:32:09.793 "method": "bdev_nvme_attach_controller", 00:32:09.793 "req_id": 1 00:32:09.793 } 00:32:09.793 Got JSON-RPC error response 00:32:09.793 response: 00:32:09.793 { 00:32:09.793 "code": -114, 00:32:09.793 "message": "A controller named NVMe0 already exists with the specified network path" 00:32:09.793 } 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:09.793 request: 00:32:09.793 { 00:32:09.793 "name": "NVMe0", 00:32:09.793 "trtype": "tcp", 00:32:09.793 "traddr": "10.0.0.2", 00:32:09.793 "adrfam": "ipv4", 00:32:09.793 "trsvcid": "4420", 00:32:09.793 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:09.793 "hostaddr": "10.0.0.1", 00:32:09.793 "prchk_reftag": false, 00:32:09.793 "prchk_guard": false, 00:32:09.793 "hdgst": false, 00:32:09.793 "ddgst": false, 00:32:09.793 "allow_unrecognized_csi": false, 00:32:09.793 "method": "bdev_nvme_attach_controller", 00:32:09.793 "req_id": 1 00:32:09.793 } 00:32:09.793 Got JSON-RPC error response 00:32:09.793 response: 00:32:09.793 { 00:32:09.793 "code": -114, 00:32:09.793 "message": "A controller named NVMe0 already exists with the specified network path" 00:32:09.793 } 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 
00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@64 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:09.793 request: 00:32:09.793 { 00:32:09.793 "name": "NVMe0", 00:32:09.793 "trtype": "tcp", 00:32:09.793 "traddr": "10.0.0.2", 00:32:09.793 "adrfam": "ipv4", 00:32:09.793 "trsvcid": "4420", 00:32:09.793 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:09.793 
"hostaddr": "10.0.0.1", 00:32:09.793 "prchk_reftag": false, 00:32:09.793 "prchk_guard": false, 00:32:09.793 "hdgst": false, 00:32:09.793 "ddgst": false, 00:32:09.793 "multipath": "disable", 00:32:09.793 "allow_unrecognized_csi": false, 00:32:09.793 "method": "bdev_nvme_attach_controller", 00:32:09.793 "req_id": 1 00:32:09.793 } 00:32:09.793 Got JSON-RPC error response 00:32:09.793 response: 00:32:09.793 { 00:32:09.793 "code": -114, 00:32:09.793 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:32:09.793 } 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:09.793 request: 00:32:09.793 { 00:32:09.793 "name": "NVMe0", 00:32:09.793 "trtype": "tcp", 00:32:09.793 "traddr": "10.0.0.2", 00:32:09.793 "adrfam": "ipv4", 00:32:09.793 "trsvcid": "4420", 00:32:09.793 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:09.793 "hostaddr": "10.0.0.1", 00:32:09.793 "prchk_reftag": false, 00:32:09.793 "prchk_guard": false, 00:32:09.793 "hdgst": false, 00:32:09.793 "ddgst": false, 00:32:09.793 "multipath": "failover", 00:32:09.793 "allow_unrecognized_csi": false, 00:32:09.793 "method": "bdev_nvme_attach_controller", 00:32:09.793 "req_id": 1 00:32:09.793 } 00:32:09.793 Got JSON-RPC error response 00:32:09.793 response: 00:32:09.793 { 00:32:09.793 "code": -114, 00:32:09.793 "message": "A controller named NVMe0 already exists with the specified network path" 00:32:09.793 } 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:09.793 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:09.793 
12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:09.794 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:09.794 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.794 12:15:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:10.053 NVMe0n1 00:32:10.053 12:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:10.053 12:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@78 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:10.053 12:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.053 12:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:10.053 12:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:10.053 12:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@82 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:32:10.053 12:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.053 12:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:10.053 00:32:10.053 12:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:10.053 12:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@85 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_get_controllers 00:32:10.053 12:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@85 -- # grep -c NVMe 00:32:10.053 12:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.053 12:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:10.053 12:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:10.053 12:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@85 -- # '[' 2 '!=' 2 ']' 00:32:10.053 12:15:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:11.428 { 00:32:11.428 "results": [ 00:32:11.428 { 00:32:11.428 "job": "NVMe0n1", 00:32:11.428 "core_mask": "0x1", 00:32:11.428 "workload": "write", 00:32:11.428 "status": "finished", 00:32:11.428 "queue_depth": 128, 00:32:11.428 "io_size": 4096, 00:32:11.428 "runtime": 1.003266, 00:32:11.428 "iops": 25013.306540837624, 00:32:11.428 "mibps": 97.70822867514697, 00:32:11.428 "io_failed": 0, 00:32:11.428 "io_timeout": 0, 00:32:11.428 "avg_latency_us": 5110.964231539198, 00:32:11.428 "min_latency_us": 3089.554285714286, 00:32:11.428 "max_latency_us": 15416.56380952381 00:32:11.428 } 00:32:11.428 ], 00:32:11.428 "core_count": 1 00:32:11.428 } 00:32:11.428 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@93 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:32:11.428 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.428 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:11.428 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.428 12:15:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # [[ -n '' ]] 00:32:11.428 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@111 -- # killprocess 248134 00:32:11.428 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 248134 ']' 00:32:11.428 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 248134 00:32:11.428 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:32:11.428 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:11.428 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 248134 00:32:11.428 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:11.428 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:11.428 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 248134' 00:32:11.428 killing process with pid 248134 00:32:11.428 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 248134 00:32:11.428 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 248134 00:32:11.428 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:11.428 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.428 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:11.428 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.428 12:15:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@114 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:11.428 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.428 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:11.428 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.428 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # trap - SIGINT SIGTERM EXIT 00:32:11.428 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:11.428 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:32:11.428 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:32:11.428 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:32:11.428 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:32:11.686 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:32:11.686 [2024-12-05 12:15:43.399186] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:32:11.687 [2024-12-05 12:15:43.399232] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid248134 ] 00:32:11.687 [2024-12-05 12:15:43.473384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:11.687 [2024-12-05 12:15:43.514144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:11.687 [2024-12-05 12:15:44.221790] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name 24b7c96d-b055-4f09-bdbb-a827cb38cd6a already exists 00:32:11.687 [2024-12-05 12:15:44.221816] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:24b7c96d-b055-4f09-bdbb-a827cb38cd6a alias for bdev NVMe1n1 00:32:11.687 [2024-12-05 12:15:44.221823] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:32:11.687 Running I/O for 1 seconds... 00:32:11.687 24967.00 IOPS, 97.53 MiB/s 00:32:11.687 Latency(us) 00:32:11.687 [2024-12-05T11:15:45.883Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:11.687 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:32:11.687 NVMe0n1 : 1.00 25013.31 97.71 0.00 0.00 5110.96 3089.55 15416.56 00:32:11.687 [2024-12-05T11:15:45.883Z] =================================================================================================================== 00:32:11.687 [2024-12-05T11:15:45.883Z] Total : 25013.31 97.71 0.00 0.00 5110.96 3089.55 15416.56 00:32:11.687 Received shutdown signal, test time was about 1.000000 seconds 00:32:11.687 00:32:11.687 Latency(us) 00:32:11.687 [2024-12-05T11:15:45.883Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:11.687 [2024-12-05T11:15:45.883Z] =================================================================================================================== 00:32:11.687 [2024-12-05T11:15:45.883Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:32:11.687 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:32:11.687 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:11.687 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:32:11.687 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # nvmftestfini 00:32:11.687 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # nvmfcleanup 00:32:11.687 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@99 -- # sync 00:32:11.687 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:32:11.687 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@102 -- # set +e 00:32:11.687 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@103 -- # for i in {1..20} 00:32:11.687 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:32:11.687 rmmod nvme_tcp 00:32:11.687 rmmod nvme_fabrics 00:32:11.687 rmmod nvme_keyring 00:32:11.687 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:32:11.687 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@106 -- # set -e 00:32:11.687 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@107 -- # return 0 00:32:11.687 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # '[' -n 248112 ']' 00:32:11.687 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@337 -- # killprocess 248112 00:32:11.687 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 248112 ']' 00:32:11.687 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 248112 00:32:11.687 
12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:32:11.687 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:11.687 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 248112 00:32:11.687 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:11.687 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:11.687 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 248112' 00:32:11.687 killing process with pid 248112 00:32:11.687 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 248112 00:32:11.687 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 248112 00:32:11.945 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:32:11.945 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # nvmf_fini 00:32:11.945 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@264 -- # local dev 00:32:11.945 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@267 -- # remove_target_ns 00:32:11.945 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:32:11.945 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:32:11.945 12:15:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_target_ns 00:32:13.850 12:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@268 -- # delete_main_bridge 00:32:13.850 12:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@130 -- # [[ -e 
/sys/class/net/nvmf_br/address ]] 00:32:13.850 12:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@130 -- # return 0 00:32:13.850 12:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:32:13.850 12:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:32:13.850 12:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:32:13.850 12:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:32:13.850 12:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:32:13.850 12:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:32:13.850 12:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:32:13.850 12:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:32:13.850 12:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:32:13.850 12:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:32:13.850 12:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:32:13.850 12:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:32:13.850 12:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:32:13.850 12:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:32:13.850 12:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:32:13.850 12:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:32:13.850 12:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/setup.sh@283 -- # reset_setup_interfaces 00:32:13.850 12:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@41 -- # _dev=0 00:32:13.850 12:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@41 -- # dev_map=() 00:32:13.850 12:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@284 -- # iptr 00:32:13.850 12:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@542 -- # iptables-save 00:32:13.850 12:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:32:13.850 12:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@542 -- # iptables-restore 00:32:13.850 00:32:13.850 real 0m11.332s 00:32:13.850 user 0m12.743s 00:32:13.850 sys 0m5.204s 00:32:13.850 12:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:13.850 12:15:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:13.850 ************************************ 00:32:13.850 END TEST nvmf_multicontroller 00:32:13.850 ************************************ 00:32:14.109 12:15:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@37 -- # [[ tcp == \r\d\m\a ]] 00:32:14.109 12:15:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:32:14.109 12:15:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # [[ 0 -eq 1 ]] 00:32:14.109 12:15:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:32:14.109 00:32:14.109 real 5m56.732s 00:32:14.109 user 10m39.898s 00:32:14.109 sys 1m59.060s 00:32:14.109 12:15:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:14.109 12:15:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.109 ************************************ 00:32:14.109 END TEST nvmf_host 00:32:14.109 ************************************ 00:32:14.109 12:15:48 nvmf_tcp -- nvmf/nvmf.sh@15 -- # [[ tcp = \t\c\p ]] 00:32:14.109 12:15:48 nvmf_tcp -- nvmf/nvmf.sh@15 -- 
# [[ 0 -eq 0 ]] 00:32:14.109 12:15:48 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:32:14.109 12:15:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:14.109 12:15:48 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:14.109 12:15:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:14.109 ************************************ 00:32:14.109 START TEST nvmf_target_core_interrupt_mode 00:32:14.109 ************************************ 00:32:14.109 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:32:14.109 * Looking for test storage... 00:32:14.109 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:32:14.109 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:14.109 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:32:14.109 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:14.109 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:14.109 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:14.109 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:14.109 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:14.109 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:32:14.109 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 
00:32:14.109 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:32:14.109 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:32:14.109 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:32:14.109 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:32:14.109 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:32:14.109 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:14.109 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:32:14.109 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:32:14.109 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:14.109 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:14.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.369 --rc genhtml_branch_coverage=1 00:32:14.369 --rc genhtml_function_coverage=1 00:32:14.369 --rc genhtml_legend=1 00:32:14.369 --rc geninfo_all_blocks=1 00:32:14.369 --rc geninfo_unexecuted_blocks=1 00:32:14.369 
00:32:14.369 ' 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:14.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.369 --rc genhtml_branch_coverage=1 00:32:14.369 --rc genhtml_function_coverage=1 00:32:14.369 --rc genhtml_legend=1 00:32:14.369 --rc geninfo_all_blocks=1 00:32:14.369 --rc geninfo_unexecuted_blocks=1 00:32:14.369 00:32:14.369 ' 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:14.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.369 --rc genhtml_branch_coverage=1 00:32:14.369 --rc genhtml_function_coverage=1 00:32:14.369 --rc genhtml_legend=1 00:32:14.369 --rc geninfo_all_blocks=1 00:32:14.369 --rc geninfo_unexecuted_blocks=1 00:32:14.369 00:32:14.369 ' 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:14.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.369 --rc genhtml_branch_coverage=1 00:32:14.369 --rc genhtml_function_coverage=1 00:32:14.369 --rc genhtml_legend=1 00:32:14.369 --rc geninfo_all_blocks=1 00:32:14.369 --rc geninfo_unexecuted_blocks=1 00:32:14.369 00:32:14.369 ' 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode 
-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@50 -- # : 0 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:32:14.369 12:15:48 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@54 -- # have_pci_nics=0 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:32:14.369 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@13 -- # TEST_ARGS=("$@") 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@15 -- # [[ 0 -eq 0 ]] 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:14.370 ************************************ 00:32:14.370 START TEST nvmf_abort 00:32:14.370 ************************************ 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:32:14.370 * Looking for test storage... 
00:32:14.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:32:14.370 12:15:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:14.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.370 --rc genhtml_branch_coverage=1 00:32:14.370 --rc genhtml_function_coverage=1 00:32:14.370 --rc genhtml_legend=1 00:32:14.370 --rc geninfo_all_blocks=1 00:32:14.370 --rc geninfo_unexecuted_blocks=1 00:32:14.370 00:32:14.370 ' 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:14.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.370 --rc genhtml_branch_coverage=1 00:32:14.370 --rc genhtml_function_coverage=1 00:32:14.370 --rc genhtml_legend=1 00:32:14.370 --rc geninfo_all_blocks=1 00:32:14.370 --rc geninfo_unexecuted_blocks=1 00:32:14.370 00:32:14.370 ' 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:14.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.370 --rc genhtml_branch_coverage=1 00:32:14.370 --rc genhtml_function_coverage=1 00:32:14.370 --rc genhtml_legend=1 00:32:14.370 --rc geninfo_all_blocks=1 00:32:14.370 --rc geninfo_unexecuted_blocks=1 00:32:14.370 00:32:14.370 ' 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:14.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.370 --rc genhtml_branch_coverage=1 00:32:14.370 --rc genhtml_function_coverage=1 00:32:14.370 --rc genhtml_legend=1 00:32:14.370 --rc geninfo_all_blocks=1 00:32:14.370 --rc geninfo_unexecuted_blocks=1 00:32:14.370 00:32:14.370 ' 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:14.370 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:32:14.629 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:14.629 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:14.629 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:14.629 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:32:14.629 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:32:14.629 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:14.629 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:14.629 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:32:14.629 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:14.629 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:14.629 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:14.629 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.630 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.630 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.630 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:32:14.630 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.630 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:32:14.630 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:32:14.630 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:32:14.630 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:32:14.630 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@50 -- # : 0 00:32:14.630 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:32:14.630 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:32:14.630 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:32:14.630 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:14.630 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:14.630 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 
00:32:14.630 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:32:14.630 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:32:14.630 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:32:14.630 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@54 -- # have_pci_nics=0 00:32:14.630 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:14.630 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:32:14.630 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:32:14.630 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:32:14.630 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:14.630 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@296 -- # prepare_net_devs 00:32:14.630 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # local -g is_hw=no 00:32:14.630 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@260 -- # remove_target_ns 00:32:14.630 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:32:14.630 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:32:14.630 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_target_ns 00:32:14.630 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:32:14.630 
12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:32:14.630 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # xtrace_disable 00:32:14.630 12:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@131 -- # pci_devs=() 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@131 -- # local -a pci_devs 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@132 -- # pci_net_devs=() 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@133 -- # pci_drivers=() 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@133 -- # local -A pci_drivers 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@135 -- # net_devs=() 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@135 -- # local -ga net_devs 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@136 -- # e810=() 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@136 -- # local -ga e810 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@137 -- # x722=() 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@137 -- # local -ga x722 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@138 -- # 
mlx=() 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@138 -- # local -ga mlx 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:32:21.199 
12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:21.199 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:21.199 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:32:21.199 12:15:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@234 -- # [[ up == up ]] 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:21.199 Found net devices under 0000:86:00.0: cvl_0_0 00:32:21.199 12:15:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@234 -- # [[ up == up ]] 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:21.199 Found net devices under 0000:86:00.1: cvl_0_1 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # is_hw=yes 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:32:21.199 12:15:54 
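The discovery loop above resolves each allow-listed e810 PCI address to its kernel net device by globbing sysfs, which is how 0000:86:00.0 and 0000:86:00.1 become cvl_0_0 and cvl_0_1. A minimal bash sketch of that lookup — the `pci_net_devs` function name mirrors the trace's array, but the `base` parameter is our addition so the glob logic can be exercised against any directory tree:

```shell
# Sketch of the pci_net_devs lookup from the trace: list the network
# interfaces sysfs exposes under a PCI address. 'base' is parameterized
# (it is /sys/bus/pci/devices in the real run) so the logic is testable
# anywhere. Caveat: with no match, bash leaves the literal glob pattern.
pci_net_devs() {
    local pci=$1 base=${2:-/sys/bus/pci/devices}
    local devs=("$base/$pci/net/"*)
    # Keep only the interface names, as the ##*/ expansion does above.
    printf '%s\n' "${devs[@]##*/}"
}
```

Against the hardware in this run, `pci_net_devs 0000:86:00.0` would print `cvl_0_0`.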
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@257 -- # create_target_ns 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 
00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@27 -- # local -gA dev_map 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@28 -- # local -g _dev 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@44 -- # ips=() 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:32:21.199 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772161 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@208 -- # ip 
addr add 10.0.0.1/24 dev cvl_0_0 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:32:21.200 10.0.0.1 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772162 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee 
/sys/class/net/cvl_0_1/ifalias' 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:32:21.200 10.0.0.2 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@79 -- # [[ phy 
== veth ]] 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@38 -- # ping_ips 1 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@165 -- # local 
dev=initiator0 in_ns= ip 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=initiator0 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@90 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:32:21.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:21.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.403 ms 00:32:21.200 00:32:21.200 --- 10.0.0.1 ping statistics --- 00:32:21.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:21.200 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev target0 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=target0 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:32:21.200 12:15:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:32:21.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:21.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:32:21.200 00:32:21.200 --- 10.0.0.2 ping statistics --- 00:32:21.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:21.200 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # (( pair++ )) 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@270 -- # return 0 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev initiator0 
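The addresses pinged above come from an integer pool: ip_pool=0x0a000001 (167772161) yields 10.0.0.1 for the initiator and the next value yields 10.0.0.2 for the target. A self-contained version of the val_to_ip conversion seen in the trace — the explicit bit-shifting formulation is ours, but it performs the same unpacking the traced printf does:

```shell
# Unpack a 32-bit integer into dotted-quad form, as val_to_ip does in
# nvmf/setup.sh: 167772161 == 0x0a000001 -> 10.0.0.1.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) \
        $((  val        & 0xff ))
}
```

`val_to_ip 167772162` prints 10.0.0.2, matching the address configured on cvl_0_1 inside the namespace.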
00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=initiator0 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:21.200 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:32:21.201 12:15:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=initiator1 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # return 1 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # dev= 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@169 -- # return 0 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev target0 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=target0 00:32:21.201 12:15:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:21.201 12:15:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev target1 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=target1 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # return 1 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # dev= 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@169 -- # return 0 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # 
timing_enter start_nvmf_tgt 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # nvmfpid=252209 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@329 -- # waitforlisten 252209 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 252209 ']' 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:21.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:21.201 [2024-12-05 12:15:54.664907] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:21.201 [2024-12-05 12:15:54.665823] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:32:21.201 [2024-12-05 12:15:54.665856] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:21.201 [2024-12-05 12:15:54.744391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:21.201 [2024-12-05 12:15:54.785327] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:21.201 [2024-12-05 12:15:54.785362] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:21.201 [2024-12-05 12:15:54.785373] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:21.201 [2024-12-05 12:15:54.785379] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:21.201 [2024-12-05 12:15:54.785383] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:21.201 [2024-12-05 12:15:54.786694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:21.201 [2024-12-05 12:15:54.786798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:21.201 [2024-12-05 12:15:54.786799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:21.201 [2024-12-05 12:15:54.853423] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:21.201 [2024-12-05 12:15:54.854243] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:21.201 [2024-12-05 12:15:54.854285] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:21.201 [2024-12-05 12:15:54.854461] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:21.201 [2024-12-05 12:15:54.919524] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:32:21.201 Malloc0 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:21.201 Delay0 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.201 12:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:21.201 [2024-12-05 12:15:55.007475] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:21.201 12:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.201 12:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:21.201 12:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.201 12:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:21.201 12:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.201 12:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:32:21.201 [2024-12-05 12:15:55.138094] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:32:23.101 Initializing NVMe Controllers 00:32:23.101 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:32:23.101 controller IO queue size 128 less than required 00:32:23.101 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:32:23.101 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:32:23.101 Initialization complete. Launching workers. 
00:32:23.101 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 38909 00:32:23.101 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 38966, failed to submit 66 00:32:23.101 success 38909, unsuccessful 57, failed 0 00:32:23.101 12:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:23.101 12:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.101 12:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:23.101 12:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.101 12:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:32:23.101 12:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:32:23.101 12:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@335 -- # nvmfcleanup 00:32:23.101 12:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@99 -- # sync 00:32:23.101 12:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:32:23.101 12:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@102 -- # set +e 00:32:23.101 12:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@103 -- # for i in {1..20} 00:32:23.102 12:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:32:23.102 rmmod nvme_tcp 00:32:23.102 rmmod nvme_fabrics 00:32:23.102 rmmod nvme_keyring 00:32:23.102 12:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:32:23.102 12:15:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@106 -- # set -e 00:32:23.102 12:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@107 -- # return 0 00:32:23.102 12:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # '[' -n 252209 ']' 00:32:23.102 12:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@337 -- # killprocess 252209 00:32:23.102 12:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 252209 ']' 00:32:23.102 12:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 252209 00:32:23.102 12:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:32:23.102 12:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:23.102 12:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 252209 00:32:23.102 12:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:23.102 12:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:23.102 12:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 252209' 00:32:23.102 killing process with pid 252209 00:32:23.102 12:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 252209 00:32:23.102 12:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 252209 00:32:23.361 12:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:32:23.361 12:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@342 -- # nvmf_fini 00:32:23.361 12:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@264 -- # local dev 00:32:23.361 12:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@267 -- # remove_target_ns 00:32:23.361 12:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:32:23.361 12:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:32:23.361 12:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_target_ns 00:32:25.898 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@268 -- # delete_main_bridge 00:32:25.898 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:32:25.898 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@130 -- # return 0 00:32:25.898 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:32:25.898 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:32:25.898 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:32:25.898 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:32:25.898 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:32:25.898 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:32:25.898 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:32:25.898 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:32:25.898 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:32:25.898 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:32:25.898 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:32:25.898 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:32:25.898 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:32:25.898 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:32:25.898 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:32:25.898 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:32:25.898 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:32:25.898 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@41 -- # _dev=0 00:32:25.898 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@41 -- # dev_map=() 00:32:25.898 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@284 -- # iptr 00:32:25.898 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@542 -- # iptables-save 00:32:25.898 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@542 -- # iptables-restore 00:32:25.899 00:32:25.899 real 0m11.173s 00:32:25.899 user 0m10.326s 00:32:25.899 sys 0m5.614s 00:32:25.899 12:15:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:25.899 ************************************ 00:32:25.899 END TEST nvmf_abort 00:32:25.899 ************************************ 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@17 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:25.899 ************************************ 00:32:25.899 START TEST nvmf_ns_hotplug_stress 00:32:25.899 ************************************ 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:32:25.899 * Looking for test storage... 
00:32:25.899 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:32:25.899 12:15:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:32:25.899 12:15:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:25.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.899 --rc genhtml_branch_coverage=1 00:32:25.899 --rc genhtml_function_coverage=1 00:32:25.899 --rc genhtml_legend=1 00:32:25.899 --rc geninfo_all_blocks=1 00:32:25.899 --rc geninfo_unexecuted_blocks=1 00:32:25.899 00:32:25.899 ' 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:25.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.899 --rc genhtml_branch_coverage=1 00:32:25.899 --rc genhtml_function_coverage=1 00:32:25.899 --rc genhtml_legend=1 00:32:25.899 --rc geninfo_all_blocks=1 00:32:25.899 --rc geninfo_unexecuted_blocks=1 00:32:25.899 00:32:25.899 ' 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:25.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.899 --rc genhtml_branch_coverage=1 00:32:25.899 --rc genhtml_function_coverage=1 00:32:25.899 --rc genhtml_legend=1 00:32:25.899 --rc geninfo_all_blocks=1 00:32:25.899 --rc geninfo_unexecuted_blocks=1 00:32:25.899 00:32:25.899 ' 00:32:25.899 12:15:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:25.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.899 --rc genhtml_branch_coverage=1 00:32:25.899 --rc genhtml_function_coverage=1 00:32:25.899 --rc genhtml_legend=1 00:32:25.899 --rc geninfo_all_blocks=1 00:32:25.899 --rc geninfo_unexecuted_blocks=1 00:32:25.899 00:32:25.899 ' 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.899 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.900 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.900 
12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:32:25.900 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.900 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:32:25.900 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:32:25.900 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:32:25.900 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:32:25.900 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@50 -- # : 0 00:32:25.900 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:32:25.900 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:32:25.900 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:32:25.900 12:15:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:25.900 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:25.900 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:32:25.900 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:32:25.900 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:32:25.900 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:32:25.900 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # have_pci_nics=0 00:32:25.900 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:25.900 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:32:25.900 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:32:25.900 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:25.900 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # prepare_net_devs 00:32:25.900 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # local -g is_hw=no 00:32:25.900 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # remove_target_ns 00:32:25.900 12:15:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:32:25.900 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:32:25.900 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:32:25.900 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:32:25.900 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:32:25.900 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # xtrace_disable 00:32:25.900 12:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:32.468 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:32.468 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@131 -- # pci_devs=() 00:32:32.468 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@131 -- # local -a pci_devs 00:32:32.468 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@132 -- # pci_net_devs=() 00:32:32.468 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:32:32.468 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@133 -- # pci_drivers=() 00:32:32.468 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@133 -- # local -A pci_drivers 00:32:32.468 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@135 -- # net_devs=() 00:32:32.468 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@135 -- # local -ga net_devs 00:32:32.468 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@136 -- # e810=() 00:32:32.468 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@136 -- # local -ga e810 00:32:32.468 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@137 -- # x722=() 00:32:32.468 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@137 -- # local -ga x722 00:32:32.468 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@138 -- # mlx=() 00:32:32.468 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@138 -- # local -ga mlx 00:32:32.468 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:32.468 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:32.468 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:32.468 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:32.468 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:32.468 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:32.468 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:32.468 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:32.468 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:32.468 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:32.468 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:32.468 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:32.468 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:32:32.468 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:32:32.468 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:32:32.468 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:32:32.468 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:32:32.468 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:32:32.468 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:32:32.468 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:32.468 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:32.469 12:16:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:32.469 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:32:32.469 12:16:05 
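The `gather_supported_nvmf_pci_devs` trace above builds per-NIC-family arrays (`e810`, `x722`, `mlx`) keyed by `vendor:device` PCI IDs, then matches each discovered device (here `0x8086 - 0x159b`, an E810 "ice" NIC) against them. A minimal standalone sketch of that classification; the helper name `classify_nic` is hypothetical, but the IDs are the ones registered in the trace:

```shell
#!/usr/bin/env bash
# Classify a PCI "vendor:device" ID into the NIC-family buckets that
# nvmf/common.sh builds (e810/x722/mlx). classify_nic is a hypothetical
# helper; only the vendor/device IDs below come from the trace.
classify_nic() {
    case "$1" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;     # Intel E810 (ice)
        0x8086:0x37d2)               echo x722 ;;     # Intel X722 (i40e)
        0x15b3:*)                    echo mlx ;;      # Mellanox families
        *)                           echo unknown ;;
    esac
}
```

The trace's `Found 0000:86:00.0 (0x8086 - 0x159b)` lines correspond to the `e810` branch, which is why `pci_devs` is then reset to `("${e810[@]}")`.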
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:32.469 Found net devices under 0000:86:00.0: cvl_0_0 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:32.469 Found net devices under 0000:86:00.1: cvl_0_1 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # is_hw=yes 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@257 -- # create_target_ns 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@142 -- # 
local ns=nvmf_ns_spdk 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@27 -- # local -gA dev_map 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@28 -- # local -g _dev 00:32:32.469 12:16:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # ips=() 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:32:32.469 12:16:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772161 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 
dev cvl_0_0' 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:32:32.469 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:32:32.469 10.0.0.1 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772162 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:32:32.470 12:16:05 
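The `val_to_ip` calls in the trace turn the decimal pool values `167772161` and `167772162` (`0x0a000001`, `0x0a000002`) into `10.0.0.1` and `10.0.0.2` before `ip addr add`. A self-contained sketch consistent with the `printf '%u.%u.%u.%u\n'` output shown above (the byte-shift arithmetic is an assumption; the full function body is not visible in the trace):

```shell
#!/usr/bin/env bash
# Convert a 32-bit integer into dotted-quad IPv4 notation, matching the
# val_to_ip output seen in the trace (167772161 -> 10.0.0.1). The
# shift/mask implementation is a sketch, not the verbatim setup.sh body.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >>  8) & 0xff )) \
        $((  val        & 0xff ))
}
```

This is why the trace's `ip_pool=0x0a000001` with `ip_pool += 2` per interface pair yields consecutive `10.0.0.x` initiator/target addresses.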
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:32:32.470 10.0.0.2 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:32.470 12:16:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@38 -- # ping_ips 1 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:32:32.470 12:16:05 
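Throughout the trace, helpers like `set_ip`, `set_up`, and `ping_ip` take an optional namespace argument and `eval` either a bare command or one prefixed with `ip netns exec nvmf_ns_spdk` (via the `NVMF_TARGET_NS_CMD` array). A sketch of that dispatch pattern; `in_ns_prefix` is a hypothetical name for logic the trace only shows inline:

```shell
#!/usr/bin/env bash
# Sketch of the optional-namespace command prefixing seen in the trace:
# with a namespace set, commands run under "ip netns exec <ns>"; without
# one, they run directly on the host. in_ns_prefix is hypothetical.
in_ns_prefix() {
    local in_ns=$1
    if [[ -n $in_ns ]]; then
        echo "ip netns exec $in_ns"
    else
        echo ""
    fi
}

# Example: build (but do not run) the two command shapes from the trace.
target_cmd="$(in_ns_prefix nvmf_ns_spdk) ip link set cvl_0_1 up"
host_cmd="$(in_ns_prefix '') ip link set cvl_0_0 up"
```

This mirrors why the target-side device `cvl_0_1` is always manipulated through `ip netns exec nvmf_ns_spdk ...` while the initiator-side `cvl_0_0` commands run bare.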
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=initiator0 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 
00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:32:32.470 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:32.470 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.424 ms 00:32:32.470 00:32:32.470 --- 10.0.0.1 ping statistics --- 00:32:32.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:32.470 rtt min/avg/max/mdev = 0.424/0.424/0.424/0.000 ms 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev target0 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=target0 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:32:32.470 12:16:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:32:32.470 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:32:32.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:32.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:32:32.471 00:32:32.471 --- 10.0.0.2 ping statistics --- 00:32:32.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:32.471 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # (( pair++ )) 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # return 0 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/setup.sh@166 -- # [[ -n '' ]] 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=initiator0 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:32:32.471 12:16:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=initiator1 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # return 1 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev= 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@169 -- # return 0 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=target0 
in_ns=NVMF_TARGET_NS_CMD ip 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev target0 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=target0 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:32.471 12:16:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev target1 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=target1 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # return 1 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev= 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@169 -- # return 0 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # 
NVMF_TRANSPORT_OPTS='-t tcp' 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # nvmfpid=256145 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # waitforlisten 256145 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 256145 ']' 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:32.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:32.471 12:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:32.471 [2024-12-05 12:16:05.932277] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:32.472 [2024-12-05 12:16:05.933192] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:32:32.472 [2024-12-05 12:16:05.933225] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:32.472 [2024-12-05 12:16:06.011123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:32.472 [2024-12-05 12:16:06.053385] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:32.472 [2024-12-05 12:16:06.053422] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:32.472 [2024-12-05 12:16:06.053429] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:32.472 [2024-12-05 12:16:06.053434] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:32:32.472 [2024-12-05 12:16:06.053439] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:32.472 [2024-12-05 12:16:06.054869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:32.472 [2024-12-05 12:16:06.054960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:32.472 [2024-12-05 12:16:06.054958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:32.472 [2024-12-05 12:16:06.123744] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:32.472 [2024-12-05 12:16:06.124475] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:32.472 [2024-12-05 12:16:06.124543] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:32.472 [2024-12-05 12:16:06.124701] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:32:32.472 12:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:32.472 12:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:32:32.472 12:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:32:32.472 12:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:32.472 12:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:32.472 12:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:32.472 12:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:32:32.472 12:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:32.472 [2024-12-05 12:16:06.367769] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:32.472 12:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:32.472 12:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:32.729 [2024-12-05 12:16:06.764225] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:32:32.729 12:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:32.987 12:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:32:32.987 Malloc0 00:32:33.245 12:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:33.245 Delay0 00:32:33.245 12:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:33.503 12:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:32:33.761 NULL1 00:32:33.761 12:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:32:33.761 12:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=256492 00:32:33.761 12:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 
512 -Q 1000 00:32:33.761 12:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 256492 00:32:33.761 12:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:35.131 Read completed with error (sct=0, sc=11) 00:32:35.131 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:35.131 12:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:35.131 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:35.131 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:35.131 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:35.131 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:35.131 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:35.388 12:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:32:35.388 12:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:32:35.388 true 00:32:35.388 12:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 256492 00:32:35.388 12:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:36.321 12:16:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:36.579 12:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:32:36.579 12:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:32:36.579 true 00:32:36.579 12:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 256492 00:32:36.579 12:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:36.837 12:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:37.096 12:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:32:37.096 12:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:32:37.354 true 00:32:37.354 12:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 256492 00:32:37.354 12:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:32:38.288 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:38.288 12:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:38.288 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:38.288 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:38.545 12:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:32:38.545 12:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:32:38.803 true 00:32:38.803 12:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 256492 00:32:38.803 12:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:39.061 12:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:39.061 12:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:32:39.061 12:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:32:39.318 true 00:32:39.318 12:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 
-- # kill -0 256492 00:32:39.318 12:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:40.691 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:40.691 12:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:40.691 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:40.691 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:40.691 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:40.691 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:40.691 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:40.691 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:40.691 12:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:32:40.691 12:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:32:40.948 true 00:32:40.948 12:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 256492 00:32:40.948 12:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:41.894 12:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:41.894 12:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:32:41.894 12:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:32:42.152 true 00:32:42.152 12:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 256492 00:32:42.152 12:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:42.410 12:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:42.410 12:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:32:42.410 12:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:32:42.668 true 00:32:42.668 12:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 256492 00:32:42.668 12:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:43.605 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:43.864 12:16:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:43.864 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:43.864 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:43.864 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:43.864 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:43.864 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:43.864 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:43.864 12:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:32:43.864 12:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:32:44.123 true 00:32:44.123 12:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 256492 00:32:44.123 12:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:45.057 12:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:45.057 12:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:32:45.057 12:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:32:45.316 true 00:32:45.316 12:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 256492 00:32:45.316 12:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:45.574 12:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:45.833 12:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:32:45.833 12:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:32:45.833 true 00:32:45.833 12:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 256492 00:32:45.833 12:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:47.208 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:47.208 12:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:47.208 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:47.208 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:32:47.208 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:47.208 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:47.208 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:47.208 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:47.466 12:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:32:47.466 12:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:32:47.466 true 00:32:47.466 12:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 256492 00:32:47.466 12:16:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:48.485 12:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:48.485 12:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:32:48.485 12:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:32:48.743 true 00:32:48.743 12:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 256492 00:32:48.743 12:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:49.002 12:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:49.261 12:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:32:49.261 12:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:32:49.261 true 00:32:49.261 12:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 256492 00:32:49.261 12:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:50.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:50.636 12:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:50.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:50.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:50.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:50.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:50.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:50.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:50.636 12:16:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:32:50.636 12:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:32:50.894 true 00:32:50.894 12:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 256492 00:32:50.894 12:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:51.826 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:51.826 12:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:51.826 12:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:32:51.826 12:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:32:52.084 true 00:32:52.084 12:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 256492 00:32:52.084 12:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:52.342 12:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:52.600 12:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:32:52.600 12:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:32:52.600 true 00:32:52.600 12:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 256492 00:32:52.600 12:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:53.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:53.975 12:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:53.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:53.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:53.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:53.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:53.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:53.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:53.975 12:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:32:53.975 12:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:32:54.234 true 00:32:54.234 12:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 256492 00:32:54.234 12:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:55.169 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:55.169 12:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:55.169 12:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:32:55.169 12:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:32:55.427 true 00:32:55.427 12:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 256492 00:32:55.427 12:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:55.686 12:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:55.944 12:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:32:55.944 12:16:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:32:55.944 true 00:32:55.944 12:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 256492 00:32:55.944 12:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:57.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:57.317 12:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:57.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:57.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:57.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:57.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:57.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:57.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:57.317 12:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:32:57.317 12:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:32:57.575 true 00:32:57.575 12:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 256492 00:32:57.575 12:16:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:58.510 12:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:58.769 12:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:32:58.769 12:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:32:58.769 true 00:32:58.769 12:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 256492 00:32:58.769 12:16:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:59.026 12:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:59.284 12:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:32:59.284 12:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:32:59.542 true 00:32:59.542 12:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 256492 
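The periodically suppressed `Read completed with error (sct=0, sc=11)` lines are expected noise here. Assuming the fields are printed in decimal, `sct=0` is the NVMe generic command status type and code 11 (0x0b) within it is "Invalid Namespace or Format", i.e. reads landing while the namespace is detached. A hypothetical decoder for just this pair (not SPDK's own):

```shell
# Decode the (sct, sc) pair from the suppressed messages. Assumption: the
# log prints both fields in decimal; only the one observed code is mapped.
decode_status() {
    local sct=$1 sc=$2
    if [ "$sct" -eq 0 ] && [ "$sc" -eq 11 ]; then
        # NVMe generic status 0x0b: Invalid Namespace or Format
        echo "Invalid Namespace or Format"
    else
        printf 'sct=%d sc=0x%02x\n' "$sct" "$sc"
    fi
}
status=$(decode_status 0 11)
echo "$status"
```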
00:32:59.542 12:16:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:00.477 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:00.477 12:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:00.734 12:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:33:00.734 12:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:33:00.734 true 00:33:00.992 12:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 256492 00:33:00.992 12:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:00.992 12:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:01.249 12:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:33:01.249 12:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:33:01.506 true 00:33:01.506 12:16:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 256492 00:33:01.506 12:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:02.879 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:02.879 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:02.879 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:02.879 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:02.879 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:02.879 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:02.879 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:02.879 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:02.879 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:33:02.879 12:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:33:03.138 true 00:33:03.139 12:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 256492 00:33:03.139 12:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:03.730 12:16:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:33:03.988 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:33:03.988 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:33:04.247 Initializing NVMe Controllers
00:33:04.247 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:33:04.247 Controller IO queue size 128, less than required.
00:33:04.247 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:33:04.247 Controller IO queue size 128, less than required.
00:33:04.247 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:33:04.247 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:33:04.247 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:33:04.247 Initialization complete. Launching workers.
00:33:04.247 ========================================================
00:33:04.247 Latency(us)
00:33:04.247 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:33:04.247 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2132.18       1.04   41439.58    2719.82 1022714.62
00:33:04.247 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   18409.65       8.99    6952.45    1587.55  369467.18
00:33:04.247 ========================================================
00:33:04.247 Total                                                                  :   20541.83      10.03   10532.11    1587.55 1022714.62
00:33:04.247
00:33:04.247 true
00:33:04.247 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 256492
00:33:04.247 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (256492) - No such process
00:33:04.247 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 256492
00:33:04.247 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:04.506 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:33:04.765 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:33:04.765 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:33:04.765 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:33:04.765 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:04.765 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:33:04.765 null0 00:33:04.765 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:04.765 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:04.765 12:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:33:05.023 null1 00:33:05.023 12:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:05.023 12:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:05.023 12:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:33:05.281 null2 00:33:05.281 12:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:05.281 12:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:05.281 12:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:33:05.281 null3 00:33:05.281 12:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 
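This phase of the script creates eight 100 MiB null bdevs (`null0` through `null7`) and forks one `add_remove` worker per namespace, collecting the worker PIDs to `wait` on. A sketch of that structure, again with a hypothetical `rpc` stub in place of `scripts/rpc.py`:

```shell
#!/usr/bin/env bash
# Sketch of the parallel add/remove phase of ns_hotplug_stress.sh
# (assumption: "rpc" is a no-op stub for scripts/rpc.py).
rpc() { :; }
nthreads=8
pids=()

add_remove() {            # mirrors add_remove() in the script: 10 add/remove cycles
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}

for ((i = 0; i < nthreads; i++)); do
    rpc bdev_null_create "null$i" 100 4096
done
for ((i = 0; i < nthreads; i++)); do
    add_remove "$((i + 1))" "null$i" &   # one worker per namespace ID
    pids+=($!)
done
wait "${pids[@]}"
echo "workers joined: ${#pids[@]}"
```

Because the eight workers race against each other through the same RPC socket, the interleaved xtrace output that follows is expected to be out of order.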
00:33:05.281 12:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:05.281 12:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:33:05.540 null4 00:33:05.540 12:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:05.540 12:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:05.540 12:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:33:05.799 null5 00:33:05.799 12:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:05.799 12:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:05.799 12:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:33:05.799 null6 00:33:06.058 12:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:06.058 12:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:06.058 12:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:33:06.058 null7 00:33:06.058 12:16:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:06.058 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:06.058 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:33:06.058 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:06.058 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:33:06.058 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:06.058 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:33:06.058 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:06.058 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:33:06.058 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:06.058 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 261702 261704 261708 261711 261714 261717 261720 261723 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:06.059 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:06.318 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:06.318 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:06.318 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:06.318 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:06.318 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:06.318 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:06.318 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:06.318 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:06.577 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:06.577 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:06.577 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:06.577 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:06.577 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:06.577 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:06.577 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:06.577 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:06.577 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:06.577 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:06.577 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:06.577 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:06.577 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:33:06.577 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:06.577 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:06.577 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:06.577 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:06.577 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:06.577 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:06.577 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:06.577 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:06.578 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:06.578 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:06.578 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:06.838 12:16:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:06.838 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:06.838 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:06.838 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:06.838 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:06.838 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:06.838 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:06.838 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:06.838 12:16:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:06.838 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:06.838 12:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:06.838 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:06.838 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:06.838 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:06.838 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:06.838 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:06.838 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:06.838 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:06.838 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:06.838 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:06.838 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:06.838 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:06.838 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:06.838 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:06.838 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:06.838 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:07.097 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:07.097 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:07.097 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:07.098 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:07.098 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:07.098 12:16:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:07.098 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:07.098 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:07.098 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:07.098 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:07.098 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:07.098 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:07.098 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:07.098 12:16:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:07.357 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:07.357 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:07.357 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:07.357 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:07.357 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:07.357 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:07.357 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:07.357 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:07.357 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:07.357 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:07.357 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:07.357 12:16:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:07.357 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:07.357 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:07.357 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:07.357 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:07.357 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:07.357 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:07.357 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:07.357 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:07.357 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:07.357 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:07.357 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:07.357 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:07.616 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:07.616 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:07.616 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:07.616 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:07.616 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:07.616 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:07.616 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:07.617 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:07.876 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:07.876 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:07.876 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:07.876 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:07.876 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:07.876 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:07.876 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:07.876 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:07.876 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:07.876 12:16:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:07.876 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:07.876 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:07.876 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:07.876 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:07.876 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:07.876 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:07.876 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:07.876 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:07.876 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:07.876 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:07.876 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:07.876 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:07.876 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:07.876 12:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:07.876 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:07.876 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:07.876 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:07.876 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:07.876 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:07.876 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:07.876 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:07.876 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:08.135 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.135 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.135 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:08.135 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.135 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.135 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:08.136 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.136 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.136 12:16:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.136 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:08.136 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.136 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:08.136 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.136 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.136 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:08.136 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.136 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.136 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:08.136 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.136 12:16:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.136 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:08.136 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.136 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.136 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:08.395 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:08.395 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:08.395 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:08.395 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:08.395 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:08.395 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:08.395 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:08.395 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:08.654 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.654 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.654 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:08.654 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.654 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.654 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:08.654 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.654 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.654 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:08.654 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.654 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.654 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:08.654 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.654 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.654 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:08.654 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.654 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.654 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 
null3 00:33:08.655 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.655 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.655 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:08.655 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.655 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.655 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:08.914 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:08.914 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:08.914 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:08.914 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:33:08.914 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:08.914 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:08.914 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:08.914 12:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:08.914 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.914 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.914 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:08.914 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.914 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.914 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:08.914 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.914 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.914 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:08.914 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.914 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.914 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:08.914 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.914 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.914 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:08.914 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.914 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.914 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:08.914 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.914 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.914 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:08.914 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.914 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.914 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:09.173 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:09.173 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:09.173 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:09.173 12:16:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:09.173 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:09.173 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:09.173 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:09.173 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:09.433 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:09.433 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:09.433 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:09.433 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:09.434 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:33:09.434 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:09.434 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:09.434 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:09.434 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:09.434 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:09.434 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:09.434 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:09.434 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:09.434 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:09.434 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:09.434 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:09.434 12:16:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:09.434 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:09.434 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:09.434 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:09.434 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:09.434 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:09.434 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:09.434 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:09.693 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:09.693 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:09.693 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:09.693 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:09.693 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:09.693 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:09.693 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:09.693 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:09.693 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:09.693 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:09.693 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:09.952 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:09.952 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:09.952 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:09.952 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:09.952 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:09.952 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:09.952 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:09.952 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:09.952 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:09.952 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:09.952 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:09.952 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:09.952 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:09.952 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:09.952 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:09.952 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:09.952 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:09.952 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:09.952 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:09.952 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:09.952 12:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:09.952 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:09.952 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:09.952 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:09.952 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:09.952 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:09.952 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:09.952 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:09.952 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:10.211 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:10.211 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:10.211 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:33:10.211 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:10.211 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:10.211 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:10.211 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:10.211 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:10.211 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:10.211 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:10.211 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:10.211 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:10.211 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:10.211 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:10.211 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:10.211 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:10.211 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:33:10.211 12:16:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:33:10.211 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # nvmfcleanup 00:33:10.211 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@99 -- # sync 00:33:10.211 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:33:10.211 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # set +e 00:33:10.211 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # for i in {1..20} 00:33:10.211 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:33:10.211 rmmod nvme_tcp 00:33:10.211 rmmod nvme_fabrics 00:33:10.211 rmmod nvme_keyring 00:33:10.211 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:33:10.211 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # set -e 00:33:10.211 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # return 0 00:33:10.211 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # '[' -n 256145 ']' 00:33:10.211 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@337 -- # killprocess 256145 00:33:10.211 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 256145 ']' 00:33:10.211 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 256145 00:33:10.211 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@959 -- # uname 00:33:10.211 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:10.211 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 256145 00:33:10.471 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:10.471 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:10.471 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 256145' 00:33:10.471 killing process with pid 256145 00:33:10.471 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 256145 00:33:10.471 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 256145 00:33:10.471 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:33:10.471 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # nvmf_fini 00:33:10.471 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@264 -- # local dev 00:33:10.471 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@267 -- # remove_target_ns 00:33:10.471 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:10.471 12:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:10.471 12:16:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:13.005 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@268 -- # delete_main_bridge 00:33:13.005 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:33:13.005 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@130 -- # return 0 00:33:13.005 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:33:13.005 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:33:13.005 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:33:13.005 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:33:13.005 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:33:13.005 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:33:13.005 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:33:13.005 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:33:13.005 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:33:13.005 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:33:13.005 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:33:13.005 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:33:13.005 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:33:13.005 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:33:13.005 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:33:13.005 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:33:13.005 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:33:13.005 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@41 -- # _dev=0 00:33:13.005 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@41 -- # dev_map=() 00:33:13.005 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@284 -- # iptr 00:33:13.005 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@542 -- # iptables-save 00:33:13.005 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:33:13.005 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@542 -- # iptables-restore 00:33:13.005 00:33:13.005 real 0m47.067s 00:33:13.005 user 2m54.889s 00:33:13.005 sys 0m19.714s 00:33:13.005 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:13.005 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:33:13.005 
************************************ 00:33:13.005 END TEST nvmf_ns_hotplug_stress 00:33:13.005 ************************************ 00:33:13.005 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:33:13.005 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:13.005 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:13.005 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:13.005 ************************************ 00:33:13.005 START TEST nvmf_delete_subsystem 00:33:13.005 ************************************ 00:33:13.005 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:33:13.005 * Looking for test storage... 
00:33:13.005 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:13.005 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:13.005 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:33:13.005 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:13.005 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:13.005 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:13.005 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:13.005 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:13.005 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:33:13.005 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:33:13.005 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:33:13.006 12:16:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:33:13.006 12:16:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:13.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:13.006 --rc genhtml_branch_coverage=1 00:33:13.006 --rc genhtml_function_coverage=1 00:33:13.006 --rc genhtml_legend=1 00:33:13.006 --rc geninfo_all_blocks=1 00:33:13.006 --rc geninfo_unexecuted_blocks=1 00:33:13.006 00:33:13.006 ' 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:13.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:13.006 --rc genhtml_branch_coverage=1 00:33:13.006 --rc genhtml_function_coverage=1 00:33:13.006 --rc genhtml_legend=1 00:33:13.006 --rc geninfo_all_blocks=1 00:33:13.006 --rc geninfo_unexecuted_blocks=1 00:33:13.006 00:33:13.006 ' 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:13.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:13.006 --rc genhtml_branch_coverage=1 00:33:13.006 --rc genhtml_function_coverage=1 00:33:13.006 --rc genhtml_legend=1 00:33:13.006 --rc geninfo_all_blocks=1 00:33:13.006 --rc geninfo_unexecuted_blocks=1 00:33:13.006 00:33:13.006 ' 00:33:13.006 12:16:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:13.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:13.006 --rc genhtml_branch_coverage=1 00:33:13.006 --rc genhtml_function_coverage=1 00:33:13.006 --rc genhtml_legend=1 00:33:13.006 --rc geninfo_all_blocks=1 00:33:13.006 --rc geninfo_unexecuted_blocks=1 00:33:13.006 00:33:13.006 ' 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:13.006 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:13.007 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:13.007 
12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:33:13.007 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:13.007 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:33:13.007 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:33:13.007 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:13.007 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:33:13.007 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@50 -- # : 0 00:33:13.007 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:33:13.007 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:33:13.007 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:33:13.007 12:16:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:13.007 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:13.007 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:33:13.007 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:33:13.007 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:33:13.007 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:33:13.007 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # have_pci_nics=0 00:33:13.007 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:33:13.007 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:33:13.007 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:13.007 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # prepare_net_devs 00:33:13.007 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # local -g is_hw=no 00:33:13.007 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # remove_target_ns 00:33:13.007 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:13.007 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval 
'_remove_target_ns 15> /dev/null' 00:33:13.007 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:13.007 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:33:13.007 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:33:13.007 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # xtrace_disable 00:33:13.007 12:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:19.579 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:19.579 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@131 -- # pci_devs=() 00:33:19.579 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@131 -- # local -a pci_devs 00:33:19.579 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@132 -- # pci_net_devs=() 00:33:19.579 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:33:19.579 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@133 -- # pci_drivers=() 00:33:19.579 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@133 -- # local -A pci_drivers 00:33:19.579 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@135 -- # net_devs=() 00:33:19.579 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@135 -- # local -ga net_devs 00:33:19.579 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@136 
-- # e810=() 00:33:19.579 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@136 -- # local -ga e810 00:33:19.579 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@137 -- # x722=() 00:33:19.579 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@137 -- # local -ga x722 00:33:19.579 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@138 -- # mlx=() 00:33:19.579 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@138 -- # local -ga mlx 00:33:19.579 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:19.579 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:19.579 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:19.579 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:19.579 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:19.579 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:19.579 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:19.579 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:19.579 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:19.579 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:19.579 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:19.579 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:19.579 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:33:19.579 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:33:19.579 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:33:19.579 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:33:19.579 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:33:19.579 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:33:19.579 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:33:19.579 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:19.579 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:19.579 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:33:19.579 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:33:19.579 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:19.579 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:19.579 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:19.580 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # [[ up == up ]] 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:19.580 Found net devices under 0000:86:00.0: cvl_0_0 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # [[ up == up ]] 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:33:19.580 12:16:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:19.580 Found net devices under 0000:86:00.1: cvl_0_1 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # is_hw=yes 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@257 -- # create_target_ns 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:33:19.580 12:16:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@27 -- # local -gA dev_map 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@28 -- # local -g _dev 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- 
# (( _dev < max + no )) 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # ips=() 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:33:19.580 12:16:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772161 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:33:19.580 12:16:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:33:19.580 10.0.0.1 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772162 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:33:19.580 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:33:19.581 12:16:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:33:19.581 10.0.0.2 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 
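The `set_ip` calls traced above each run `val_to_ip`, which turns the packed integer from the IP pool (167772161, 167772162) into dotted-quad form before `ip addr add`. A minimal sketch of that conversion, with the bit-shift arithmetic inferred from the `printf '%u.%u.%u.%u\n' 10 0 0 1` output shown in the trace (the real `nvmf/setup.sh` implementation may differ in detail):

```shell
# Sketch of val_to_ip: split a 32-bit integer into four octets.
# The shift/mask arithmetic is an assumption matching the printf
# arguments visible in the trace above.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $((val >> 24)) \
    $(((val >> 16) & 0xff)) \
    $(((val >> 8) & 0xff)) \
    $((val & 0xff))
}

val_to_ip 167772161   # 10.0.0.1 (0x0A000001)
val_to_ip 167772162   # 10.0.0.2
```

This is why consecutive pool values land on adjacent addresses in the same /24: the initiator gets the even offset and the target the odd one, as seen with cvl_0_0 → 10.0.0.1 and cvl_0_1 → 10.0.0.2.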
00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@38 -- # ping_ips 1 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@187 -- # 
get_initiator_ip_address 0 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=initiator0 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@99 -- # ping_ip 
10.0.0.1 NVMF_TARGET_NS_CMD 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:33:19.581 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:19.581 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.389 ms 00:33:19.581 00:33:19.581 --- 10.0.0.1 ping statistics --- 00:33:19.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:19.581 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev target0 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=target0 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:33:19.581 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:19.581 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:33:19.581 00:33:19.581 --- 10.0.0.2 ping statistics --- 00:33:19.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:19.581 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # (( pair++ )) 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # return 0 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@183 -- # get_ip_address initiator0 
00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=initiator0 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:33:19.581 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 
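The `get_ip_address` path traced here resolves a logical name (`initiator0`, `target0`) to a physical device through `dev_map`, then reads the address back from the `ifalias` file that `set_ip` wrote earlier. A self-contained sketch of that lookup, using a temporary directory in place of `/sys/class/net` (the stand-in tree and simplified `dev_map` handling are assumptions for illustration):

```shell
# Sketch of the dev_map + ifalias lookup seen in the trace.
# A temp dir stands in for /sys/class/net so the sketch runs anywhere.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/cvl_0_0"
echo 10.0.0.1 > "$sysfs/cvl_0_0/ifalias"

declare -A dev_map=([initiator0]=cvl_0_0)

get_ip_address() {
  local dev=${dev_map[$1]:-}
  # Mirrors the trace: unmapped names (e.g. initiator1) yield no device
  # and an empty result rather than an error.
  [[ -n $dev ]] || return 0
  cat "$sysfs/$dev/ifalias"
}

get_ip_address initiator0   # 10.0.0.1
get_ip_address initiator1   # empty: no second pair was configured
```

This also explains the `NVMF_SECOND_INITIATOR_IP=` / `NVMF_SECOND_TARGET_IP=` lines later in the trace: only one interface pair exists, so lookups for `initiator1` and `target1` return empty without failing the run.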
00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=initiator1 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # return 1 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev= 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@169 -- # return 0 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 
00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev target0 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=target0 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:33:19.582 12:16:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev target1 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=target1 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:33:19.582 12:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # return 1 00:33:19.582 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev= 00:33:19.582 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@169 -- # return 0 00:33:19.582 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@338 -- # 
NVMF_SECOND_TARGET_IP= 00:33:19.582 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:19.582 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:33:19.582 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:33:19.582 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:19.582 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:33:19.582 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:33:19.582 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:33:19.582 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:33:19.582 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:19.582 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:19.582 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # nvmfpid=266015 00:33:19.582 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # waitforlisten 266015 00:33:19.582 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:33:19.582 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@835 -- # '[' -z 266015 ']' 00:33:19.582 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:19.582 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:19.582 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:19.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:19.582 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:19.582 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:19.582 [2024-12-05 12:16:53.097074] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:19.582 [2024-12-05 12:16:53.097990] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:33:19.582 [2024-12-05 12:16:53.098023] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:19.582 [2024-12-05 12:16:53.177084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:19.582 [2024-12-05 12:16:53.217652] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:19.582 [2024-12-05 12:16:53.217688] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:33:19.582 [2024-12-05 12:16:53.217695] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:19.582 [2024-12-05 12:16:53.217701] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:19.582 [2024-12-05 12:16:53.217706] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:19.582 [2024-12-05 12:16:53.218940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:19.582 [2024-12-05 12:16:53.218941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:19.582 [2024-12-05 12:16:53.286887] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:19.582 [2024-12-05 12:16:53.287466] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:19.582 [2024-12-05 12:16:53.287590] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:33:19.582 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:19.583 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:33:19.583 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:33:19.583 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:19.583 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:19.583 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:19.583 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:19.583 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.583 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:19.583 [2024-12-05 12:16:53.355811] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:19.583 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.583 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:19.583 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.583 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:33:19.583 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.583 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:19.583 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.583 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:19.583 [2024-12-05 12:16:53.384088] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:19.583 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.583 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:33:19.583 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.583 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:19.583 NULL1 00:33:19.583 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.583 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:19.583 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.583 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:33:19.583 Delay0 00:33:19.583 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.583 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:19.583 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.583 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:19.583 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.583 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=266228 00:33:19.583 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:33:19.583 12:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:33:19.583 [2024-12-05 12:16:53.496839] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
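The xtrace output above corresponds to a straightforward RPC setup sequence. The sketch below lists the same commands in plain form (command names and arguments are taken from the log itself; the `rpc.py` path is an assumption, and a running SPDK nvmf target is required, so this is illustrative rather than directly runnable here):

```shell
# Sketch of the target setup exercised above (commands as seen in the log;
# assumes a running SPDK nvmf target and the standard scripts/rpc.py helper).
rpc=./scripts/rpc.py   # path is an assumption; adjust to your SPDK checkout
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512          # 1000 MiB null bdev, 512-byte blocks
$rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
```

The delay bdev on top of the null bdev keeps I/O in flight long enough that the subsequent `nvmf_delete_subsystem` races against active perf traffic, which is what produces the expected `sc=8` completion errors later in the log.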
00:33:21.487 12:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:21.487 12:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.487 12:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 starting I/O failed: -6 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 starting I/O failed: -6 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 starting I/O failed: -6 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 starting I/O failed: -6 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 starting I/O failed: -6 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 starting I/O failed: -6 00:33:21.487 Read completed with error (sct=0, 
sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 starting I/O failed: -6 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 starting I/O failed: -6 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 starting I/O failed: -6 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 starting I/O failed: -6 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 [2024-12-05 12:16:55.667981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f64a0 is same with the state(6) to be set 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read 
completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error 
(sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 [2024-12-05 12:16:55.668595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f6860 is same with the state(6) to be set 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 starting I/O failed: -6 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 starting I/O failed: -6 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 starting I/O failed: -6 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 starting I/O failed: -6 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 starting I/O failed: -6 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 starting I/O failed: -6 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 starting I/O failed: -6 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 
Write completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 starting I/O failed: -6 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 starting I/O failed: -6 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 starting I/O failed: -6 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 starting I/O failed: -6 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 [2024-12-05 12:16:55.668907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2ccc000c40 is same with the state(6) to be set 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Write completed with 
error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Write completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.487 Read completed with error (sct=0, sc=8) 00:33:21.488 Read completed with error (sct=0, sc=8) 00:33:21.488 Read completed with error (sct=0, sc=8) 00:33:21.488 Read completed with error (sct=0, sc=8) 00:33:21.488 Write completed with error (sct=0, sc=8) 00:33:21.488 Write completed with error (sct=0, sc=8) 00:33:21.488 Read completed with error (sct=0, sc=8) 00:33:21.488 Read completed with error (sct=0, sc=8) 00:33:21.488 Read completed with error (sct=0, sc=8) 00:33:21.488 Write completed with error (sct=0, sc=8) 00:33:21.488 Read completed with error (sct=0, sc=8) 00:33:21.488 Read completed with error (sct=0, sc=8) 00:33:21.488 Write completed with error (sct=0, sc=8) 00:33:21.488 Write completed with error (sct=0, sc=8) 00:33:21.488 Read completed with error (sct=0, sc=8) 00:33:21.488 Read completed with error (sct=0, sc=8) 00:33:21.488 Write completed with error (sct=0, sc=8) 00:33:21.488 Read completed with error (sct=0, sc=8) 00:33:21.488 Read completed with error (sct=0, sc=8) 00:33:21.488 Read completed with error (sct=0, sc=8) 00:33:21.488 Read completed with error (sct=0, sc=8) 00:33:21.488 Read completed with error (sct=0, sc=8) 00:33:21.488 Read completed with error (sct=0, sc=8) 00:33:21.488 Write completed with error (sct=0, sc=8) 00:33:21.488 Write completed with error (sct=0, sc=8) 00:33:21.488 Read completed with error (sct=0, sc=8) 00:33:21.488 Read completed with error (sct=0, sc=8) 00:33:22.864 [2024-12-05 12:16:56.632532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x15f79b0 is same with the state(6) to be set 00:33:22.864 Read completed with error (sct=0, sc=8) 00:33:22.864 Read completed with error (sct=0, sc=8) 00:33:22.864 Read completed with error (sct=0, sc=8) 00:33:22.864 Write completed with error (sct=0, sc=8) 00:33:22.864 Read completed with error (sct=0, sc=8) 00:33:22.864 Read completed with error (sct=0, sc=8) 00:33:22.864 Read completed with error (sct=0, sc=8) 00:33:22.864 Read completed with error (sct=0, sc=8) 00:33:22.864 Write completed with error (sct=0, sc=8) 00:33:22.864 Read completed with error (sct=0, sc=8) 00:33:22.864 Write completed with error (sct=0, sc=8) 00:33:22.864 Write completed with error (sct=0, sc=8) 00:33:22.864 Read completed with error (sct=0, sc=8) 00:33:22.864 Read completed with error (sct=0, sc=8) 00:33:22.864 Read completed with error (sct=0, sc=8) 00:33:22.864 Read completed with error (sct=0, sc=8) 00:33:22.864 Write completed with error (sct=0, sc=8) 00:33:22.864 Write completed with error (sct=0, sc=8) 00:33:22.864 Read completed with error (sct=0, sc=8) 00:33:22.864 Read completed with error (sct=0, sc=8) 00:33:22.864 Read completed with error (sct=0, sc=8) 00:33:22.864 [2024-12-05 12:16:56.671855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2ccc00d020 is same with the state(6) to be set 00:33:22.864 Read completed with error (sct=0, sc=8) 00:33:22.864 Write completed with error (sct=0, sc=8) 00:33:22.864 Read completed with error (sct=0, sc=8) 00:33:22.864 Read completed with error (sct=0, sc=8) 00:33:22.864 Read completed with error (sct=0, sc=8) 00:33:22.864 Read completed with error (sct=0, sc=8) 00:33:22.864 Write completed with error (sct=0, sc=8) 00:33:22.864 Read completed with error (sct=0, sc=8) 00:33:22.864 Read completed with error (sct=0, sc=8) 00:33:22.864 Read completed with error (sct=0, sc=8) 00:33:22.864 Read completed with error (sct=0, sc=8) 00:33:22.864 Write completed with error (sct=0, sc=8) 
00:33:22.864 Write completed with error (sct=0, sc=8) 00:33:22.864 Read completed with error (sct=0, sc=8) 00:33:22.864 Read completed with error (sct=0, sc=8) 00:33:22.864 Read completed with error (sct=0, sc=8) 00:33:22.864 Read completed with error (sct=0, sc=8) 00:33:22.864 Read completed with error (sct=0, sc=8) 00:33:22.864 Read completed with error (sct=0, sc=8) 00:33:22.864 Write completed with error (sct=0, sc=8) 00:33:22.864 Read completed with error (sct=0, sc=8) 00:33:22.864 Read completed with error (sct=0, sc=8) 00:33:22.864 [2024-12-05 12:16:56.672169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f62c0 is same with the state(6) to be set 00:33:22.864 Read completed with error (sct=0, sc=8) 00:33:22.864 Read completed with error (sct=0, sc=8) 00:33:22.864 Write completed with error (sct=0, sc=8) 00:33:22.864 Write completed with error (sct=0, sc=8) 00:33:22.864 Read completed with error (sct=0, sc=8) 00:33:22.864 Read completed with error (sct=0, sc=8) 00:33:22.864 Write completed with error (sct=0, sc=8) 00:33:22.864 Read completed with error (sct=0, sc=8) 00:33:22.864 Write completed with error (sct=0, sc=8) 00:33:22.865 Read completed with error (sct=0, sc=8) 00:33:22.865 Write completed with error (sct=0, sc=8) 00:33:22.865 Read completed with error (sct=0, sc=8) 00:33:22.865 Read completed with error (sct=0, sc=8) 00:33:22.865 Read completed with error (sct=0, sc=8) 00:33:22.865 Write completed with error (sct=0, sc=8) 00:33:22.865 Read completed with error (sct=0, sc=8) 00:33:22.865 Read completed with error (sct=0, sc=8) 00:33:22.865 Write completed with error (sct=0, sc=8) 00:33:22.865 Read completed with error (sct=0, sc=8) 00:33:22.865 Write completed with error (sct=0, sc=8) 00:33:22.865 Read completed with error (sct=0, sc=8) 00:33:22.865 [2024-12-05 12:16:56.672279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f6680 is same with the state(6) to be set 
00:33:22.865 Read completed with error (sct=0, sc=8) 00:33:22.865 Write completed with error (sct=0, sc=8) 00:33:22.865 Read completed with error (sct=0, sc=8) 00:33:22.865 Write completed with error (sct=0, sc=8) 00:33:22.865 Read completed with error (sct=0, sc=8) 00:33:22.865 Read completed with error (sct=0, sc=8) 00:33:22.865 Read completed with error (sct=0, sc=8) 00:33:22.865 Write completed with error (sct=0, sc=8) 00:33:22.865 Read completed with error (sct=0, sc=8) 00:33:22.865 Read completed with error (sct=0, sc=8) 00:33:22.865 Read completed with error (sct=0, sc=8) 00:33:22.865 Read completed with error (sct=0, sc=8) 00:33:22.865 Read completed with error (sct=0, sc=8) 00:33:22.865 Write completed with error (sct=0, sc=8) 00:33:22.865 Read completed with error (sct=0, sc=8) 00:33:22.865 Read completed with error (sct=0, sc=8) 00:33:22.865 Write completed with error (sct=0, sc=8) 00:33:22.865 Read completed with error (sct=0, sc=8) 00:33:22.865 Read completed with error (sct=0, sc=8) 00:33:22.865 Read completed with error (sct=0, sc=8) 00:33:22.865 [2024-12-05 12:16:56.672959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2ccc00d7e0 is same with the state(6) to be set 00:33:22.865 Initializing NVMe Controllers 00:33:22.865 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:22.865 Controller IO queue size 128, less than required. 00:33:22.865 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:22.865 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:33:22.865 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:33:22.865 Initialization complete. Launching workers. 
00:33:22.865 ======================================================== 00:33:22.865 Latency(us) 00:33:22.865 Device Information : IOPS MiB/s Average min max 00:33:22.865 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 164.38 0.08 908351.47 565.89 1011753.63 00:33:22.865 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 162.39 0.08 914514.25 255.60 1042421.72 00:33:22.865 ======================================================== 00:33:22.865 Total : 326.78 0.16 911414.13 255.60 1042421.72 00:33:22.865 00:33:22.865 [2024-12-05 12:16:56.673539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f79b0 (9): Bad file descriptor 00:33:22.865 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:33:22.865 12:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.865 12:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:33:22.865 12:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 266228 00:33:22.865 12:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:33:23.123 12:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:33:23.123 12:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 266228 00:33:23.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (266228) - No such process 00:33:23.123 12:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 266228 00:33:23.123 12:16:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:33:23.123 12:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 266228 00:33:23.123 12:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:33:23.123 12:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:23.123 12:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:33:23.123 12:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:23.123 12:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 266228 00:33:23.123 12:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:33:23.123 12:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:23.123 12:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:23.123 12:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:23.123 12:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:23.123 12:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.123 12:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:33:23.123 12:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.123 12:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:23.123 12:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.123 12:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:23.123 [2024-12-05 12:16:57.203957] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:23.123 12:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.123 12:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:23.123 12:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.123 12:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:23.123 12:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.123 12:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=266716 00:33:23.123 12:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:33:23.123 12:16:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:33:23.123 12:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 266716 00:33:23.123 12:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:23.123 [2024-12-05 12:16:57.286599] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:33:23.686 12:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:23.687 12:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 266716 00:33:23.687 12:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:24.253 12:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:24.253 12:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 266716 00:33:24.253 12:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:24.817 12:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:24.818 12:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 266716 00:33:24.818 12:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:25.074 12:16:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:25.074 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 266716 00:33:25.075 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:25.638 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:25.638 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 266716 00:33:25.638 12:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:26.205 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:26.205 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 266716 00:33:26.205 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:26.464 Initializing NVMe Controllers 00:33:26.464 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:26.464 Controller IO queue size 128, less than required. 00:33:26.464 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:26.464 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:33:26.464 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:33:26.464 Initialization complete. Launching workers. 
00:33:26.464 ======================================================== 00:33:26.464 Latency(us) 00:33:26.464 Device Information : IOPS MiB/s Average min max 00:33:26.464 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002131.46 1000151.46 1006016.37 00:33:26.464 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003780.93 1000312.72 1009726.49 00:33:26.464 ======================================================== 00:33:26.464 Total : 256.00 0.12 1002956.20 1000151.46 1009726.49 00:33:26.464 00:33:26.724 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:26.724 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 266716 00:33:26.724 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (266716) - No such process 00:33:26.724 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 266716 00:33:26.724 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:33:26.724 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:33:26.724 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # nvmfcleanup 00:33:26.724 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@99 -- # sync 00:33:26.724 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:33:26.724 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # set +e 00:33:26.724 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@103 -- # for i in {1..20} 00:33:26.724 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:33:26.724 rmmod nvme_tcp 00:33:26.724 rmmod nvme_fabrics 00:33:26.724 rmmod nvme_keyring 00:33:26.724 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:33:26.724 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # set -e 00:33:26.724 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # return 0 00:33:26.724 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # '[' -n 266015 ']' 00:33:26.724 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@337 -- # killprocess 266015 00:33:26.724 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 266015 ']' 00:33:26.724 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 266015 00:33:26.724 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:33:26.724 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:26.724 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 266015 00:33:26.724 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:26.724 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:26.724 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 266015' 00:33:26.724 killing process with pid 266015 00:33:26.724 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 266015 00:33:26.724 12:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 266015 00:33:26.984 12:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:33:26.984 12:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # nvmf_fini 00:33:26.984 12:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@264 -- # local dev 00:33:26.984 12:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@267 -- # remove_target_ns 00:33:26.984 12:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:26.984 12:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:26.984 12:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:28.889 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@268 -- # delete_main_bridge 00:33:28.889 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:33:28.889 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@130 -- # return 0 00:33:28.889 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:33:28.889 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@271 -- # [[ -e 
/sys/class/net/cvl_0_0/address ]] 00:33:28.889 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:33:28.889 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:33:28.889 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:33:28.889 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:33:28.889 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:33:28.889 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:33:28.889 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:33:28.889 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:33:28.889 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:33:28.889 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:33:28.889 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:33:28.889 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:33:28.889 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:33:28.889 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:33:29.149 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/setup.sh@283 -- # reset_setup_interfaces 00:33:29.149 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@41 -- # _dev=0 00:33:29.149 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@41 -- # dev_map=() 00:33:29.149 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@284 -- # iptr 00:33:29.149 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@542 -- # iptables-save 00:33:29.149 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:33:29.149 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@542 -- # iptables-restore 00:33:29.149 00:33:29.149 real 0m16.330s 00:33:29.149 user 0m26.377s 00:33:29.149 sys 0m6.070s 00:33:29.149 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:29.149 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:29.149 ************************************ 00:33:29.149 END TEST nvmf_delete_subsystem 00:33:29.149 ************************************ 00:33:29.149 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:33:29.149 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:29.149 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:29.149 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:29.149 ************************************ 00:33:29.149 START TEST nvmf_host_management 00:33:29.149 
************************************ 00:33:29.149 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:33:29.149 * Looking for test storage... 00:33:29.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:29.149 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:29.149 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:33:29.149 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:29.149 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:29.149 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:29.149 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:29.149 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:29.149 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:33:29.149 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:33:29.149 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:33:29.149 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:33:29.149 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 
'op=<' 00:33:29.149 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:33:29.149 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:33:29.149 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:29.149 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:33:29.149 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:33:29.149 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:29.149 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:29.149 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:33:29.149 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:33:29.149 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:29.149 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:33:29.408 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:33:29.408 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:33:29.408 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:33:29.408 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:29.408 12:17:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:33:29.408 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:33:29.408 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:29.408 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:29.408 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:33:29.408 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:29.408 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:29.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:29.408 --rc genhtml_branch_coverage=1 00:33:29.408 --rc genhtml_function_coverage=1 00:33:29.408 --rc genhtml_legend=1 00:33:29.408 --rc geninfo_all_blocks=1 00:33:29.408 --rc geninfo_unexecuted_blocks=1 00:33:29.408 00:33:29.408 ' 00:33:29.408 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:29.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:29.408 --rc genhtml_branch_coverage=1 00:33:29.408 --rc genhtml_function_coverage=1 00:33:29.408 --rc genhtml_legend=1 00:33:29.408 --rc geninfo_all_blocks=1 00:33:29.408 --rc geninfo_unexecuted_blocks=1 00:33:29.408 00:33:29.408 ' 00:33:29.408 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:29.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:29.408 --rc genhtml_branch_coverage=1 00:33:29.408 --rc 
genhtml_function_coverage=1 00:33:29.408 --rc genhtml_legend=1 00:33:29.408 --rc geninfo_all_blocks=1 00:33:29.408 --rc geninfo_unexecuted_blocks=1 00:33:29.408 00:33:29.408 ' 00:33:29.408 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:29.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:29.408 --rc genhtml_branch_coverage=1 00:33:29.408 --rc genhtml_function_coverage=1 00:33:29.408 --rc genhtml_legend=1 00:33:29.408 --rc geninfo_all_blocks=1 00:33:29.408 --rc geninfo_unexecuted_blocks=1 00:33:29.408 00:33:29.408 ' 00:33:29.408 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:29.408 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:33:29.408 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:29.408 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:29.408 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:29.408 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:29.408 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:29.408 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:33:29.408 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:29.408 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # nvme gen-hostnqn 
00:33:29.408 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:29.408 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:29.408 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:29.408 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:33:29.409 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:33:29.409 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:29.409 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:29.409 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:33:29.409 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:29.409 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:29.409 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:29.409 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.409 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.409 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.409 
12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:33:29.409 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.409 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:33:29.409 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:33:29.409 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:29.409 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:33:29.409 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@50 -- # : 0 00:33:29.409 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:33:29.409 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:33:29.409 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:33:29.409 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:29.409 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:29.409 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:33:29.409 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:33:29.409 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:33:29.409 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:33:29.409 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@54 -- # have_pci_nics=0 00:33:29.409 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:29.409 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:29.409 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:33:29.409 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:33:29.409 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:29.409 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@296 -- # prepare_net_devs 00:33:29.409 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # local -g is_hw=no 00:33:29.409 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@260 -- # remove_target_ns 00:33:29.409 12:17:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:29.409 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:29.409 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:29.409 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:33:29.409 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:33:29.409 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # xtrace_disable 00:33:29.409 12:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@131 -- # pci_devs=() 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@131 -- # local -a pci_devs 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@132 -- # pci_net_devs=() 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@133 -- # pci_drivers=() 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@133 -- # local -A pci_drivers 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@135 -- # 
net_devs=() 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@135 -- # local -ga net_devs 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@136 -- # e810=() 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@136 -- # local -ga e810 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@137 -- # x722=() 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@137 -- # local -ga x722 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@138 -- # mlx=() 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@138 -- # local -ga mlx 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:35.980 12:17:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:35.980 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@184 -- # [[ ice 
== unknown ]] 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:35.980 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:33:35.980 12:17:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@234 -- # [[ up == up ]] 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:35.980 Found net devices under 0000:86:00.0: cvl_0_0 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@234 -- # [[ up == up ]] 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:35.980 Found net devices under 0000:86:00.1: cvl_0_1 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # is_hw=yes 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@257 -- # create_target_ns 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:35.980 
12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@27 -- # local -gA dev_map 00:33:35.980 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@28 -- # local -g _dev 00:33:35.981 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:33:35.981 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@33 -- # (( 
_dev = _dev, max = _dev )) 00:33:35.981 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:35.981 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:33:35.981 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@44 -- # ips=() 00:33:35.981 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:33:35.981 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:33:35.981 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:33:35.981 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:33:35.981 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:33:35.981 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:33:35.981 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:33:35.981 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:33:35.981 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:33:35.981 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:33:35.981 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:33:35.981 12:17:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:33:35.981 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:33:35.981 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:33:35.981 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:33:35.981 12:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772161 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@210 -- # eval 'echo 
10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:33:35.981 10.0.0.1 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772162 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee 
/sys/class/net/cvl_0_1/ifalias' 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:33:35.981 10.0.0.2 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@38 -- # ping_ips 1 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=initiator0 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@99 -- # 
ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:33:35.981 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:33:35.981 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:35.981 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.489 ms 00:33:35.981 00:33:35.981 --- 10.0.0.1 ping statistics --- 00:33:35.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:35.981 rtt min/avg/max/mdev = 0.489/0.489/0.489/0.000 ms 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev target0 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=target0 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@92 
-- # eval ' ping -c 1 10.0.0.2' 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:33:35.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:35.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:33:35.982 00:33:35.982 --- 10.0.0.2 ping statistics --- 00:33:35.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:35.982 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # (( pair++ )) 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@270 -- # return 0 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:33:35.982 12:17:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=initiator0 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:33:35.982 12:17:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=initiator1 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # return 1 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # dev= 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@169 -- # return 0 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:35.982 12:17:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev target0 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=target0 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@337 
-- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev target1 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=target1 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # return 1 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # dev= 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@169 -- # return 0 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:33:35.982 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # nvmfpid=270905 00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@329 -- # waitforlisten 270905 00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 270905 ']' 00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:35.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:35.983 [2024-12-05 12:17:09.432637] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:35.983 [2024-12-05 12:17:09.433609] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:33:35.983 [2024-12-05 12:17:09.433648] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:35.983 [2024-12-05 12:17:09.509794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:35.983 [2024-12-05 12:17:09.552705] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:33:35.983 [2024-12-05 12:17:09.552739] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:35.983 [2024-12-05 12:17:09.552745] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:35.983 [2024-12-05 12:17:09.552751] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:35.983 [2024-12-05 12:17:09.552757] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:35.983 [2024-12-05 12:17:09.554247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:35.983 [2024-12-05 12:17:09.554354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:35.983 [2024-12-05 12:17:09.554461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:35.983 [2024-12-05 12:17:09.554462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:35.983 [2024-12-05 12:17:09.623030] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:35.983 [2024-12-05 12:17:09.623722] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:35.983 [2024-12-05 12:17:09.623948] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:35.983 [2024-12-05 12:17:09.624130] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:35.983 [2024-12-05 12:17:09.624184] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
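The `ip_pool` values logged during the interface setup above (`val_to_ip 167772161` printing `10.0.0.1`, `167772162` printing `10.0.0.2`) are just a 32-bit integer unpacked into dotted-quad form with shifts and masks. A minimal standalone sketch of that conversion (the function name here is ours, not SPDK's; `setup.sh`'s `val_to_ip` does the equivalent):

```shell
# Unpack a 32-bit integer into dotted-quad IPv4 notation, mirroring the
# val_to_ip helper seen in nvmf/setup.sh. Highest byte first.
val_to_quad() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 255 )) \
    $(( (val >> 16) & 255 )) \
    $(( (val >> 8)  & 255 )) \
    $((  val        & 255 ))
}

val_to_quad 167772161   # initiator side of the pair -> 10.0.0.1
val_to_quad 167772162   # target side of the pair   -> 10.0.0.2
```

Because the pool base is `0x0a000001` and each initiator/target pair consumes two consecutive addresses, the `(( ip_pool += 2 ))` step in the loop above simply advances to the next pair's base.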
00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0
00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt
00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable
00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:35.983 [2024-12-05 12:17:09.691230] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem
00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable
00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat
00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd
00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:35.983 Malloc0
00:33:35.983 [2024-12-05 12:17:09.779548] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems
00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable
00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=270982
00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 270982 /var/tmp/bdevperf.sock
00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 270982 ']'
00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0
00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:33:35.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # config=()
00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # local subsystem config
00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}"
00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF
00:33:35.983 {
00:33:35.983 "params": {
00:33:35.983 "name": "Nvme$subsystem",
00:33:35.983 "trtype": "$TEST_TRANSPORT",
00:33:35.983 "traddr": "$NVMF_FIRST_TARGET_IP",
00:33:35.983 "adrfam": "ipv4",
00:33:35.983 "trsvcid": "$NVMF_PORT",
00:33:35.983 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:33:35.983 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:33:35.983 "hdgst": ${hdgst:-false},
00:33:35.983 "ddgst": ${ddgst:-false}
00:33:35.983 },
00:33:35.983 "method": "bdev_nvme_attach_controller"
00:33:35.983 }
00:33:35.983 EOF
00:33:35.983 )")
00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@394 -- # cat
00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@396 -- # jq .
00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=,
00:33:35.983 12:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{
00:33:35.983 "params": {
00:33:35.983 "name": "Nvme0",
00:33:35.983 "trtype": "tcp",
00:33:35.983 "traddr": "10.0.0.2",
00:33:35.983 "adrfam": "ipv4",
00:33:35.983 "trsvcid": "4420",
00:33:35.983 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:33:35.983 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:33:35.983 "hdgst": false,
00:33:35.983 "ddgst": false
00:33:35.983 },
00:33:35.983 "method": "bdev_nvme_attach_controller"
00:33:35.983 }'
00:33:35.983 [2024-12-05 12:17:09.877947] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization...
00:33:35.983 [2024-12-05 12:17:09.877998] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid270982 ]
00:33:35.984 [2024-12-05 12:17:09.953450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:35.984 [2024-12-05 12:17:09.994315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:35.984 Running I/O for 10 seconds...
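The gen_nvmf_target_json trace above first expands the heredoc template (Nvme$subsystem and the $TEST_TRANSPORT family of variables), then prints the resolved JSON that bdevperf consumes via `--json /dev/fd/63`, i.e. through process substitution. A runnable sketch of that plumbing, with the values the log resolved to (tcp, 10.0.0.2, 4420) and plain `cat` standing in for bdevperf:

```shell
# Rebuild the attach-controller JSON from the heredoc template and hand it
# to a consumer via process substitution, which the consumer sees as a
# /dev/fd path. 'cat' is a stand-in for bdevperf so this runs anywhere.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
subsystem=0
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
# The <(...) below is what surfaces as --json /dev/fd/63 in the traced command.
received=$(cat <(printf '%s\n' "$config"))
printf '%s\n' "$received"
```

Because the file descriptor is inherited rather than written to disk, the config never touches the workspace, which is why no JSON file shows up next to rpcs.txt.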
00:33:36.549 12:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:36.549 12:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0
00:33:36.549 12:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:33:36.549 12:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:36.549 12:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:36.549 12:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:36.549 12:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:33:36.549 12:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1
00:33:36.549 12:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:33:36.549 12:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']'
00:33:36.549 12:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1
00:33:36.549 12:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i
00:33:36.549 12:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 ))
00:33:36.549 12:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:33:36.810 12:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:33:36.810 12:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:33:36.810 12:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:36.810 12:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:36.810 12:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:36.810 12:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1219
00:33:36.810 12:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1219 -ge 100 ']'
00:33:36.810 12:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:33:36.810 12:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break
00:33:36.810 12:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:33:36.810 12:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:33:36.810 12:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:36.810 12:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:36.810
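The waitforio helper traced above (host_management.sh, the @45-@64 lines) polls `bdev_get_iostat` up to ten times and succeeds once at least 100 reads have completed; the @55 line shows it saw num_read_ops=1219 on the first pass and broke out immediately. A runnable sketch of that loop, with a stub `rpc_cmd` returning a canned reply in place of `rpc.py -s /var/tmp/bdevperf.sock`, and `sed` doing the job of `jq -r '.bdevs[0].num_read_ops'`:

```shell
# waitforio-style polling loop. rpc_cmd here is a stub with the value
# observed in the log; the real helper queries bdevperf over its RPC socket.
rpc_cmd() {
  echo '{"bdevs": [{"name": "Nvme0n1", "num_read_ops": 1219}]}'
}
ret=1
for ((i = 10; i != 0; i--)); do
  read_io_count=$(rpc_cmd bdev_get_iostat -b Nvme0n1 |
    sed -n 's/.*"num_read_ops": \([0-9]*\).*/\1/p')
  if [ "$read_io_count" -ge 100 ]; then
    ret=0
    break
  fi
  sleep 0.25
done
echo "ret=$ret read_io_count=$read_io_count"
```

With the canned reply the first iteration already clears the 100-read threshold, so the loop exits with ret=0, matching the @58-@60 trace lines.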
[2024-12-05 12:17:10.790964] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b2930 is same with the state(6) to be set
[... same tcp.c:1790 *ERROR* message repeated 21 more times, through 12:17:10.791122 ...]
00:33:36.811 [2024-12-05 12:17:10.792132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:36.811 [2024-12-05 12:17:10.792164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:36.811 [2024-12-05 12:17:10.792181]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:36.811 [2024-12-05 12:17:10.792189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same WRITE / 'ABORTED - SQ DELETION' pair repeats for cid:2 through cid:61, lba 33024 through 40576 (len:128, lba step 128) ...]
00:33:36.812 [2024-12-05 12:17:10.793108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:36.812 [2024-12-05 12:17:10.793115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:36.812
[2024-12-05 12:17:10.793122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.812 [2024-12-05 12:17:10.793129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:36.812 [2024-12-05 12:17:10.794063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:36.812 task offset: 32768 on job bdev=Nvme0n1 fails 00:33:36.812 00:33:36.812 Latency(us) 00:33:36.812 [2024-12-05T11:17:11.008Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:36.812 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:36.812 Job: Nvme0n1 ended in about 0.64 seconds with error 00:33:36.812 Verification LBA range: start 0x0 length 0x400 00:33:36.812 Nvme0n1 : 0.64 1994.67 124.67 99.73 0.00 29956.86 1654.00 27337.87 00:33:36.812 [2024-12-05T11:17:11.008Z] =================================================================================================================== 00:33:36.812 [2024-12-05T11:17:11.008Z] Total : 1994.67 124.67 99.73 0.00 29956.86 1654.00 27337.87 00:33:36.812 12:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.812 12:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:33:36.812 [2024-12-05 12:17:10.796437] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:33:36.812 [2024-12-05 12:17:10.796458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee4510 (9): Bad file descriptor 00:33:36.812 12:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.812 
12:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:36.812 [2024-12-05 12:17:10.797411] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:33:36.813 [2024-12-05 12:17:10.797518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:33:36.813 [2024-12-05 12:17:10.797540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:36.813 [2024-12-05 12:17:10.797555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:33:36.813 [2024-12-05 12:17:10.797563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:33:36.813 [2024-12-05 12:17:10.797569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.813 [2024-12-05 12:17:10.797576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xee4510 00:33:36.813 [2024-12-05 12:17:10.797596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee4510 (9): Bad file descriptor 00:33:36.813 [2024-12-05 12:17:10.797608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:36.813 [2024-12-05 12:17:10.797615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:36.813 [2024-12-05 12:17:10.797624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:33:36.813 [2024-12-05 12:17:10.797633] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:36.813 12:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.813 12:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:33:37.810 12:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 270982 00:33:37.810 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (270982) - No such process 00:33:37.810 12:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:33:37.811 12:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:33:37.811 12:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:33:37.811 12:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:33:37.811 12:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # config=() 00:33:37.811 12:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # local subsystem config 00:33:37.811 12:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:33:37.811 12:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:33:37.811 { 
00:33:37.811 "params": { 00:33:37.811 "name": "Nvme$subsystem", 00:33:37.811 "trtype": "$TEST_TRANSPORT", 00:33:37.811 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:37.811 "adrfam": "ipv4", 00:33:37.811 "trsvcid": "$NVMF_PORT", 00:33:37.811 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:37.811 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:37.811 "hdgst": ${hdgst:-false}, 00:33:37.811 "ddgst": ${ddgst:-false} 00:33:37.811 }, 00:33:37.811 "method": "bdev_nvme_attach_controller" 00:33:37.811 } 00:33:37.811 EOF 00:33:37.811 )") 00:33:37.811 12:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@394 -- # cat 00:33:37.811 12:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@396 -- # jq . 00:33:37.811 12:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=, 00:33:37.811 12:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:33:37.811 "params": { 00:33:37.811 "name": "Nvme0", 00:33:37.811 "trtype": "tcp", 00:33:37.811 "traddr": "10.0.0.2", 00:33:37.811 "adrfam": "ipv4", 00:33:37.811 "trsvcid": "4420", 00:33:37.811 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:37.811 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:37.811 "hdgst": false, 00:33:37.811 "ddgst": false 00:33:37.811 }, 00:33:37.811 "method": "bdev_nvme_attach_controller" 00:33:37.811 }' 00:33:37.811 [2024-12-05 12:17:11.862133] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:33:37.811 [2024-12-05 12:17:11.862183] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid271231 ] 00:33:37.811 [2024-12-05 12:17:11.935560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:37.811 [2024-12-05 12:17:11.975107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:38.162 Running I/O for 1 seconds... 00:33:39.539 2048.00 IOPS, 128.00 MiB/s 00:33:39.539 Latency(us) 00:33:39.539 [2024-12-05T11:17:13.735Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:39.539 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:39.540 Verification LBA range: start 0x0 length 0x400 00:33:39.540 Nvme0n1 : 1.01 2081.37 130.09 0.00 0.00 30264.36 4150.61 26588.89 00:33:39.540 [2024-12-05T11:17:13.736Z] =================================================================================================================== 00:33:39.540 [2024-12-05T11:17:13.736Z] Total : 2081.37 130.09 0.00 0.00 30264.36 4150.61 26588.89 00:33:39.540 12:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:33:39.540 12:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:33:39.540 12:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:33:39.540 12:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:33:39.540 12:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@40 -- # nvmftestfini 00:33:39.540 12:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@335 -- # nvmfcleanup 00:33:39.540 12:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@99 -- # sync 00:33:39.540 12:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:33:39.540 12:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@102 -- # set +e 00:33:39.540 12:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@103 -- # for i in {1..20} 00:33:39.540 12:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:33:39.540 rmmod nvme_tcp 00:33:39.540 rmmod nvme_fabrics 00:33:39.540 rmmod nvme_keyring 00:33:39.540 12:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:33:39.540 12:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@106 -- # set -e 00:33:39.540 12:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@107 -- # return 0 00:33:39.540 12:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # '[' -n 270905 ']' 00:33:39.540 12:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@337 -- # killprocess 270905 00:33:39.540 12:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 270905 ']' 00:33:39.540 12:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 270905 00:33:39.540 12:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:33:39.540 12:17:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:39.540 12:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 270905 00:33:39.540 12:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:39.540 12:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:39.540 12:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 270905' 00:33:39.540 killing process with pid 270905 00:33:39.540 12:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 270905 00:33:39.540 12:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 270905 00:33:39.798 [2024-12-05 12:17:13.761515] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:33:39.798 12:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:33:39.798 12:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@342 -- # nvmf_fini 00:33:39.798 12:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@264 -- # local dev 00:33:39.798 12:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@267 -- # remove_target_ns 00:33:39.798 12:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:39.798 12:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:39.798 12:17:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:41.715 12:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@268 -- # delete_main_bridge 00:33:41.715 12:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:33:41.715 12:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@130 -- # return 0 00:33:41.715 12:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:33:41.715 12:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:33:41.715 12:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:33:41.715 12:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:33:41.715 12:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:33:41.715 12:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:33:41.715 12:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:33:41.715 12:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:33:41.715 12:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:33:41.715 12:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:33:41.715 12:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 
00:33:41.715 12:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:33:41.715 12:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:33:41.715 12:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:33:41.715 12:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:33:41.715 12:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:33:41.715 12:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:33:41.715 12:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@41 -- # _dev=0 00:33:41.715 12:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@41 -- # dev_map=() 00:33:41.715 12:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@284 -- # iptr 00:33:41.715 12:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@542 -- # iptables-save 00:33:41.715 12:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:33:41.715 12:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@542 -- # iptables-restore 00:33:41.715 12:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:33:41.715 00:33:41.715 real 0m12.694s 00:33:41.715 user 0m19.223s 00:33:41.715 sys 0m6.511s 00:33:41.715 12:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:41.715 12:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:33:41.715 ************************************ 00:33:41.715 END TEST nvmf_host_management 00:33:41.715 ************************************ 00:33:41.715 12:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:33:41.715 12:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:41.715 12:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:41.715 12:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:41.974 ************************************ 00:33:41.974 START TEST nvmf_lvol 00:33:41.974 ************************************ 00:33:41.974 12:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:33:41.974 * Looking for test storage... 
00:33:41.974 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:41.974 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:41.974 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:33:41.974 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:41.974 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:41.974 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:41.974 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:41.974 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:41.974 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:33:41.974 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:33:41.974 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:33:41.974 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:33:41.974 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:33:41.974 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:33:41.974 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:33:41.974 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:41.974 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:33:41.974 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:33:41.974 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:41.974 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:41.974 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:33:41.974 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:33:41.974 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:41.974 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:33:41.974 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:33:41.974 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:33:41.974 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:33:41.974 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:41.974 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:41.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:41.975 --rc genhtml_branch_coverage=1 00:33:41.975 --rc genhtml_function_coverage=1 00:33:41.975 --rc genhtml_legend=1 00:33:41.975 --rc geninfo_all_blocks=1 00:33:41.975 --rc geninfo_unexecuted_blocks=1 00:33:41.975 00:33:41.975 ' 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:41.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:41.975 --rc genhtml_branch_coverage=1 00:33:41.975 --rc genhtml_function_coverage=1 00:33:41.975 --rc genhtml_legend=1 00:33:41.975 --rc geninfo_all_blocks=1 00:33:41.975 --rc geninfo_unexecuted_blocks=1 00:33:41.975 00:33:41.975 ' 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:41.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:41.975 --rc genhtml_branch_coverage=1 00:33:41.975 --rc genhtml_function_coverage=1 00:33:41.975 --rc genhtml_legend=1 00:33:41.975 --rc geninfo_all_blocks=1 00:33:41.975 --rc geninfo_unexecuted_blocks=1 00:33:41.975 00:33:41.975 ' 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:41.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:41.975 --rc genhtml_branch_coverage=1 00:33:41.975 --rc genhtml_function_coverage=1 00:33:41.975 --rc genhtml_legend=1 00:33:41.975 --rc geninfo_all_blocks=1 00:33:41.975 --rc geninfo_unexecuted_blocks=1 00:33:41.975 00:33:41.975 ' 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@50 -- # : 0 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 
00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@54 -- # have_pci_nics=0 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@296 -- # prepare_net_devs 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # local -g is_hw=no 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@260 -- # remove_target_ns 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd 
_remove_target_ns 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # xtrace_disable 00:33:41.975 12:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@131 -- # pci_devs=() 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@131 -- # local -a pci_devs 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@132 -- # pci_net_devs=() 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@133 -- # pci_drivers=() 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@133 -- # local -A pci_drivers 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@135 -- # net_devs=() 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@135 -- # local -ga net_devs 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@136 -- # e810=() 00:33:48.544 12:17:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@136 -- # local -ga e810 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@137 -- # x722=() 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@137 -- # local -ga x722 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@138 -- # mlx=() 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@138 -- # local -ga mlx 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:48.544 12:17:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:48.544 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:48.544 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:33:48.544 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@234 -- # [[ up == up ]] 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:33:48.545 12:17:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:48.545 Found net devices under 0000:86:00.0: cvl_0_0 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@234 -- # [[ up == up ]] 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:48.545 Found net devices under 0000:86:00.1: cvl_0_1 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:33:48.545 12:17:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # is_hw=yes 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@257 -- # create_target_ns 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@217 -- # 
ip netns exec nvmf_ns_spdk ip link set lo up 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@27 -- # local -gA dev_map 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@28 -- # local -g _dev 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@44 -- # ips=() 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 
00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772161 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:33:48.545 12:17:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:33:48.545 10.0.0.1 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772162 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 
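The `val_to_ip` calls traced above (setup.sh@11–13) turn the numeric `ip_pool` counter into dotted-quad addresses: each initiator/target pair consumes two consecutive integers (167772161 and 167772162 here, i.e. 0x0a000001 and 0x0a000002). A hypothetical re-creation of that helper, consistent with the `printf '%u.%u.%u.%u\n' 10 0 0 1` seen in the trace but not the actual setup.sh source:

```shell
#!/usr/bin/env bash
# Hypothetical sketch: unpack a 32-bit integer into dotted-quad notation,
# as setup.sh's val_to_ip does when assigning addresses from ip_pool.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 0x0a000001 -> 10.0.0.1 (initiator side)
val_to_ip 167772162   # 0x0a000002 -> 10.0.0.2 (target side)
```

Keeping the pool as an integer lets the loop at setup.sh@31–33 advance it with plain arithmetic (`ip_pool += 2` per pair) and bounds-check it against 255 before formatting.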
00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:33:48.545 10.0.0.2 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:33:48.545 12:17:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:33:48.545 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@38 -- # ping_ips 1 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:33:48.546 12:17:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=initiator0 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:33:48.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:48.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.426 ms 00:33:48.546 00:33:48.546 --- 10.0.0.1 ping statistics --- 00:33:48.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:48.546 rtt min/avg/max/mdev = 0.426/0.426/0.426/0.000 ms 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev target0 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@107 -- # 
local dev=target0 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:33:48.546 12:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:33:48.546 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:48.546 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:33:48.546 00:33:48.546 --- 10.0.0.2 ping statistics --- 00:33:48.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:48.546 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # (( pair++ )) 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@270 -- # return 0 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:33:48.546 
12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=initiator0 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:33:48.546 12:17:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=initiator1 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # return 1 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # dev= 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@169 -- # return 0 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev target0 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=target0 00:33:48.546 12:17:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:33:48.546 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev target1 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=target1 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # return 1 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # dev= 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@169 -- # return 0 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:33:48.547 12:17:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # nvmfpid=275035 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@329 -- # waitforlisten 275035 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 275035 ']' 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:48.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:48.547 [2024-12-05 12:17:22.153999] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:48.547 [2024-12-05 12:17:22.154918] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:33:48.547 [2024-12-05 12:17:22.154953] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:48.547 [2024-12-05 12:17:22.233780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:48.547 [2024-12-05 12:17:22.272878] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:48.547 [2024-12-05 12:17:22.272914] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:48.547 [2024-12-05 12:17:22.272921] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:48.547 [2024-12-05 12:17:22.272927] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:48.547 [2024-12-05 12:17:22.272932] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:48.547 [2024-12-05 12:17:22.274227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:48.547 [2024-12-05 12:17:22.274334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:48.547 [2024-12-05 12:17:22.274334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:48.547 [2024-12-05 12:17:22.343009] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:48.547 [2024-12-05 12:17:22.343749] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:48.547 [2024-12-05 12:17:22.343803] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:33:48.547 [2024-12-05 12:17:22.343970] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:48.547 [2024-12-05 12:17:22.591030] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:48.547 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:48.805 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:33:48.805 12:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:49.064 12:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:33:49.064 12:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:33:49.064 12:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:33:49.322 12:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=2cb6ce0f-76ce-47c0-9bf5-3908ed5e13db 00:33:49.322 12:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2cb6ce0f-76ce-47c0-9bf5-3908ed5e13db lvol 20 00:33:49.581 12:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=68e6184d-2c3f-47cb-bca4-f5243dce4b41 00:33:49.581 12:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:49.839 12:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 68e6184d-2c3f-47cb-bca4-f5243dce4b41 00:33:50.097 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:50.097 [2024-12-05 12:17:24.218955] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:50.097 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:50.356 
12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=275517 00:33:50.356 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:33:50.356 12:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:33:51.291 12:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 68e6184d-2c3f-47cb-bca4-f5243dce4b41 MY_SNAPSHOT 00:33:51.549 12:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=17b16169-8c74-4ac0-91ca-d2fa5c917dff 00:33:51.549 12:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 68e6184d-2c3f-47cb-bca4-f5243dce4b41 30 00:33:51.806 12:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 17b16169-8c74-4ac0-91ca-d2fa5c917dff MY_CLONE 00:33:52.064 12:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=e2c770cf-2764-483c-a41f-ba10990dd139 00:33:52.064 12:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate e2c770cf-2764-483c-a41f-ba10990dd139 00:33:52.630 12:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 275517 00:34:00.741 Initializing NVMe Controllers 00:34:00.741 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:34:00.741 
Controller IO queue size 128, less than required. 00:34:00.741 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:00.741 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:34:00.741 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:34:00.741 Initialization complete. Launching workers. 00:34:00.741 ======================================================== 00:34:00.741 Latency(us) 00:34:00.741 Device Information : IOPS MiB/s Average min max 00:34:00.741 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12514.80 48.89 10228.32 242.77 97298.98 00:34:00.741 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12352.50 48.25 10360.20 2980.00 44645.23 00:34:00.741 ======================================================== 00:34:00.741 Total : 24867.30 97.14 10293.83 242.77 97298.98 00:34:00.741 00:34:00.741 12:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:01.000 12:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 68e6184d-2c3f-47cb-bca4-f5243dce4b41 00:34:01.000 12:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2cb6ce0f-76ce-47c0-9bf5-3908ed5e13db 00:34:01.258 12:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:34:01.258 12:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:34:01.258 12:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # 
nvmftestfini 00:34:01.258 12:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@335 -- # nvmfcleanup 00:34:01.258 12:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@99 -- # sync 00:34:01.258 12:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:34:01.258 12:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@102 -- # set +e 00:34:01.258 12:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@103 -- # for i in {1..20} 00:34:01.258 12:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:34:01.258 rmmod nvme_tcp 00:34:01.258 rmmod nvme_fabrics 00:34:01.258 rmmod nvme_keyring 00:34:01.258 12:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:34:01.258 12:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@106 -- # set -e 00:34:01.258 12:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@107 -- # return 0 00:34:01.258 12:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # '[' -n 275035 ']' 00:34:01.258 12:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@337 -- # killprocess 275035 00:34:01.258 12:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 275035 ']' 00:34:01.258 12:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 275035 00:34:01.258 12:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:34:01.515 12:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:01.515 12:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 275035 00:34:01.515 12:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:01.515 12:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:01.515 12:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 275035' 00:34:01.515 killing process with pid 275035 00:34:01.515 12:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 275035 00:34:01.515 12:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 275035 00:34:01.515 12:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:34:01.515 12:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@342 -- # nvmf_fini 00:34:01.515 12:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@264 -- # local dev 00:34:01.515 12:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@267 -- # remove_target_ns 00:34:01.515 12:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:34:01.515 12:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:34:01.515 12:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_target_ns 00:34:04.045 12:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@268 -- # delete_main_bridge 00:34:04.045 12:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:34:04.045 12:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@130 -- # return 0 00:34:04.045 12:17:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:34:04.045 12:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:34:04.045 12:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:34:04.045 12:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:34:04.045 12:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:34:04.045 12:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:34:04.045 12:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:34:04.045 12:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:34:04.045 12:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:34:04.045 12:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:34:04.045 12:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:34:04.045 12:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:34:04.045 12:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:34:04.045 12:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:34:04.045 12:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:34:04.045 12:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:34:04.045 12:17:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:34:04.045 12:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@41 -- # _dev=0 00:34:04.045 12:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@41 -- # dev_map=() 00:34:04.045 12:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@284 -- # iptr 00:34:04.045 12:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@542 -- # iptables-save 00:34:04.045 12:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:34:04.045 12:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@542 -- # iptables-restore 00:34:04.045 00:34:04.045 real 0m21.837s 00:34:04.045 user 0m55.547s 00:34:04.045 sys 0m9.852s 00:34:04.045 12:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:04.045 12:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:34:04.045 ************************************ 00:34:04.045 END TEST nvmf_lvol 00:34:04.045 ************************************ 00:34:04.045 12:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:34:04.045 12:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:04.045 12:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:04.045 12:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:04.045 ************************************ 00:34:04.045 START TEST nvmf_lvs_grow 00:34:04.045 ************************************ 00:34:04.045 12:17:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:34:04.045 * Looking for test storage... 00:34:04.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:04.045 12:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:04.045 12:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:34:04.045 12:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:04.045 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:04.045 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:34:04.046 12:17:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:04.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.046 --rc genhtml_branch_coverage=1 00:34:04.046 --rc genhtml_function_coverage=1 00:34:04.046 --rc genhtml_legend=1 00:34:04.046 --rc geninfo_all_blocks=1 00:34:04.046 --rc geninfo_unexecuted_blocks=1 00:34:04.046 00:34:04.046 ' 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:04.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.046 --rc genhtml_branch_coverage=1 00:34:04.046 --rc genhtml_function_coverage=1 00:34:04.046 --rc genhtml_legend=1 00:34:04.046 --rc geninfo_all_blocks=1 00:34:04.046 --rc geninfo_unexecuted_blocks=1 00:34:04.046 00:34:04.046 ' 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:04.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.046 --rc genhtml_branch_coverage=1 00:34:04.046 --rc genhtml_function_coverage=1 00:34:04.046 --rc genhtml_legend=1 00:34:04.046 --rc geninfo_all_blocks=1 00:34:04.046 --rc geninfo_unexecuted_blocks=1 00:34:04.046 00:34:04.046 ' 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:04.046 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:34:04.046 --rc genhtml_branch_coverage=1 00:34:04.046 --rc genhtml_function_coverage=1 00:34:04.046 --rc genhtml_legend=1 00:34:04.046 --rc geninfo_all_blocks=1 00:34:04.046 --rc geninfo_unexecuted_blocks=1 00:34:04.046 00:34:04.046 ' 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@50 -- # : 0 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@54 -- # have_pci_nics=0 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@296 -- # prepare_net_devs 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # local -g is_hw=no 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@260 -- # remove_target_ns 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
common/autotest_common.sh@22 -- # _remove_target_ns 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # xtrace_disable 00:34:04.046 12:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@131 -- # pci_devs=() 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@131 -- # local -a pci_devs 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@132 -- # pci_net_devs=() 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@133 -- # pci_drivers=() 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@133 -- # local -A pci_drivers 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@135 -- # net_devs=() 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@135 -- # local -ga net_devs 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@136 -- # e810=() 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@136 -- # local -ga e810 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@137 -- 
# x722=() 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@137 -- # local -ga x722 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@138 -- # mlx=() 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@138 -- # local -ga mlx 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:10.615 
12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:10.615 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:34:10.615 12:17:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:10.615 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@234 -- # [[ up == up ]] 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:34:10.615 12:17:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:10.615 Found net devices under 0000:86:00.0: cvl_0_0 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@234 -- # [[ up == up ]] 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:10.615 Found net devices under 0000:86:00.1: cvl_0_1 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 
00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # is_hw=yes 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@257 -- # create_target_ns 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:34:10.615 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:34:10.616 
12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@27 -- # local -gA dev_map 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@28 -- # local -g _dev 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # ips=() 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@51 -- # [[ 
tcp == tcp ]] 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local 
val=167772161 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:34:10.616 10.0.0.1 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772162 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:34:10.616 12:17:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:34:10.616 10.0.0.2 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:10.616 12:17:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@38 -- # ping_ips 1 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:34:10.616 12:17:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=initiator0 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:34:10.616 12:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:34:10.616 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:34:10.616 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:34:10.616 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:34:10.616 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:34:10.616 
12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:34:10.616 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:34:10.616 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:34:10.616 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:34:10.616 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:34:10.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:10.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:34:10.617 00:34:10.617 --- 10.0.0.1 ping statistics --- 00:34:10.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:10.617 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev target0 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=target0 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias' 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:34:10.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:10.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:34:10.617 00:34:10.617 --- 10.0.0.2 ping statistics --- 00:34:10.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:10.617 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # (( pair++ )) 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@270 -- # return 0 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/setup.sh@168 -- # get_net_dev initiator0 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=initiator0 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:34:10.617 12:17:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=initiator1 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # return 1 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev= 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@169 -- # return 0 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev target0 
00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=target0 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:34:10.617 
12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev target1 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=target1 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # return 1 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev= 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@169 -- # return 0 00:34:10.617 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:34:10.618 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:10.618 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:34:10.618 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:34:10.618 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:10.618 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:34:10.618 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:34:10.618 12:17:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:34:10.618 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:34:10.618 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:10.618 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:10.618 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # nvmfpid=280843 00:34:10.618 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:34:10.618 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@329 -- # waitforlisten 280843 00:34:10.618 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 280843 ']' 00:34:10.618 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:10.618 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:10.618 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:10.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:10.618 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:10.618 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:10.618 [2024-12-05 12:17:44.194928] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:10.618 [2024-12-05 12:17:44.195839] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:34:10.618 [2024-12-05 12:17:44.195873] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:10.618 [2024-12-05 12:17:44.274736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:10.618 [2024-12-05 12:17:44.314980] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:10.618 [2024-12-05 12:17:44.315017] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:10.618 [2024-12-05 12:17:44.315024] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:10.618 [2024-12-05 12:17:44.315030] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:10.618 [2024-12-05 12:17:44.315034] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:10.618 [2024-12-05 12:17:44.315575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:10.618 [2024-12-05 12:17:44.383278] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:10.618 [2024-12-05 12:17:44.383491] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:34:10.618 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:10.618 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:34:10.618 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:34:10.618 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:10.618 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:10.618 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:10.618 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:10.618 [2024-12-05 12:17:44.612223] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:10.618 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:34:10.618 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:10.618 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:10.618 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:10.618 ************************************ 00:34:10.618 START TEST lvs_grow_clean 00:34:10.618 ************************************ 00:34:10.618 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:34:10.618 12:17:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:34:10.618 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:34:10.618 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:34:10.618 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:34:10.618 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:34:10.618 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:34:10.618 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:10.618 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:10.618 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:10.876 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:34:10.876 12:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:34:11.135 12:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=13a0d5d3-ab14-4d51-8253-eab4c5be9996 00:34:11.135 12:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13a0d5d3-ab14-4d51-8253-eab4c5be9996 00:34:11.135 12:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:34:11.135 12:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:34:11.135 12:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:34:11.135 12:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 13a0d5d3-ab14-4d51-8253-eab4c5be9996 lvol 150 00:34:11.432 12:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=56bda43b-ee3c-4984-b702-622b472fb202 00:34:11.432 12:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:11.432 12:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:34:11.691 [2024-12-05 12:17:45.655989] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:34:11.691 [2024-12-05 12:17:45.656120] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:34:11.691 true 00:34:11.691 12:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:34:11.691 12:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13a0d5d3-ab14-4d51-8253-eab4c5be9996 00:34:11.691 12:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:34:11.691 12:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:34:11.949 12:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 56bda43b-ee3c-4984-b702-622b472fb202 00:34:12.207 12:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:12.207 [2024-12-05 12:17:46.400468] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:12.466 12:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:12.466 12:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=281176 00:34:12.466 12:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:12.466 12:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:34:12.466 12:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 281176 /var/tmp/bdevperf.sock 00:34:12.466 12:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 281176 ']' 00:34:12.466 12:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:12.466 12:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:12.466 12:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:12.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:34:12.466 12:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable
00:34:12.466 12:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x
00:34:12.466 [2024-12-05 12:17:46.637384] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization...
00:34:12.466 [2024-12-05 12:17:46.637433] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid281176 ]
00:34:12.725 [2024-12-05 12:17:46.712093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:12.725 [2024-12-05 12:17:46.755640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:34:13.292 12:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:34:13.292 12:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0
00:34:13.292 12:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:34:13.859 Nvme0n1
00:34:13.859 12:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:34:13.859 [
00:34:13.859 {
00:34:13.859 "name": "Nvme0n1",
00:34:13.859 "aliases": [
00:34:13.859 "56bda43b-ee3c-4984-b702-622b472fb202"
00:34:13.859 ],
00:34:13.859 "product_name": "NVMe disk",
00:34:13.859 "block_size": 4096,
00:34:13.859 "num_blocks": 38912,
00:34:13.859 "uuid": "56bda43b-ee3c-4984-b702-622b472fb202",
00:34:13.859 "numa_id": 1,
00:34:13.859 "assigned_rate_limits": {
00:34:13.859 "rw_ios_per_sec": 0,
00:34:13.859 "rw_mbytes_per_sec": 0,
00:34:13.859 "r_mbytes_per_sec": 0,
00:34:13.859 "w_mbytes_per_sec": 0
00:34:13.859 },
00:34:13.859 "claimed": false,
00:34:13.859 "zoned": false,
00:34:13.859 "supported_io_types": {
00:34:13.859 "read": true,
00:34:13.859 "write": true,
00:34:13.859 "unmap": true,
00:34:13.859 "flush": true,
00:34:13.859 "reset": true,
00:34:13.859 "nvme_admin": true,
00:34:13.859 "nvme_io": true,
00:34:13.859 "nvme_io_md": false,
00:34:13.859 "write_zeroes": true,
00:34:13.859 "zcopy": false,
00:34:13.859 "get_zone_info": false,
00:34:13.859 "zone_management": false,
00:34:13.859 "zone_append": false,
00:34:13.859 "compare": true,
00:34:13.859 "compare_and_write": true,
00:34:13.859 "abort": true,
00:34:13.859 "seek_hole": false,
00:34:13.859 "seek_data": false,
00:34:13.859 "copy": true,
00:34:13.859 "nvme_iov_md": false
00:34:13.859 },
00:34:13.859 "memory_domains": [
00:34:13.859 {
00:34:13.859 "dma_device_id": "system",
00:34:13.859 "dma_device_type": 1
00:34:13.859 }
00:34:13.859 ],
00:34:13.859 "driver_specific": {
00:34:13.859 "nvme": [
00:34:13.859 {
00:34:13.859 "trid": {
00:34:13.859 "trtype": "TCP",
00:34:13.859 "adrfam": "IPv4",
00:34:13.859 "traddr": "10.0.0.2",
00:34:13.859 "trsvcid": "4420",
00:34:13.859 "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:34:13.859 },
00:34:13.859 "ctrlr_data": {
00:34:13.859 "cntlid": 1,
00:34:13.859 "vendor_id": "0x8086",
00:34:13.859 "model_number": "SPDK bdev Controller",
00:34:13.859 "serial_number": "SPDK0",
00:34:13.859 "firmware_revision": "25.01",
00:34:13.859 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:34:13.859 "oacs": {
00:34:13.859 "security": 0,
00:34:13.859 "format": 0,
00:34:13.859 "firmware": 0,
00:34:13.859 "ns_manage": 0
00:34:13.859 },
00:34:13.859 "multi_ctrlr": true,
00:34:13.859 "ana_reporting": false
00:34:13.859 },
00:34:13.859 "vs": {
00:34:13.859 "nvme_version": "1.3"
00:34:13.859 },
00:34:13.859 "ns_data": {
00:34:13.859 "id": 1,
00:34:13.859 "can_share": true
00:34:13.859 }
00:34:13.859 }
00:34:13.859 ],
00:34:13.859 "mp_policy": "active_passive"
00:34:13.859 }
00:34:13.859 }
00:34:13.859 ]
00:34:14.118 12:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=281413
00:34:14.119 12:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:34:14.119 12:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:34:14.119 Running I/O for 10 seconds...
00:34:15.056 Latency(us)
00:34:15.056 [2024-12-05T11:17:49.252Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:15.056 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:15.056 Nvme0n1 : 1.00 22924.00 89.55 0.00 0.00 0.00 0.00 0.00
00:34:15.056 [2024-12-05T11:17:49.252Z] ===================================================================================================================
00:34:15.056 [2024-12-05T11:17:49.252Z] Total : 22924.00 89.55 0.00 0.00 0.00 0.00 0.00
00:34:15.056
00:34:15.994 12:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 13a0d5d3-ab14-4d51-8253-eab4c5be9996
00:34:15.994 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:15.994 Nvme0n1 : 2.00 23202.00 90.63 0.00 0.00 0.00 0.00 0.00
00:34:15.994 [2024-12-05T11:17:50.190Z] ===================================================================================================================
00:34:15.994 [2024-12-05T11:17:50.190Z] Total : 23202.00 90.63 0.00 0.00 0.00 0.00 0.00
00:34:15.994
00:34:16.253 true
00:34:16.253 12:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13a0d5d3-ab14-4d51-8253-eab4c5be9996
00:34:16.253 12:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:34:16.513 12:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:34:16.513 12:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:34:16.513 12:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 281413
00:34:17.080 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:17.080 Nvme0n1 : 3.00 23299.67 91.01 0.00 0.00 0.00 0.00 0.00
00:34:17.080 [2024-12-05T11:17:51.276Z] ===================================================================================================================
00:34:17.080 [2024-12-05T11:17:51.276Z] Total : 23299.67 91.01 0.00 0.00 0.00 0.00 0.00
00:34:17.080
00:34:18.018 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:18.018 Nvme0n1 : 4.00 23443.75 91.58 0.00 0.00 0.00 0.00 0.00
00:34:18.018 [2024-12-05T11:17:52.214Z] ===================================================================================================================
00:34:18.018 [2024-12-05T11:17:52.214Z] Total : 23443.75 91.58 0.00 0.00 0.00 0.00 0.00
00:34:18.018
00:34:19.396 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:19.396 Nvme0n1 : 5.00 23530.20 91.91 0.00 0.00 0.00 0.00 0.00
00:34:19.396 [2024-12-05T11:17:53.592Z] ===================================================================================================================
00:34:19.396 [2024-12-05T11:17:53.592Z] Total : 23530.20 91.91 0.00 0.00 0.00 0.00 0.00
00:34:19.396
00:34:20.334 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:20.334 Nvme0n1 : 6.00 23587.83 92.14 0.00 0.00 0.00 0.00 0.00
00:34:20.334 [2024-12-05T11:17:54.530Z] ===================================================================================================================
00:34:20.334 [2024-12-05T11:17:54.530Z] Total : 23587.83 92.14 0.00 0.00 0.00 0.00 0.00
00:34:20.334
00:34:21.268 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:21.268 Nvme0n1 : 7.00 23629.00 92.30 0.00 0.00 0.00 0.00 0.00
00:34:21.268 [2024-12-05T11:17:55.464Z] ===================================================================================================================
00:34:21.268 [2024-12-05T11:17:55.464Z] Total : 23629.00 92.30 0.00 0.00 0.00 0.00 0.00
00:34:21.268
00:34:22.203 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:22.203 Nvme0n1 : 8.00 23652.00 92.39 0.00 0.00 0.00 0.00 0.00
00:34:22.203 [2024-12-05T11:17:56.399Z] ===================================================================================================================
00:34:22.203 [2024-12-05T11:17:56.399Z] Total : 23652.00 92.39 0.00 0.00 0.00 0.00 0.00
00:34:22.203
00:34:23.138 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:23.138 Nvme0n1 : 9.00 23682.33 92.51 0.00 0.00 0.00 0.00 0.00
00:34:23.138 [2024-12-05T11:17:57.334Z] ===================================================================================================================
00:34:23.138 [2024-12-05T11:17:57.334Z] Total : 23682.33 92.51 0.00 0.00 0.00 0.00 0.00
00:34:23.138
00:34:24.075 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:24.075 Nvme0n1 : 10.00 23712.90 92.63 0.00 0.00 0.00 0.00 0.00
00:34:24.075 [2024-12-05T11:17:58.271Z] ===================================================================================================================
00:34:24.075 [2024-12-05T11:17:58.271Z] Total : 23712.90 92.63 0.00 0.00 0.00 0.00 0.00
00:34:24.075
00:34:24.075
00:34:24.075 Latency(us)
00:34:24.075 [2024-12-05T11:17:58.271Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:24.075 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:24.075 Nvme0n1 : 10.01 23713.07 92.63 0.00 0.00 5394.82 3136.37 28086.86
00:34:24.075 [2024-12-05T11:17:58.271Z] ===================================================================================================================
00:34:24.075 [2024-12-05T11:17:58.271Z] Total : 23713.07 92.63 0.00 0.00 5394.82 3136.37 28086.86
00:34:24.075 {
00:34:24.075 "results": [
00:34:24.075 {
00:34:24.075 "job": "Nvme0n1",
00:34:24.075 "core_mask": "0x2",
00:34:24.075 "workload": "randwrite",
00:34:24.075 "status": "finished",
00:34:24.075 "queue_depth": 128,
00:34:24.075 "io_size": 4096,
00:34:24.075 "runtime": 10.005327,
00:34:24.075 "iops": 23713.068048650483,
00:34:24.075 "mibps": 92.62917206504095,
00:34:24.075 "io_failed": 0,
00:34:24.075 "io_timeout": 0,
00:34:24.075 "avg_latency_us": 5394.81814671934,
00:34:24.075 "min_latency_us": 3136.365714285714,
00:34:24.075 "max_latency_us": 28086.85714285714
00:34:24.075 }
00:34:24.075 ],
00:34:24.075 "core_count": 1
00:34:24.075 }
00:34:24.075 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 281176
00:34:24.075 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 281176 ']'
00:34:24.075 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 281176
00:34:24.075 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname
00:34:24.075 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:24.075 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 281176
00:34:24.075 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:34:24.075 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:34:24.075 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 281176'
00:34:24.075 killing process with pid 281176
00:34:24.075 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 281176
00:34:24.075 Received shutdown signal, test time was about 10.000000 seconds
00:34:24.075
00:34:24.075 Latency(us)
00:34:24.075 [2024-12-05T11:17:58.271Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:24.075 [2024-12-05T11:17:58.271Z] ===================================================================================================================
00:34:24.075 [2024-12-05T11:17:58.271Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:34:24.075 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 281176
00:34:24.334 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:34:24.593 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:34:24.852 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13a0d5d3-ab14-4d51-8253-eab4c5be9996
00:34:24.852 12:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:34:24.852 12:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:34:24.852 12:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]]
00:34:24.852 12:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:34:25.111 [2024-12-05 12:17:59.160016] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:34:25.111 12:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13a0d5d3-ab14-4d51-8253-eab4c5be9996
00:34:25.111 12:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0
00:34:25.111 12:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13a0d5d3-ab14-4d51-8253-eab4c5be9996
00:34:25.111 12:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:34:25.111 12:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:34:25.111 12:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:34:25.111 12:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:34:25.111 12:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:34:25.111 12:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:34:25.111 12:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:34:25.111 12:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:34:25.111 12:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13a0d5d3-ab14-4d51-8253-eab4c5be9996
00:34:25.369 request:
00:34:25.369 {
00:34:25.369 "uuid": "13a0d5d3-ab14-4d51-8253-eab4c5be9996",
00:34:25.369 "method": "bdev_lvol_get_lvstores",
00:34:25.369 "req_id": 1
00:34:25.369 }
00:34:25.369 Got JSON-RPC error response
00:34:25.369 response:
00:34:25.369 {
00:34:25.369 "code": -19,
00:34:25.369 "message": "No such device"
00:34:25.369 }
00:34:25.369 12:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1
00:34:25.369 12:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:34:25.369 12:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:34:25.369 12:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:34:25.369 12:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:34:25.627 aio_bdev
00:34:25.627 12:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 56bda43b-ee3c-4984-b702-622b472fb202
00:34:25.627 12:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=56bda43b-ee3c-4984-b702-622b472fb202
00:34:25.627 12:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:34:25.627 12:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i
00:34:25.627 12:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:34:25.627 12:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:34:25.627 12:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:34:25.627 12:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 56bda43b-ee3c-4984-b702-622b472fb202 -t 2000
00:34:25.886 [
00:34:25.886 {
00:34:25.886 "name": "56bda43b-ee3c-4984-b702-622b472fb202",
00:34:25.886 "aliases": [
00:34:25.886 "lvs/lvol"
00:34:25.886 ],
00:34:25.886 "product_name": "Logical Volume",
00:34:25.886 "block_size": 4096,
00:34:25.886 "num_blocks": 38912,
00:34:25.886 "uuid": "56bda43b-ee3c-4984-b702-622b472fb202",
00:34:25.886 "assigned_rate_limits": {
00:34:25.886 "rw_ios_per_sec": 0,
00:34:25.886 "rw_mbytes_per_sec": 0,
00:34:25.886 "r_mbytes_per_sec": 0,
00:34:25.886 "w_mbytes_per_sec": 0
00:34:25.886 },
00:34:25.886 "claimed": false,
00:34:25.886 "zoned": false,
00:34:25.886 "supported_io_types": {
00:34:25.886 "read": true,
00:34:25.886 "write": true,
00:34:25.886 "unmap": true,
00:34:25.886 "flush": false,
00:34:25.886 "reset": true,
00:34:25.886 "nvme_admin": false,
00:34:25.886 "nvme_io": false,
00:34:25.886 "nvme_io_md": false,
00:34:25.886 "write_zeroes": true,
00:34:25.886 "zcopy": false,
00:34:25.886 "get_zone_info": false,
00:34:25.886 "zone_management": false,
00:34:25.886 "zone_append": false,
00:34:25.886 "compare": false,
00:34:25.886 "compare_and_write": false,
00:34:25.886 "abort": false,
00:34:25.886 "seek_hole": true,
00:34:25.886 "seek_data": true,
00:34:25.886 "copy": false,
00:34:25.886 "nvme_iov_md": false
00:34:25.886 },
00:34:25.886 "driver_specific": {
00:34:25.886 "lvol": {
00:34:25.886 "lvol_store_uuid": "13a0d5d3-ab14-4d51-8253-eab4c5be9996",
00:34:25.886 "base_bdev": "aio_bdev",
00:34:25.886 "thin_provision": false,
00:34:25.886 "num_allocated_clusters": 38,
00:34:25.886 "snapshot": false,
00:34:25.886 "clone": false,
00:34:25.886 "esnap_clone": false
00:34:25.886 }
00:34:25.886 }
00:34:25.886 }
00:34:25.886 ]
00:34:25.886 12:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0
00:34:25.886 12:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13a0d5d3-ab14-4d51-8253-eab4c5be9996
00:34:25.886 12:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters'
00:34:26.145 12:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 ))
00:34:26.145 12:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13a0d5d3-ab14-4d51-8253-eab4c5be9996
00:34:26.145 12:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters'
00:34:26.404 12:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 ))
00:34:26.404 12:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 56bda43b-ee3c-4984-b702-622b472fb202
00:34:26.404 12:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 13a0d5d3-ab14-4d51-8253-eab4c5be9996
00:34:26.663 12:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:34:26.922 12:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:34:26.922
00:34:26.922 real 0m16.289s
00:34:26.922 user 0m15.973s
00:34:26.922 sys 0m1.524s
00:34:26.922 12:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:26.922 12:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x
00:34:26.922 ************************************
00:34:26.922 END TEST lvs_grow_clean
00:34:26.922 ************************************
00:34:26.922 12:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty
00:34:26.922 12:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:34:26.922 12:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable
00:34:26.922 12:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:34:26.922 ************************************
00:34:26.922 START TEST lvs_grow_dirty
00:34:26.922 ************************************
00:34:26.922 12:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty
00:34:26.922 12:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:34:26.922 12:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:34:26.922 12:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:34:26.922 12:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:34:26.922 12:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400
00:34:26.922 12:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
00:34:26.922 12:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:34:26.922 12:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:34:26.922 12:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:34:27.181 12:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev
00:34:27.181 12:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
00:34:27.439 12:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=5ef19d3a-ac5e-4c81-8977-86e051c5ddca
00:34:27.439 12:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ef19d3a-ac5e-4c81-8977-86e051c5ddca
00:34:27.439 12:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters'
00:34:27.698 12:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49
00:34:27.698 12:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 ))
00:34:27.698 12:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5ef19d3a-ac5e-4c81-8977-86e051c5ddca lvol 150
00:34:27.698 12:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=97b56807-8ec6-46d2-866d-5f95bbb49eb6
00:34:27.698 12:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:34:27.698 12:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
00:34:27.957 [2024-12-05 12:18:02.047963] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400
00:34:27.957 [2024-12-05 12:18:02.048093] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:34:27.957 true
00:34:27.957 12:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ef19d3a-ac5e-4c81-8977-86e051c5ddca
00:34:27.957 12:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters'
00:34:28.216 12:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 ))
00:34:28.216 12:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:34:28.475 12:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 97b56807-8ec6-46d2-866d-5f95bbb49eb6
00:34:28.475 12:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:34:28.734 [2024-12-05 12:18:02.824468] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:28.734 12:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:34:28.993 12:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=284091
00:34:28.993 12:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:34:28.993 12:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
00:34:28.993 12:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 284091 /var/tmp/bdevperf.sock
00:34:28.994 12:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 284091 ']'
00:34:28.994 12:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:34:28.994 12:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100
00:34:28.994 12:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:34:28.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:34:28.994 12:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable
00:34:28.994 12:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:34:28.994 [2024-12-05 12:18:03.089780] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization...
00:34:28.994 [2024-12-05 12:18:03.089832] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid284091 ]
00:34:29.253 [2024-12-05 12:18:03.163311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:29.253 [2024-12-05 12:18:03.204719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:34:29.253 12:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:34:29.253 12:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0
00:34:29.253 12:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:34:29.513 Nvme0n1
00:34:29.513 12:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:34:29.772 [
00:34:29.772 {
00:34:29.772 "name": "Nvme0n1",
00:34:29.772 "aliases": [
00:34:29.772 "97b56807-8ec6-46d2-866d-5f95bbb49eb6"
00:34:29.772 ],
00:34:29.772 "product_name": "NVMe disk",
00:34:29.772 "block_size": 4096,
00:34:29.772 "num_blocks": 38912,
00:34:29.772 "uuid": "97b56807-8ec6-46d2-866d-5f95bbb49eb6",
00:34:29.772 "numa_id": 1,
00:34:29.772 "assigned_rate_limits": {
00:34:29.772 "rw_ios_per_sec": 0,
00:34:29.772 "rw_mbytes_per_sec": 0,
00:34:29.772 "r_mbytes_per_sec": 0,
00:34:29.772 "w_mbytes_per_sec": 0
00:34:29.772 },
00:34:29.772 "claimed": false,
00:34:29.772 "zoned": false,
00:34:29.772 "supported_io_types": {
00:34:29.772 "read": true,
00:34:29.772 "write": true,
00:34:29.772 "unmap": true,
00:34:29.772 "flush": true,
00:34:29.772 "reset": true,
00:34:29.772 "nvme_admin": true,
00:34:29.772 "nvme_io": true,
00:34:29.772 "nvme_io_md": false,
00:34:29.772 "write_zeroes": true,
00:34:29.772 "zcopy": false,
00:34:29.772 "get_zone_info": false,
00:34:29.772 "zone_management": false,
00:34:29.772 "zone_append": false,
00:34:29.772 "compare": true,
00:34:29.772 "compare_and_write": true,
00:34:29.772 "abort": true,
00:34:29.772 "seek_hole": false,
00:34:29.772 "seek_data": false,
00:34:29.772 "copy": true,
00:34:29.772 "nvme_iov_md": false
00:34:29.772 },
00:34:29.772 "memory_domains": [
00:34:29.773 {
00:34:29.773 "dma_device_id": "system",
00:34:29.773 "dma_device_type": 1
00:34:29.773 }
00:34:29.773 ],
00:34:29.773 "driver_specific": {
00:34:29.773 "nvme": [
00:34:29.773 {
00:34:29.773 "trid": {
00:34:29.773 "trtype": "TCP",
00:34:29.773 "adrfam": "IPv4",
00:34:29.773 "traddr": "10.0.0.2",
00:34:29.773 "trsvcid": "4420",
00:34:29.773 "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:34:29.773 },
00:34:29.773 "ctrlr_data": {
00:34:29.773 "cntlid": 1,
00:34:29.773 "vendor_id": "0x8086",
00:34:29.773 "model_number": "SPDK bdev Controller",
00:34:29.773 "serial_number": "SPDK0",
00:34:29.773 "firmware_revision": "25.01",
00:34:29.773 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:34:29.773 "oacs": {
00:34:29.773 "security": 0,
00:34:29.773 "format": 0,
00:34:29.773 "firmware": 0,
00:34:29.773 "ns_manage": 0
00:34:29.773 },
00:34:29.773 "multi_ctrlr": true,
00:34:29.773 "ana_reporting": false
00:34:29.773 },
00:34:29.773 "vs": {
00:34:29.773 "nvme_version": "1.3"
00:34:29.773 },
00:34:29.773 "ns_data": {
00:34:29.773 "id": 1,
00:34:29.773 "can_share": true
00:34:29.773 }
00:34:29.773 }
00:34:29.773 ],
00:34:29.773 "mp_policy": "active_passive"
00:34:29.773 }
00:34:29.773 }
00:34:29.773 ]
00:34:29.773 12:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=284120
00:34:29.773 12:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:34:29.773 12:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:34:29.773 Running I/O for 10 seconds...
00:34:31.158 Latency(us)
00:34:31.158 [2024-12-05T11:18:05.354Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:31.158 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:31.158 Nvme0n1 : 1.00 22987.00 89.79 0.00 0.00 0.00 0.00 0.00
00:34:31.158 [2024-12-05T11:18:05.354Z] ===================================================================================================================
00:34:31.158 [2024-12-05T11:18:05.354Z] Total : 22987.00 89.79 0.00 0.00 0.00 0.00 0.00
00:34:31.158
00:34:31.775 12:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5ef19d3a-ac5e-4c81-8977-86e051c5ddca
00:34:31.775 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:31.775 Nvme0n1 : 2.00 23241.00 90.79 0.00 0.00 0.00 0.00 0.00
00:34:31.775 [2024-12-05T11:18:05.971Z] ===================================================================================================================
00:34:31.775 [2024-12-05T11:18:05.971Z] Total : 23241.00 90.79 0.00 0.00 0.00 0.00 0.00
00:34:31.775
00:34:32.097 true
00:34:32.097 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ef19d3a-ac5e-4c81-8977-86e051c5ddca
00:34:32.097 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:34:32.097 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:34:32.097 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:34:32.097 12:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 284120
00:34:33.034 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:33.034 Nvme0n1 : 3.00 23368.00 91.28 0.00 0.00 0.00 0.00 0.00
00:34:33.034 [2024-12-05T11:18:07.230Z] ===================================================================================================================
00:34:33.034 [2024-12-05T11:18:07.231Z] Total : 23368.00 91.28 0.00 0.00 0.00 0.00 0.00
00:34:33.035
00:34:33.970 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:33.970 Nvme0n1 : 4.00 23463.25 91.65 0.00 0.00 0.00 0.00 0.00
00:34:33.970 [2024-12-05T11:18:08.166Z] ===================================================================================================================
00:34:33.970 [2024-12-05T11:18:08.166Z] Total : 23463.25 91.65 0.00 0.00 0.00 0.00 0.00
00:34:33.970
00:34:34.905 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:34.905 Nvme0n1 : 5.00 23469.60 91.68 0.00 0.00 0.00 0.00 0.00
00:34:34.905 [2024-12-05T11:18:09.101Z] ===================================================================================================================
00:34:34.905 [2024-12-05T11:18:09.101Z] Total : 23469.60 91.68 0.00 0.00 0.00 0.00 0.00
00:34:34.905
00:34:35.894 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:35.894 Nvme0n1 : 6.00 23516.17 91.86 0.00 0.00 0.00 0.00 0.00 00:34:35.894 [2024-12-05T11:18:10.090Z] =================================================================================================================== 00:34:35.894 [2024-12-05T11:18:10.090Z] Total : 23516.17 91.86 0.00 0.00 0.00 0.00 0.00 00:34:35.894 00:34:36.829 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:36.829 Nvme0n1 : 7.00 23549.43 91.99 0.00 0.00 0.00 0.00 0.00 00:34:36.829 [2024-12-05T11:18:11.025Z] =================================================================================================================== 00:34:36.829 [2024-12-05T11:18:11.025Z] Total : 23549.43 91.99 0.00 0.00 0.00 0.00 0.00 00:34:36.829 00:34:38.206 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:38.206 Nvme0n1 : 8.00 23594.50 92.17 0.00 0.00 0.00 0.00 0.00 00:34:38.206 [2024-12-05T11:18:12.402Z] =================================================================================================================== 00:34:38.206 [2024-12-05T11:18:12.402Z] Total : 23594.50 92.17 0.00 0.00 0.00 0.00 0.00 00:34:38.206 00:34:38.801 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:38.801 Nvme0n1 : 9.00 23639.89 92.34 0.00 0.00 0.00 0.00 0.00 00:34:38.801 [2024-12-05T11:18:12.997Z] =================================================================================================================== 00:34:38.801 [2024-12-05T11:18:12.997Z] Total : 23639.89 92.34 0.00 0.00 0.00 0.00 0.00 00:34:38.801 00:34:40.179 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:40.179 Nvme0n1 : 10.00 23676.20 92.49 0.00 0.00 0.00 0.00 0.00 00:34:40.179 [2024-12-05T11:18:14.375Z] =================================================================================================================== 00:34:40.179 [2024-12-05T11:18:14.375Z] Total : 23676.20 92.49 0.00 0.00 0.00 0.00 0.00 00:34:40.179 00:34:40.179 
00:34:40.179 Latency(us) 00:34:40.179 [2024-12-05T11:18:14.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:40.179 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:40.179 Nvme0n1 : 10.00 23673.75 92.48 0.00 0.00 5403.30 3214.38 25590.25 00:34:40.179 [2024-12-05T11:18:14.375Z] =================================================================================================================== 00:34:40.179 [2024-12-05T11:18:14.375Z] Total : 23673.75 92.48 0.00 0.00 5403.30 3214.38 25590.25 00:34:40.179 { 00:34:40.179 "results": [ 00:34:40.179 { 00:34:40.179 "job": "Nvme0n1", 00:34:40.179 "core_mask": "0x2", 00:34:40.179 "workload": "randwrite", 00:34:40.179 "status": "finished", 00:34:40.179 "queue_depth": 128, 00:34:40.179 "io_size": 4096, 00:34:40.179 "runtime": 10.003782, 00:34:40.179 "iops": 23673.746589040024, 00:34:40.179 "mibps": 92.47557261343759, 00:34:40.179 "io_failed": 0, 00:34:40.179 "io_timeout": 0, 00:34:40.179 "avg_latency_us": 5403.30369557686, 00:34:40.179 "min_latency_us": 3214.384761904762, 00:34:40.179 "max_latency_us": 25590.24761904762 00:34:40.179 } 00:34:40.179 ], 00:34:40.179 "core_count": 1 00:34:40.179 } 00:34:40.179 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 284091 00:34:40.179 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 284091 ']' 00:34:40.179 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 284091 00:34:40.179 12:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:34:40.179 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:40.179 12:18:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 284091 00:34:40.179 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:40.179 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:40.179 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 284091' 00:34:40.179 killing process with pid 284091 00:34:40.179 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 284091 00:34:40.179 Received shutdown signal, test time was about 10.000000 seconds 00:34:40.179 00:34:40.179 Latency(us) 00:34:40.179 [2024-12-05T11:18:14.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:40.179 [2024-12-05T11:18:14.375Z] =================================================================================================================== 00:34:40.179 [2024-12-05T11:18:14.375Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:40.179 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 284091 00:34:40.179 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:40.438 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:40.438 12:18:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ef19d3a-ac5e-4c81-8977-86e051c5ddca 00:34:40.438 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:34:40.697 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:34:40.697 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:34:40.697 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 280843 00:34:40.697 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 280843 00:34:40.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 280843 Killed "${NVMF_APP[@]}" "$@" 00:34:40.697 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:34:40.697 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:34:40.697 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:34:40.697 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:40.697 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:40.697 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@328 -- # nvmfpid=286341 00:34:40.697 12:18:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@329 -- # waitforlisten 286341 00:34:40.697 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:34:40.697 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 286341 ']' 00:34:40.697 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:40.697 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:40.697 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:40.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:40.697 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:40.697 12:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:40.697 [2024-12-05 12:18:14.873251] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:40.697 [2024-12-05 12:18:14.874185] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:34:40.697 [2024-12-05 12:18:14.874224] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:40.956 [2024-12-05 12:18:14.938779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:40.956 [2024-12-05 12:18:14.979937] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:40.956 [2024-12-05 12:18:14.979973] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:40.956 [2024-12-05 12:18:14.979980] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:40.956 [2024-12-05 12:18:14.979986] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:40.956 [2024-12-05 12:18:14.979992] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:40.956 [2024-12-05 12:18:14.980531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:40.956 [2024-12-05 12:18:15.047612] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:40.956 [2024-12-05 12:18:15.047825] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:34:40.956 12:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:40.956 12:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:34:40.956 12:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:34:40.956 12:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:40.956 12:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:40.956 12:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:40.956 12:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:41.214 [2024-12-05 12:18:15.285865] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:34:41.214 [2024-12-05 12:18:15.286072] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:34:41.214 [2024-12-05 12:18:15.286157] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:34:41.214 12:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:34:41.214 12:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 97b56807-8ec6-46d2-866d-5f95bbb49eb6 00:34:41.214 12:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=97b56807-8ec6-46d2-866d-5f95bbb49eb6 00:34:41.214 12:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:41.214 12:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:34:41.214 12:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:41.214 12:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:41.214 12:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:41.472 12:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 97b56807-8ec6-46d2-866d-5f95bbb49eb6 -t 2000 00:34:41.742 [ 00:34:41.742 { 00:34:41.742 "name": "97b56807-8ec6-46d2-866d-5f95bbb49eb6", 00:34:41.742 "aliases": [ 00:34:41.742 "lvs/lvol" 00:34:41.742 ], 00:34:41.742 "product_name": "Logical Volume", 00:34:41.742 "block_size": 4096, 00:34:41.742 "num_blocks": 38912, 00:34:41.742 "uuid": "97b56807-8ec6-46d2-866d-5f95bbb49eb6", 00:34:41.742 "assigned_rate_limits": { 00:34:41.742 "rw_ios_per_sec": 0, 00:34:41.742 "rw_mbytes_per_sec": 0, 00:34:41.742 "r_mbytes_per_sec": 0, 00:34:41.742 "w_mbytes_per_sec": 0 00:34:41.742 }, 00:34:41.742 "claimed": false, 00:34:41.742 "zoned": false, 00:34:41.742 "supported_io_types": { 00:34:41.742 "read": true, 00:34:41.742 "write": true, 00:34:41.742 "unmap": true, 00:34:41.742 "flush": false, 00:34:41.742 "reset": true, 00:34:41.742 "nvme_admin": false, 00:34:41.742 "nvme_io": false, 00:34:41.742 "nvme_io_md": false, 00:34:41.742 "write_zeroes": true, 
00:34:41.742 "zcopy": false, 00:34:41.742 "get_zone_info": false, 00:34:41.742 "zone_management": false, 00:34:41.742 "zone_append": false, 00:34:41.742 "compare": false, 00:34:41.742 "compare_and_write": false, 00:34:41.742 "abort": false, 00:34:41.742 "seek_hole": true, 00:34:41.742 "seek_data": true, 00:34:41.743 "copy": false, 00:34:41.743 "nvme_iov_md": false 00:34:41.743 }, 00:34:41.743 "driver_specific": { 00:34:41.743 "lvol": { 00:34:41.743 "lvol_store_uuid": "5ef19d3a-ac5e-4c81-8977-86e051c5ddca", 00:34:41.743 "base_bdev": "aio_bdev", 00:34:41.743 "thin_provision": false, 00:34:41.743 "num_allocated_clusters": 38, 00:34:41.743 "snapshot": false, 00:34:41.743 "clone": false, 00:34:41.743 "esnap_clone": false 00:34:41.743 } 00:34:41.743 } 00:34:41.743 } 00:34:41.743 ] 00:34:41.743 12:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:34:41.743 12:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ef19d3a-ac5e-4c81-8977-86e051c5ddca 00:34:41.743 12:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:34:41.743 12:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:34:41.743 12:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ef19d3a-ac5e-4c81-8977-86e051c5ddca 00:34:41.743 12:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:34:42.002 12:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:34:42.002 12:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:42.261 [2024-12-05 12:18:16.248989] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:34:42.261 12:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ef19d3a-ac5e-4c81-8977-86e051c5ddca 00:34:42.261 12:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:34:42.261 12:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ef19d3a-ac5e-4c81-8977-86e051c5ddca 00:34:42.261 12:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:42.261 12:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:42.261 12:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:42.261 12:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:42.261 12:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:42.261 12:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:42.261 12:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:42.261 12:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:34:42.261 12:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ef19d3a-ac5e-4c81-8977-86e051c5ddca 00:34:42.520 request: 00:34:42.520 { 00:34:42.520 "uuid": "5ef19d3a-ac5e-4c81-8977-86e051c5ddca", 00:34:42.520 "method": "bdev_lvol_get_lvstores", 00:34:42.520 "req_id": 1 00:34:42.520 } 00:34:42.520 Got JSON-RPC error response 00:34:42.520 response: 00:34:42.520 { 00:34:42.520 "code": -19, 00:34:42.520 "message": "No such device" 00:34:42.520 } 00:34:42.520 12:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:34:42.520 12:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:42.520 12:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:42.520 12:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:42.520 12:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:42.520 aio_bdev 00:34:42.520 12:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 97b56807-8ec6-46d2-866d-5f95bbb49eb6 00:34:42.520 12:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=97b56807-8ec6-46d2-866d-5f95bbb49eb6 00:34:42.520 12:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:42.520 12:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:34:42.520 12:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:42.520 12:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:42.520 12:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:42.779 12:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 97b56807-8ec6-46d2-866d-5f95bbb49eb6 -t 2000 00:34:43.036 [ 00:34:43.036 { 00:34:43.036 "name": "97b56807-8ec6-46d2-866d-5f95bbb49eb6", 00:34:43.036 "aliases": [ 00:34:43.036 "lvs/lvol" 00:34:43.036 ], 00:34:43.036 "product_name": "Logical Volume", 00:34:43.036 "block_size": 4096, 00:34:43.036 "num_blocks": 38912, 00:34:43.036 "uuid": "97b56807-8ec6-46d2-866d-5f95bbb49eb6", 00:34:43.036 "assigned_rate_limits": { 00:34:43.036 "rw_ios_per_sec": 0, 00:34:43.036 "rw_mbytes_per_sec": 0, 00:34:43.036 
"r_mbytes_per_sec": 0, 00:34:43.036 "w_mbytes_per_sec": 0 00:34:43.036 }, 00:34:43.037 "claimed": false, 00:34:43.037 "zoned": false, 00:34:43.037 "supported_io_types": { 00:34:43.037 "read": true, 00:34:43.037 "write": true, 00:34:43.037 "unmap": true, 00:34:43.037 "flush": false, 00:34:43.037 "reset": true, 00:34:43.037 "nvme_admin": false, 00:34:43.037 "nvme_io": false, 00:34:43.037 "nvme_io_md": false, 00:34:43.037 "write_zeroes": true, 00:34:43.037 "zcopy": false, 00:34:43.037 "get_zone_info": false, 00:34:43.037 "zone_management": false, 00:34:43.037 "zone_append": false, 00:34:43.037 "compare": false, 00:34:43.037 "compare_and_write": false, 00:34:43.037 "abort": false, 00:34:43.037 "seek_hole": true, 00:34:43.037 "seek_data": true, 00:34:43.037 "copy": false, 00:34:43.037 "nvme_iov_md": false 00:34:43.037 }, 00:34:43.037 "driver_specific": { 00:34:43.037 "lvol": { 00:34:43.037 "lvol_store_uuid": "5ef19d3a-ac5e-4c81-8977-86e051c5ddca", 00:34:43.037 "base_bdev": "aio_bdev", 00:34:43.037 "thin_provision": false, 00:34:43.037 "num_allocated_clusters": 38, 00:34:43.037 "snapshot": false, 00:34:43.037 "clone": false, 00:34:43.037 "esnap_clone": false 00:34:43.037 } 00:34:43.037 } 00:34:43.037 } 00:34:43.037 ] 00:34:43.037 12:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:34:43.037 12:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:34:43.037 12:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ef19d3a-ac5e-4c81-8977-86e051c5ddca 00:34:43.294 12:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:34:43.294 12:18:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ef19d3a-ac5e-4c81-8977-86e051c5ddca 00:34:43.294 12:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:34:43.294 12:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:34:43.294 12:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 97b56807-8ec6-46d2-866d-5f95bbb49eb6 00:34:43.552 12:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5ef19d3a-ac5e-4c81-8977-86e051c5ddca 00:34:43.810 12:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:44.069 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:44.069 00:34:44.069 real 0m17.060s 00:34:44.069 user 0m34.401s 00:34:44.069 sys 0m3.912s 00:34:44.069 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:44.069 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:44.069 ************************************ 00:34:44.069 END TEST lvs_grow_dirty 00:34:44.069 ************************************ 
00:34:44.069 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:34:44.069 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:34:44.069 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:34:44.069 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:34:44.069 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:34:44.069 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:34:44.069 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:34:44.069 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:34:44.069 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:34:44.069 nvmf_trace.0 00:34:44.069 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:34:44.069 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:34:44.069 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@335 -- # nvmfcleanup 00:34:44.069 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@99 -- # sync 00:34:44.069 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:34:44.069 12:18:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@102 -- # set +e 00:34:44.069 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@103 -- # for i in {1..20} 00:34:44.069 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:34:44.069 rmmod nvme_tcp 00:34:44.069 rmmod nvme_fabrics 00:34:44.069 rmmod nvme_keyring 00:34:44.069 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:34:44.069 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@106 -- # set -e 00:34:44.069 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@107 -- # return 0 00:34:44.069 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # '[' -n 286341 ']' 00:34:44.069 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@337 -- # killprocess 286341 00:34:44.069 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 286341 ']' 00:34:44.069 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 286341 00:34:44.069 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:34:44.069 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:44.069 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 286341 00:34:44.328 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:44.328 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:44.328 12:18:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 286341' 00:34:44.328 killing process with pid 286341 00:34:44.328 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 286341 00:34:44.328 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 286341 00:34:44.328 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:34:44.328 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@342 -- # nvmf_fini 00:34:44.328 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@264 -- # local dev 00:34:44.328 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@267 -- # remove_target_ns 00:34:44.328 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:34:44.328 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:34:44.328 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_target_ns 00:34:46.861 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@268 -- # delete_main_bridge 00:34:46.861 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:34:46.861 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@130 -- # return 0 00:34:46.861 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:34:46.861 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:34:46.861 
12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:34:46.861 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:34:46.861 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:34:46.861 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:34:46.861 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:34:46.861 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:34:46.861 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:34:46.861 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:34:46.861 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:34:46.861 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:34:46.861 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:34:46.861 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:34:46.861 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:34:46.861 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:34:46.861 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:34:46.861 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # _dev=0 00:34:46.861 
12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # dev_map=() 00:34:46.861 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@284 -- # iptr 00:34:46.861 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@542 -- # iptables-save 00:34:46.861 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:34:46.861 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@542 -- # iptables-restore 00:34:46.861 00:34:46.861 real 0m42.704s 00:34:46.861 user 0m52.981s 00:34:46.861 sys 0m10.381s 00:34:46.861 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:46.861 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:46.861 ************************************ 00:34:46.861 END TEST nvmf_lvs_grow 00:34:46.861 ************************************ 00:34:46.861 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@24 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:34:46.861 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:46.861 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:46.861 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:46.862 ************************************ 00:34:46.862 START TEST nvmf_bdev_io_wait 00:34:46.862 ************************************ 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 
--interrupt-mode 00:34:46.862 * Looking for test storage... 00:34:46.862 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:34:46.862 12:18:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 
00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:46.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:46.862 --rc genhtml_branch_coverage=1 00:34:46.862 --rc genhtml_function_coverage=1 00:34:46.862 --rc genhtml_legend=1 00:34:46.862 --rc geninfo_all_blocks=1 00:34:46.862 --rc geninfo_unexecuted_blocks=1 00:34:46.862 00:34:46.862 ' 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:46.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:46.862 --rc genhtml_branch_coverage=1 00:34:46.862 --rc genhtml_function_coverage=1 00:34:46.862 --rc genhtml_legend=1 00:34:46.862 --rc geninfo_all_blocks=1 00:34:46.862 --rc geninfo_unexecuted_blocks=1 00:34:46.862 00:34:46.862 ' 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:46.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:46.862 --rc genhtml_branch_coverage=1 00:34:46.862 --rc genhtml_function_coverage=1 00:34:46.862 --rc genhtml_legend=1 00:34:46.862 --rc geninfo_all_blocks=1 00:34:46.862 --rc geninfo_unexecuted_blocks=1 00:34:46.862 00:34:46.862 ' 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:46.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:46.862 
--rc genhtml_branch_coverage=1 00:34:46.862 --rc genhtml_function_coverage=1 00:34:46.862 --rc genhtml_legend=1 00:34:46.862 --rc geninfo_all_blocks=1 00:34:46.862 --rc geninfo_unexecuted_blocks=1 00:34:46.862 00:34:46.862 ' 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:46.862 12:18:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@50 -- # : 0 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:46.862 12:18:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # have_pci_nics=0 00:34:46.862 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:46.863 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:46.863 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:34:46.863 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:34:46.863 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:46.863 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # prepare_net_devs 00:34:46.863 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # local -g is_hw=no 00:34:46.863 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # remove_target_ns 00:34:46.863 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:34:46.863 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:34:46.863 12:18:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:34:46.863 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:34:46.863 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:34:46.863 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # xtrace_disable 00:34:46.863 12:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:53.436 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:53.436 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@131 -- # pci_devs=() 00:34:53.436 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@131 -- # local -a pci_devs 00:34:53.436 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@132 -- # pci_net_devs=() 00:34:53.436 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:34:53.436 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@133 -- # pci_drivers=() 00:34:53.436 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@133 -- # local -A pci_drivers 00:34:53.436 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@135 -- # net_devs=() 00:34:53.436 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@135 -- # local -ga net_devs 00:34:53.436 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@136 -- # e810=() 00:34:53.436 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@136 
-- # local -ga e810 00:34:53.436 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@137 -- # x722=() 00:34:53.436 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@137 -- # local -ga x722 00:34:53.436 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@138 -- # mlx=() 00:34:53.436 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@138 -- # local -ga mlx 00:34:53.436 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:53.436 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:53.436 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:53.436 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:53.436 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:53.436 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:53.436 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:53.436 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:53.436 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:53.436 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:53.436 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:53.436 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:53.436 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:34:53.436 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:34:53.436 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:34:53.436 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:34:53.436 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:34:53.436 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:34:53.436 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:34:53.436 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:53.436 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:53.436 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:53.437 
12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:53.437 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # [[ up == up ]] 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:53.437 Found net devices under 0000:86:00.0: cvl_0_0 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # [[ up == up ]] 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:53.437 Found net devices under 0000:86:00.1: cvl_0_1 
00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # is_hw=yes 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@257 -- # create_target_ns 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 
00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@27 -- # local -gA dev_map 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@28 -- # local -g _dev 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # ips=() 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:34:53.437 12:18:26 
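The `set_up lo NVMF_TARGET_NS_CMD` call above shows the helper pattern used throughout nvmf/setup.sh: the function receives the *name* of an array variable and dereferences it with `local -n`, so the same helper runs a command either directly or prefixed with `ip netns exec nvmf_ns_spdk`. A sketch of that nameref pattern, with `echo ns:` standing in for the namespace prefix so it runs unprivileged:

```shell
# Stand-in for NVMF_TARGET_NS_CMD=(ip netns exec nvmf_ns_spdk).
PREFIX_CMD=(echo ns:)

run_cmd() {
  local in_ns=$1; shift
  if [[ -n $in_ns ]]; then
    local -n ns=$in_ns    # nameref to the caller's command-prefix array
    "${ns[@]}" "$@"
  else
    "$@"
  fi
}

run_cmd PREFIX_CMD ip link set lo up   # prefixed: "ns: ip link set lo up"
run_cmd ""         echo direct         # no namespace: runs the command as-is
```

Passing the variable name rather than its expansion is what lets the trace reuse one `set_up`/`set_ip` implementation both inside and outside the target namespace. (`local -n` requires bash 4.3+.)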
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:34:53.437 12:18:26 
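The `setup_interfaces` arithmetic visible above draws addresses from an integer pool (0x0a000001), two consecutive values per initiator/target pair — `ips=("$ip" $((++ip)))` in the trace — and bounds the loop so the last octet never exceeds 255. A sketch of that allocation:

```shell
ip_pool=$((0x0a000001))   # 167772161, i.e. 10.0.0.1
no=1                      # number of initiator/target pairs to set up

pairs=()
# Mirrors (( (_dev + no) * 2 <= 255 )): two addresses consumed per pair.
for (( _dev = 0; _dev < no && (_dev + 1) * 2 <= 255; _dev++ )); do
  initiator=$(( ip_pool + _dev * 2 ))
  target=$(( initiator + 1 ))
  pairs+=("$initiator:$target")
done
echo "${pairs[@]}"   # -> 167772161:167772162
```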
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772161 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:34:53.437 10.0.0.1 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:34:53.437 12:18:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772162 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:34:53.437 10.0.0.2 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:34:53.437 
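The `val_to_ip` calls above turn the pool integers into dotted-quad strings with `printf '%u.%u.%u.%u'`. A self-contained reconstruction that splits the 32-bit value into octets with shifts (the exact octet extraction is an assumption; only the printf format appears in the trace):

```shell
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 0x0a000001 -> 10.0.0.1
val_to_ip 167772162   # 0x0a000002 -> 10.0.0.2
```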
12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:34:53.437 12:18:26 
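The `ipts` call above expands to an iptables rule tagged with a comment built from its own arguments (`SPDK_NVMF:<args>`), so teardown can later locate and delete exactly the rules the test added. A sketch of that wrapper which prints the command line instead of executing it (real iptables needs root):

```shell
ipts() {
  # The real helper invokes iptables; we echo the assembled command instead.
  echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

rule=$(ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT)
echo "$rule"
```

Note how `$*` reproduces the full argument list inside the comment, matching the `SPDK_NVMF:-I INPUT 1 ...` comment visible in the expanded command in the log.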
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@38 -- # ping_ips 1 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=initiator0 00:34:53.437 12:18:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 
10.0.0.1 00:34:53.437 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:53.437 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.406 ms 00:34:53.437 00:34:53.437 --- 10.0.0.1 ping statistics --- 00:34:53.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:53.437 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev target0 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=target0 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:34:53.437 12:18:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:34:53.437 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:34:53.438 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:53.438 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:34:53.438 00:34:53.438 --- 10.0.0.2 ping statistics --- 00:34:53.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:53.438 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # (( pair++ )) 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # return 0 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:34:53.438 12:18:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=initiator0 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:34:53.438 12:18:26 
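The `get_ip_address` steps above recover an interface's IP by reading back `/sys/class/net/<dev>/ifalias`, which `set_ip` wrote earlier via `tee`. A sketch of that round trip with a temp directory standing in for `/sys/class/net`:

```shell
net=$(mktemp -d)   # stand-in for /sys/class/net
mkdir -p "$net/cvl_0_0"

set_ip_alias()   { echo "$2" | tee "$net/$1/ifalias" >/dev/null; }
get_ip_address() { cat "$net/$1/ifalias"; }

set_ip_alias cvl_0_0 10.0.0.1
get_ip_address cvl_0_0   # -> 10.0.0.1
```

Storing the address in `ifalias` lets later helpers resolve a logical name (initiator0, target0) to its IP without re-parsing `ip addr` output.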
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=initiator1 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # return 1 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev= 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@169 -- # return 0 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:53.438 12:18:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev target0 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=target0 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:34:53.438 
12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev target1 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=target1 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # return 1 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev= 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@169 -- # return 0 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:34:53.438 12:18:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # nvmfpid=290411 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # waitforlisten 290411 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 290411 ']' 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:53.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:53.438 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:53.438 [2024-12-05 12:18:26.881132] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:53.438 [2024-12-05 12:18:26.882035] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:34:53.438 [2024-12-05 12:18:26.882068] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:53.438 [2024-12-05 12:18:26.957455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:53.438 [2024-12-05 12:18:27.000406] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:53.438 [2024-12-05 12:18:27.000442] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:53.438 [2024-12-05 12:18:27.000448] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:53.438 [2024-12-05 12:18:27.000454] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:53.438 [2024-12-05 12:18:27.000459] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
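The `waitforlisten` step above polls until the freshly started nvmf_tgt creates its RPC socket at /var/tmp/spdk.sock, with a retry budget (`max_retries=100` in the trace). A generic sketch of that readiness loop (function name and exact check are assumptions; the trace only shows the wait message and retry count):

```shell
# Polls for a UNIX-domain socket at $1; gives up after $2 attempts.
wait_for_rpc_sock() {
  local sock=$1 retries=${2:-100}
  while (( retries-- > 0 )); do
    [[ -S $sock ]] && return 0
    sleep 0.1
  done
  return 1
}

tmpdir=$(mktemp -d)
wait_for_rpc_sock "$tmpdir/spdk.sock" 2 || echo "timed out"
```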
00:34:53.438 [2024-12-05 12:18:27.002030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:53.438 [2024-12-05 12:18:27.002141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:53.438 [2024-12-05 12:18:27.002248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:53.438 [2024-12-05 12:18:27.002248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:53.438 [2024-12-05 12:18:27.002604] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:53.438 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:53.438 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:34:53.438 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:34:53.438 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:53.438 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:53.438 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:53.438 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:34:53.438 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.438 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:53.438 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.438 12:18:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:34:53.438 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.438 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:53.438 [2024-12-05 12:18:27.127350] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:53.438 [2024-12-05 12:18:27.127522] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:53.438 [2024-12-05 12:18:27.127929] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:53.438 [2024-12-05 12:18:27.128119] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:34:53.438 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.438 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:53.438 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.438 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:53.438 [2024-12-05 12:18:27.138888] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:53.438 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.438 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:53.438 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.438 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:53.438 Malloc0 00:34:53.438 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.438 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:53.438 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.438 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:53.438 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.438 12:18:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:53.438 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.438 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:53.438 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.438 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:53.438 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.438 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:53.438 [2024-12-05 12:18:27.211275] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:53.438 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.438 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=290443 00:34:53.438 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:34:53.438 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:34:53.438 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=290445 00:34:53.438 12:18:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:34:53.438 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:34:53.438 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:34:53.438 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:34:53.438 { 00:34:53.438 "params": { 00:34:53.438 "name": "Nvme$subsystem", 00:34:53.438 "trtype": "$TEST_TRANSPORT", 00:34:53.438 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:53.438 "adrfam": "ipv4", 00:34:53.438 "trsvcid": "$NVMF_PORT", 00:34:53.439 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:53.439 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:53.439 "hdgst": ${hdgst:-false}, 00:34:53.439 "ddgst": ${ddgst:-false} 00:34:53.439 }, 00:34:53.439 "method": "bdev_nvme_attach_controller" 00:34:53.439 } 00:34:53.439 EOF 00:34:53.439 )") 00:34:53.439 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:34:53.439 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:34:53.439 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=290447 00:34:53.439 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:34:53.439 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:34:53.439 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:34:53.439 12:18:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:34:53.439 { 00:34:53.439 "params": { 00:34:53.439 "name": "Nvme$subsystem", 00:34:53.439 "trtype": "$TEST_TRANSPORT", 00:34:53.439 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:53.439 "adrfam": "ipv4", 00:34:53.439 "trsvcid": "$NVMF_PORT", 00:34:53.439 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:53.439 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:53.439 "hdgst": ${hdgst:-false}, 00:34:53.439 "ddgst": ${ddgst:-false} 00:34:53.439 }, 00:34:53.439 "method": "bdev_nvme_attach_controller" 00:34:53.439 } 00:34:53.439 EOF 00:34:53.439 )") 00:34:53.439 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:34:53.439 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=290450 00:34:53.439 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:34:53.439 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:34:53.439 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:34:53.439 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:34:53.439 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:34:53.439 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:34:53.439 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:34:53.439 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:34:53.439 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:34:53.439 { 00:34:53.439 "params": { 00:34:53.439 "name": "Nvme$subsystem", 00:34:53.439 "trtype": "$TEST_TRANSPORT", 00:34:53.439 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:53.439 "adrfam": "ipv4", 00:34:53.439 "trsvcid": "$NVMF_PORT", 00:34:53.439 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:53.439 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:53.439 "hdgst": ${hdgst:-false}, 00:34:53.439 "ddgst": ${ddgst:-false} 00:34:53.439 }, 00:34:53.439 "method": "bdev_nvme_attach_controller" 00:34:53.439 } 00:34:53.439 EOF 00:34:53.439 )") 00:34:53.439 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:34:53.439 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:34:53.439 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:34:53.439 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:34:53.439 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:34:53.439 { 00:34:53.439 "params": { 00:34:53.439 "name": "Nvme$subsystem", 00:34:53.439 "trtype": "$TEST_TRANSPORT", 00:34:53.439 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:53.439 "adrfam": "ipv4", 00:34:53.439 "trsvcid": "$NVMF_PORT", 00:34:53.439 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:53.439 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:53.439 "hdgst": ${hdgst:-false}, 00:34:53.439 "ddgst": ${ddgst:-false} 00:34:53.439 }, 00:34:53.439 "method": 
"bdev_nvme_attach_controller" 00:34:53.439 } 00:34:53.439 EOF 00:34:53.439 )") 00:34:53.439 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:34:53.439 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 290443 00:34:53.439 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:34:53.439 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:34:53.439 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:34:53.439 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:34:53.439 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:34:53.439 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:34:53.439 "params": { 00:34:53.439 "name": "Nvme1", 00:34:53.439 "trtype": "tcp", 00:34:53.439 "traddr": "10.0.0.2", 00:34:53.439 "adrfam": "ipv4", 00:34:53.439 "trsvcid": "4420", 00:34:53.439 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:53.439 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:53.439 "hdgst": false, 00:34:53.439 "ddgst": false 00:34:53.439 }, 00:34:53.439 "method": "bdev_nvme_attach_controller" 00:34:53.439 }' 00:34:53.439 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 
00:34:53.439 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:34:53.439 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:34:53.439 "params": { 00:34:53.439 "name": "Nvme1", 00:34:53.439 "trtype": "tcp", 00:34:53.439 "traddr": "10.0.0.2", 00:34:53.439 "adrfam": "ipv4", 00:34:53.439 "trsvcid": "4420", 00:34:53.439 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:53.439 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:53.439 "hdgst": false, 00:34:53.439 "ddgst": false 00:34:53.439 }, 00:34:53.439 "method": "bdev_nvme_attach_controller" 00:34:53.439 }' 00:34:53.439 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:34:53.439 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:34:53.439 "params": { 00:34:53.439 "name": "Nvme1", 00:34:53.439 "trtype": "tcp", 00:34:53.439 "traddr": "10.0.0.2", 00:34:53.439 "adrfam": "ipv4", 00:34:53.439 "trsvcid": "4420", 00:34:53.439 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:53.439 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:53.439 "hdgst": false, 00:34:53.439 "ddgst": false 00:34:53.439 }, 00:34:53.439 "method": "bdev_nvme_attach_controller" 00:34:53.439 }' 00:34:53.439 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:34:53.439 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:34:53.439 "params": { 00:34:53.439 "name": "Nvme1", 00:34:53.439 "trtype": "tcp", 00:34:53.439 "traddr": "10.0.0.2", 00:34:53.439 "adrfam": "ipv4", 00:34:53.439 "trsvcid": "4420", 00:34:53.439 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:53.439 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:53.439 "hdgst": false, 00:34:53.439 "ddgst": false 00:34:53.439 }, 00:34:53.439 "method": "bdev_nvme_attach_controller" 
00:34:53.439 }' 00:34:53.439 [2024-12-05 12:18:27.263190] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:34:53.439 [2024-12-05 12:18:27.263242] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:34:53.439 [2024-12-05 12:18:27.265209] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:34:53.439 [2024-12-05 12:18:27.265251] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:34:53.439 [2024-12-05 12:18:27.265262] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:34:53.439 [2024-12-05 12:18:27.265301] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:34:53.439 [2024-12-05 12:18:27.268256] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:34:53.439 [2024-12-05 12:18:27.268303] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:34:53.439 [2024-12-05 12:18:27.451898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:53.439 [2024-12-05 12:18:27.494995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:53.439 [2024-12-05 12:18:27.545308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:53.439 [2024-12-05 12:18:27.585722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:34:53.698 [2024-12-05 12:18:27.636082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:53.698 [2024-12-05 12:18:27.687861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:53.698 [2024-12-05 12:18:27.695855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:53.698 [2024-12-05 12:18:27.738504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:53.698 Running I/O for 1 seconds... 00:34:53.698 Running I/O for 1 seconds... 00:34:53.956 Running I/O for 1 seconds... 00:34:53.956 Running I/O for 1 seconds... 
00:34:54.890 13055.00 IOPS, 51.00 MiB/s 00:34:54.890 Latency(us) 00:34:54.890 [2024-12-05T11:18:29.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:54.890 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:34:54.890 Nvme1n1 : 1.01 13096.28 51.16 0.00 0.00 9740.72 3308.01 11484.40 00:34:54.890 [2024-12-05T11:18:29.086Z] =================================================================================================================== 00:34:54.890 [2024-12-05T11:18:29.086Z] Total : 13096.28 51.16 0.00 0.00 9740.72 3308.01 11484.40 00:34:54.890 242576.00 IOPS, 947.56 MiB/s 00:34:54.890 Latency(us) 00:34:54.890 [2024-12-05T11:18:29.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:54.890 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:34:54.890 Nvme1n1 : 1.00 242211.48 946.14 0.00 0.00 525.97 222.35 1490.16 00:34:54.890 [2024-12-05T11:18:29.086Z] =================================================================================================================== 00:34:54.890 [2024-12-05T11:18:29.086Z] Total : 242211.48 946.14 0.00 0.00 525.97 222.35 1490.16 00:34:54.890 10803.00 IOPS, 42.20 MiB/s [2024-12-05T11:18:29.086Z] 11391.00 IOPS, 44.50 MiB/s 00:34:54.890 Latency(us) 00:34:54.890 [2024-12-05T11:18:29.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:54.890 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:34:54.890 Nvme1n1 : 1.01 11467.92 44.80 0.00 0.00 11129.22 4181.82 16976.94 00:34:54.890 [2024-12-05T11:18:29.086Z] =================================================================================================================== 00:34:54.890 [2024-12-05T11:18:29.086Z] Total : 11467.92 44.80 0.00 0.00 11129.22 4181.82 16976.94 00:34:54.890 00:34:54.890 Latency(us) 00:34:54.890 [2024-12-05T11:18:29.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:34:54.890 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:34:54.890 Nvme1n1 : 1.01 10879.46 42.50 0.00 0.00 11728.45 1505.77 17725.93 00:34:54.890 [2024-12-05T11:18:29.086Z] =================================================================================================================== 00:34:54.890 [2024-12-05T11:18:29.086Z] Total : 10879.46 42.50 0.00 0.00 11728.45 1505.77 17725.93 00:34:54.890 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 290445 00:34:54.890 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 290447 00:34:54.890 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 290450 00:34:54.890 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:54.890 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.890 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:55.149 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.149 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:34:55.149 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:34:55.149 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # nvmfcleanup 00:34:55.149 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@99 -- # sync 00:34:55.149 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 
00:34:55.149 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # set +e 00:34:55.149 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # for i in {1..20} 00:34:55.149 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:34:55.149 rmmod nvme_tcp 00:34:55.149 rmmod nvme_fabrics 00:34:55.149 rmmod nvme_keyring 00:34:55.149 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:34:55.149 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # set -e 00:34:55.149 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # return 0 00:34:55.149 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # '[' -n 290411 ']' 00:34:55.149 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@337 -- # killprocess 290411 00:34:55.149 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 290411 ']' 00:34:55.149 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 290411 00:34:55.149 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:34:55.149 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:55.149 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 290411 00:34:55.149 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:55.149 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:55.149 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 290411' 00:34:55.149 killing process with pid 290411 00:34:55.149 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 290411 00:34:55.149 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 290411 00:34:55.149 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:34:55.149 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # nvmf_fini 00:34:55.149 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@264 -- # local dev 00:34:55.149 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@267 -- # remove_target_ns 00:34:55.149 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:34:55.149 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:34:55.149 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:34:57.680 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@268 -- # delete_main_bridge 00:34:57.680 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:34:57.680 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@130 -- # return 0 00:34:57.680 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:34:57.680 12:18:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:34:57.680 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:34:57.680 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:34:57.680 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:34:57.680 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:34:57.680 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:34:57.680 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:34:57.681 12:18:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # _dev=0 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # dev_map=() 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@284 -- # iptr 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@542 -- # iptables-save 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@542 -- # iptables-restore 00:34:57.681 00:34:57.681 real 0m10.797s 00:34:57.681 user 0m14.810s 00:34:57.681 sys 0m6.594s 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:57.681 ************************************ 00:34:57.681 END TEST nvmf_bdev_io_wait 00:34:57.681 ************************************ 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@25 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:57.681 ************************************ 00:34:57.681 START TEST nvmf_queue_depth 
00:34:57.681 ************************************ 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:34:57.681 * Looking for test storage... 00:34:57.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:34:57.681 12:18:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:34:57.681 12:18:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:57.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:57.681 --rc genhtml_branch_coverage=1 00:34:57.681 --rc genhtml_function_coverage=1 00:34:57.681 --rc genhtml_legend=1 00:34:57.681 --rc geninfo_all_blocks=1 00:34:57.681 --rc geninfo_unexecuted_blocks=1 00:34:57.681 00:34:57.681 ' 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:57.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:57.681 --rc genhtml_branch_coverage=1 00:34:57.681 --rc genhtml_function_coverage=1 00:34:57.681 --rc genhtml_legend=1 00:34:57.681 --rc geninfo_all_blocks=1 00:34:57.681 --rc geninfo_unexecuted_blocks=1 00:34:57.681 00:34:57.681 ' 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:57.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:57.681 --rc genhtml_branch_coverage=1 00:34:57.681 --rc genhtml_function_coverage=1 00:34:57.681 --rc genhtml_legend=1 00:34:57.681 --rc geninfo_all_blocks=1 00:34:57.681 --rc geninfo_unexecuted_blocks=1 00:34:57.681 
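The `cmp_versions`/`lt` trace above splits dotted version strings on `.`/`-`/`:` with `IFS` and compares the components numerically, padding the shorter version with zeros. A minimal standalone sketch of that idiom (the function name `ver_lt` is hypothetical, not SPDK's):

```shell
# Hypothetical ver_lt: return success when dotted version $1 < $2,
# mirroring the IFS-split-and-compare idiom in scripts/common.sh above.
ver_lt() {
    local -a v1 v2
    local IFS=.-:                 # split on dots, dashes, colons, as in the log
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # missing components count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1                      # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"   # → 1.15 < 2
```

Note the comparison is component-wise numeric, so `1.9 < 1.15` holds (9 < 15), unlike a plain string comparison.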
00:34:57.681 ' 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:57.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:57.681 --rc genhtml_branch_coverage=1 00:34:57.681 --rc genhtml_function_coverage=1 00:34:57.681 --rc genhtml_legend=1 00:34:57.681 --rc geninfo_all_blocks=1 00:34:57.681 --rc geninfo_unexecuted_blocks=1 00:34:57.681 00:34:57.681 ' 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:34:57.681 12:18:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:34:57.681 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:57.682 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:57.682 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:34:57.682 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:57.682 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:57.682 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:57.682 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:57.682 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:57.682 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:57.682 12:18:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:34:57.682 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:57.682 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:34:57.682 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:34:57.682 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:34:57.682 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:34:57.682 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@50 -- # : 0 00:34:57.682 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:34:57.682 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:34:57.682 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:34:57.682 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@27 -- # NVMF_APP+=(-i 
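The `paths/export.sh` lines above prepend the same toolchain directories every time the script is sourced, which is why the exported PATH accumulates many duplicate entries. A sketch of one way to de-duplicate a PATH while preserving order (`dedup_path` is a hypothetical helper, not part of SPDK):

```shell
# Hypothetical dedup_path: print $1 with duplicate colon-separated
# entries removed, keeping the first occurrence of each directory.
dedup_path() {
    local seen=: out= dir
    local IFS=:                   # split the PATH string on colons
    for dir in $1; do
        case $seen in *":$dir:"*) continue ;; esac   # already emitted
        seen+="$dir:"
        out+=${out:+:}$dir        # join with ':' except before the first entry
    done
    printf '%s\n' "$out"
}

dedup_path "/a/bin:/b/bin:/a/bin"   # → /a/bin:/b/bin
```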
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:57.682 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:57.682 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:34:57.682 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:34:57.682 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:34:57.682 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:34:57.682 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@54 -- # have_pci_nics=0 00:34:57.682 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:34:57.682 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:34:57.682 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:57.682 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:34:57.682 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:34:57.682 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:57.682 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@296 -- # prepare_net_devs 00:34:57.682 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # local -g is_hw=no 00:34:57.682 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@260 -- # remove_target_ns 00:34:57.682 12:18:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:34:57.682 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:34:57.682 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:34:57.682 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:34:57.682 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:34:57.682 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # xtrace_disable 00:34:57.682 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@131 -- # pci_devs=() 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@131 -- # local -a pci_devs 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@132 -- # pci_net_devs=() 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@133 -- # pci_drivers=() 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@133 -- # local -A pci_drivers 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@135 -- # net_devs=() 00:35:04.254 12:18:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@135 -- # local -ga net_devs 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@136 -- # e810=() 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@136 -- # local -ga e810 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@137 -- # x722=() 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@137 -- # local -ga x722 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@138 -- # mlx=() 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@138 -- # local -ga mlx 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@154 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:04.254 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 
00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:04.254 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@227 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@234 -- # [[ up == up ]] 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:04.254 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:04.254 Found net devices under 0000:86:00.0: cvl_0_0 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@234 -- # [[ up == up ]] 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@243 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:04.255 Found net devices under 0000:86:00.1: cvl_0_1 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # is_hw=yes 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@257 -- # create_target_ns 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:04.255 12:18:37 
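The discovery loop above finds the network interface behind each NVMe-capable PCI address by globbing its `net/` directory in sysfs, then keeps only the basename via the `${arr[@]##*/}` expansion. A self-contained sketch of that expansion (the sysfs path below is a sample string standing in for a real glob result):

```shell
# Sketch of the sysfs discovery idiom from nvmf/common.sh above.
pci=0000:86:00.0
# In the real script this is a glob: ("/sys/bus/pci/devices/$pci/net/"*)
pci_net_devs=("/sys/bus/pci/devices/$pci/net/cvl_0_0")
# Strip the longest prefix ending in '/' from every element,
# leaving just the interface names.
pci_net_devs=("${pci_net_devs[@]##*/}")
echo "Found net devices under $pci: ${pci_net_devs[*]}"
# → Found net devices under 0000:86:00.0: cvl_0_0
```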
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@27 -- # local -gA dev_map 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@28 -- # local -g _dev 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:35:04.255 12:18:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@44 -- # ips=() 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772161 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:35:04.255 10.0.0.1 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 
00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772162 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:35:04.255 10.0.0.2 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
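The `set_ip` calls above drive `val_to_ip`, which turns the 32-bit pool value 167772161 (0x0a000001) into the dotted quad 10.0.0.1 via `printf '%u.%u.%u.%u\n'`. A sketch of that conversion, assuming the octets come from bit-shifting the value (SPDK's setup.sh may derive them differently):

```shell
# Sketch of val_to_ip: convert a 32-bit integer to dotted-quad notation
# by shifting out each octet, high byte first.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) $((  val        & 0xff ))
}

val_to_ip 167772161   # → 10.0.0.1
val_to_ip 167772162   # → 10.0.0.2
```

This is why the log's ip_pool arithmetic (`ip_pool += 2` per interface pair) yields consecutive initiator/target addresses 10.0.0.1 and 10.0.0.2.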
nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 
-j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:35:04.255 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@38 -- # ping_ips 1 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@107 
-- # local dev=initiator0 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@92 -- # ip 
netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:35:04.256 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:04.256 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.444 ms 00:35:04.256 00:35:04.256 --- 10.0.0.1 ping statistics --- 00:35:04.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:04.256 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev target0 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=target0 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:35:04.256 12:18:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:35:04.256 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:04.256 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:35:04.256 00:35:04.256 --- 10.0.0.2 ping statistics --- 00:35:04.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:04.256 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # (( pair++ )) 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@270 -- # return 0 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:35:04.256 12:18:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=initiator0 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=initiator1 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # return 1 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev= 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@169 -- # return 0 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:35:04.256 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 
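Annotation: the repeated `get_ip_address` lookups in this trace read the address back from the interface's `ifalias` sysfs attribute (written earlier via `tee`), running `cat` inside the `nvmf_ns_spdk` netns for target devices. A hedged sketch of that lookup pattern; the `SYSFS_NET` override is an assumption added here for testability, and the real helper also maps logical names like `initiator0` to physical devices via `dev_map`, which this sketch omits:

```shell
# Hedged sketch of the ifalias-based IP lookup seen in the trace.
# dev    - physical device name (e.g. cvl_0_0)
# netns  - optional network namespace to read from (e.g. nvmf_ns_spdk)
SYSFS_NET=${SYSFS_NET:-/sys/class/net}   # overridable for testing (assumption)

get_ip_address() {
  local dev=$1 netns=${2:-}
  if [ -n "$netns" ]; then
    ip netns exec "$netns" cat "$SYSFS_NET/$dev/ifalias"
  else
    cat "$SYSFS_NET/$dev/ifalias"
  fi
}
```

When the logical device has no mapping (e.g. `initiator1` above), the real helper returns an empty string, which is why `NVMF_SECOND_INITIATOR_IP` ends up unset in this run.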
00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev target0 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=target0 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@179 -- # get_ip_address target1 
NVMF_TARGET_NS_CMD 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev target1 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=target1 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # return 1 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev= 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@169 -- # return 0 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:04.257 12:18:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # nvmfpid=294261 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@329 -- # waitforlisten 294261 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 294261 ']' 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:04.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:04.257 [2024-12-05 12:18:37.755502] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:04.257 [2024-12-05 12:18:37.756416] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:35:04.257 [2024-12-05 12:18:37.756450] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:04.257 [2024-12-05 12:18:37.836336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:04.257 [2024-12-05 12:18:37.876852] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:04.257 [2024-12-05 12:18:37.876890] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:04.257 [2024-12-05 12:18:37.876897] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:04.257 [2024-12-05 12:18:37.876903] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:04.257 [2024-12-05 12:18:37.876908] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:04.257 [2024-12-05 12:18:37.877487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:04.257 [2024-12-05 12:18:37.944165] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:04.257 [2024-12-05 12:18:37.944362] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
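Annotation: earlier in this trace, the `ipts` helper opened TCP port 4420 on cvl_0_0 and tagged the rule with an `SPDK_NVMF:` comment echoing its own arguments. A hedged sketch of that wrapper pattern (the actual nvmf/common.sh implementation may differ); the comment tag lets teardown later find and delete exactly the rules this test added:

```shell
# Hedged sketch of the ipts wrapper seen at nvmf/common.sh@541 in the trace:
# apply an iptables rule and record the exact arguments in a comment so the
# rule can be matched and removed during cleanup.
ipts() {
  iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

# Example invocation matching the trace:
#   ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
```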
00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:04.257 12:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:04.257 12:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:04.257 12:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:04.257 12:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.257 12:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:04.257 [2024-12-05 12:18:38.018154] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:04.257 12:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.257 12:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:04.257 12:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.257 12:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:04.257 Malloc0 00:35:04.257 12:18:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.257 12:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:04.257 12:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.257 12:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:04.257 12:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.257 12:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:04.257 12:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.257 12:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:04.257 12:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.257 12:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:04.257 12:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.257 12:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:04.257 [2024-12-05 12:18:38.098257] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:04.257 12:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.257 
12:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=294477 00:35:04.257 12:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:04.257 12:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:35:04.257 12:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 294477 /var/tmp/bdevperf.sock 00:35:04.257 12:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 294477 ']' 00:35:04.257 12:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:04.258 12:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:04.258 12:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:04.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:04.258 12:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:04.258 12:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:04.258 [2024-12-05 12:18:38.151996] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:35:04.258 [2024-12-05 12:18:38.152042] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid294477 ] 00:35:04.258 [2024-12-05 12:18:38.229167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:04.258 [2024-12-05 12:18:38.271502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:04.258 12:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:04.258 12:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:35:04.258 12:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:35:04.258 12:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.258 12:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:04.258 NVMe0n1 00:35:04.258 12:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.258 12:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:04.516 Running I/O for 10 seconds... 
00:35:06.408 12246.00 IOPS, 47.84 MiB/s [2024-12-05T11:18:41.538Z] 12276.50 IOPS, 47.96 MiB/s [2024-12-05T11:18:42.913Z] 12302.67 IOPS, 48.06 MiB/s [2024-12-05T11:18:43.851Z] 12432.75 IOPS, 48.57 MiB/s [2024-12-05T11:18:44.787Z] 12486.40 IOPS, 48.77 MiB/s [2024-12-05T11:18:45.723Z] 12544.17 IOPS, 49.00 MiB/s [2024-12-05T11:18:46.657Z] 12585.71 IOPS, 49.16 MiB/s [2024-12-05T11:18:47.593Z] 12610.88 IOPS, 49.26 MiB/s [2024-12-05T11:18:48.970Z] 12621.78 IOPS, 49.30 MiB/s [2024-12-05T11:18:48.970Z] 12629.20 IOPS, 49.33 MiB/s 00:35:14.774 Latency(us) 00:35:14.774 [2024-12-05T11:18:48.970Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:14.774 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:35:14.774 Verification LBA range: start 0x0 length 0x4000 00:35:14.774 NVMe0n1 : 10.06 12658.77 49.45 0.00 0.00 80611.81 15978.30 50930.83 00:35:14.774 [2024-12-05T11:18:48.970Z] =================================================================================================================== 00:35:14.774 [2024-12-05T11:18:48.970Z] Total : 12658.77 49.45 0.00 0.00 80611.81 15978.30 50930.83 00:35:14.774 { 00:35:14.774 "results": [ 00:35:14.774 { 00:35:14.774 "job": "NVMe0n1", 00:35:14.774 "core_mask": "0x1", 00:35:14.774 "workload": "verify", 00:35:14.774 "status": "finished", 00:35:14.774 "verify_range": { 00:35:14.774 "start": 0, 00:35:14.774 "length": 16384 00:35:14.774 }, 00:35:14.774 "queue_depth": 1024, 00:35:14.774 "io_size": 4096, 00:35:14.774 "runtime": 10.058959, 00:35:14.774 "iops": 12658.76518633787, 00:35:14.774 "mibps": 49.44830150913231, 00:35:14.774 "io_failed": 0, 00:35:14.774 "io_timeout": 0, 00:35:14.774 "avg_latency_us": 80611.80927441666, 00:35:14.774 "min_latency_us": 15978.300952380952, 00:35:14.774 "max_latency_us": 50930.834285714285 00:35:14.774 } 00:35:14.774 ], 00:35:14.775 "core_count": 1 00:35:14.775 } 00:35:14.775 12:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 294477 00:35:14.775 12:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 294477 ']' 00:35:14.775 12:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 294477 00:35:14.775 12:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:35:14.775 12:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:14.775 12:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 294477 00:35:14.775 12:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:14.775 12:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:14.775 12:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 294477' 00:35:14.775 killing process with pid 294477 00:35:14.775 12:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 294477 00:35:14.775 Received shutdown signal, test time was about 10.000000 seconds 00:35:14.775 00:35:14.775 Latency(us) 00:35:14.775 [2024-12-05T11:18:48.971Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:14.775 [2024-12-05T11:18:48.971Z] =================================================================================================================== 00:35:14.775 [2024-12-05T11:18:48.971Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:14.775 12:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 294477 00:35:14.775 12:18:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:35:14.775 12:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:35:14.775 12:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@335 -- # nvmfcleanup 00:35:14.775 12:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@99 -- # sync 00:35:14.775 12:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:35:14.775 12:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@102 -- # set +e 00:35:14.775 12:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@103 -- # for i in {1..20} 00:35:14.775 12:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:35:14.775 rmmod nvme_tcp 00:35:14.775 rmmod nvme_fabrics 00:35:14.775 rmmod nvme_keyring 00:35:14.775 12:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:35:14.775 12:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@106 -- # set -e 00:35:14.775 12:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@107 -- # return 0 00:35:14.775 12:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # '[' -n 294261 ']' 00:35:14.775 12:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@337 -- # killprocess 294261 00:35:14.775 12:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 294261 ']' 00:35:14.775 12:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 294261 00:35:14.775 12:18:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:35:14.775 12:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:14.775 12:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 294261 00:35:14.775 12:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:14.775 12:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:14.775 12:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 294261' 00:35:14.775 killing process with pid 294261 00:35:14.775 12:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 294261 00:35:14.775 12:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 294261 00:35:15.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:35:15.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@342 -- # nvmf_fini 00:35:15.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@264 -- # local dev 00:35:15.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@267 -- # remove_target_ns 00:35:15.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:35:15.033 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:35:15.033 12:18:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@268 -- # delete_main_bridge 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@130 -- # return 0 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:35:17.564 12:18:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@41 -- # _dev=0 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@41 -- # dev_map=() 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@284 -- # iptr 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@542 -- # iptables-save 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@542 -- # iptables-restore 00:35:17.564 00:35:17.564 real 0m19.706s 00:35:17.564 user 0m22.752s 00:35:17.564 sys 0m6.199s 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:17.564 ************************************ 00:35:17.564 END TEST nvmf_queue_depth 00:35:17.564 ************************************ 00:35:17.564 12:18:51 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:17.564 ************************************ 00:35:17.564 START TEST nvmf_nmic 00:35:17.564 ************************************ 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:35:17.564 * Looking for test storage... 00:35:17.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
scripts/common.sh@336 -- # IFS=.-: 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:35:17.564 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:17.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.565 --rc genhtml_branch_coverage=1 00:35:17.565 --rc 
genhtml_function_coverage=1 00:35:17.565 --rc genhtml_legend=1 00:35:17.565 --rc geninfo_all_blocks=1 00:35:17.565 --rc geninfo_unexecuted_blocks=1 00:35:17.565 00:35:17.565 ' 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:17.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.565 --rc genhtml_branch_coverage=1 00:35:17.565 --rc genhtml_function_coverage=1 00:35:17.565 --rc genhtml_legend=1 00:35:17.565 --rc geninfo_all_blocks=1 00:35:17.565 --rc geninfo_unexecuted_blocks=1 00:35:17.565 00:35:17.565 ' 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:17.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.565 --rc genhtml_branch_coverage=1 00:35:17.565 --rc genhtml_function_coverage=1 00:35:17.565 --rc genhtml_legend=1 00:35:17.565 --rc geninfo_all_blocks=1 00:35:17.565 --rc geninfo_unexecuted_blocks=1 00:35:17.565 00:35:17.565 ' 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:17.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.565 --rc genhtml_branch_coverage=1 00:35:17.565 --rc genhtml_function_coverage=1 00:35:17.565 --rc genhtml_legend=1 00:35:17.565 --rc geninfo_all_blocks=1 00:35:17.565 --rc geninfo_unexecuted_blocks=1 00:35:17.565 00:35:17.565 ' 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- 
# [[ -e /bin/wpdk_common.sh ]] 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:35:17.565 12:18:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@50 -- # : 0 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@54 -- # have_pci_nics=0 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@296 -- # 
prepare_net_devs 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # local -g is_hw=no 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@260 -- # remove_target_ns 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # xtrace_disable 00:35:17.565 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@131 -- # pci_devs=() 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@131 -- # local -a pci_devs 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@132 -- # pci_net_devs=() 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@133 -- # pci_drivers=() 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@133 -- # local -A 
pci_drivers 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@135 -- # net_devs=() 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@135 -- # local -ga net_devs 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@136 -- # e810=() 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@136 -- # local -ga e810 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@137 -- # x722=() 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@137 -- # local -ga x722 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@138 -- # mlx=() 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@138 -- # local -ga mlx 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:24.134 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:24.134 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:35:24.134 
12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@234 -- # [[ up == up ]] 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:24.134 Found net devices under 0000:86:00.0: cvl_0_0 00:35:24.134 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@234 -- # [[ up == up ]] 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:24.135 Found net devices under 0000:86:00.1: cvl_0_1 00:35:24.135 12:18:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # is_hw=yes 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@257 -- # create_target_ns 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:24.135 
12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@27 -- # local -gA dev_map 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@28 -- # local -g _dev 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@44 -- # ips=() 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 
00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/setup.sh@207 -- # val_to_ip 167772161 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772161 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:35:24.135 10.0.0.1 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772162 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@13 -- # 
printf '%u.%u.%u.%u\n' 10 0 0 2 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:35:24.135 10.0.0.2 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:24.135 12:18:57 
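The `val_to_ip` calls traced above turn the integer IP pool values (167772161, 167772162) into dotted-quad addresses. A minimal re-implementation of that helper, assuming only standard bash arithmetic (the bit-shift form here is my own sketch; the script in the log may unpack the octets differently):

```shell
# Sketch of the val_to_ip helper seen in the trace: unpack a 32-bit integer
# into dotted-quad notation (167772161 == 0x0A000001 -> 10.0.0.1).
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # prints 10.0.0.1
val_to_ip 167772162   # prints 10.0.0.2
```

This also explains the pool arithmetic in the trace: each interface pair consumes two consecutive values, so pair 0 gets 10.0.0.1/10.0.0.2.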
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@38 -- # ping_ips 1 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:35:24.135 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # (( pair 
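The per-pair setup traced to this point condenses to a handful of `ip`/`iptables` commands. The device names (`cvl_0_0`/`cvl_0_1`), namespace name, addresses, and port all come from this log; the sketch below is a dry run that only prints the privileged commands, since applying them requires root and these exact NICs:

```shell
# Dry-run sketch of the interface-pair setup in the trace above.
# Swap the body of run() for "$@" and execute as root to apply for real.
run() { echo "+ $*"; }

run ip netns add nvmf_ns_spdk
run ip netns exec nvmf_ns_spdk ip link set lo up
run ip link set cvl_0_1 netns nvmf_ns_spdk                 # target NIC moves into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_0                    # initiator side stays in the root ns
run ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
run ip link set cvl_0_0 up
run ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
run iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
```

The namespace split is what lets one physical host act as both initiator (root namespace, 10.0.0.1) and target (`nvmf_ns_spdk`, 10.0.0.2) over real e810 ports.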
= 0 )) 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=initiator0 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:35:24.136 12:18:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:35:24.136 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:24.136 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.449 ms 00:35:24.136 00:35:24.136 --- 10.0.0.1 ping statistics --- 00:35:24.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:24.136 rtt min/avg/max/mdev = 0.449/0.449/0.449/0.000 ms 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:24.136 12:18:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev target0 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=target0 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:35:24.136 12:18:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:35:24.136 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:24.136 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:35:24.136 00:35:24.136 --- 10.0.0.2 ping statistics --- 00:35:24.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:24.136 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # (( pair++ )) 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@270 -- # return 0 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 
-- # [[ -n '' ]] 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=initiator0 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:35:24.136 12:18:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=initiator1 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # return 1 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # dev= 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@169 -- # return 0 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev target0 00:35:24.136 12:18:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=target0 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:35:24.136 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:35:24.137 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:35:24.137 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:24.137 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:35:24.137 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:35:24.137 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:35:24.137 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:35:24.137 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 
-- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:24.137 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:24.137 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev target1 00:35:24.137 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=target1 00:35:24.137 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:35:24.137 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:35:24.137 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # return 1 00:35:24.137 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # dev= 00:35:24.137 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@169 -- # return 0 00:35:24.137 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:35:24.137 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:24.137 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:35:24.137 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:35:24.137 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:24.137 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:35:24.137 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:35:24.137 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:35:24.137 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
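The repeated `get_ip_address`/`cat .../ifalias` calls above resolve `NVMF_FIRST_INITIATOR_IP` and `NVMF_FIRST_TARGET_IP` by reading back the alias the setup phase wrote into sysfs. A testable sketch of that lookup (the `SYSFS_ROOT` override is my addition so it can run against a fake tree; on a real host it defaults to `/sys`):

```shell
# Sketch of the ifalias lookup the trace uses to map a device name to its IP.
get_ip_address() { cat "${SYSFS_ROOT:-/sys}/class/net/$1/ifalias"; }

# Example against a fake sysfs tree (a real host just omits SYSFS_ROOT):
mkdir -p /tmp/fakesys/class/net/cvl_0_0
echo 10.0.0.1 > /tmp/fakesys/class/net/cvl_0_0/ifalias
SYSFS_ROOT=/tmp/fakesys get_ip_address cvl_0_0   # prints 10.0.0.1
```

Because `target1`/`initiator1` have no entry in `dev_map`, the same lookup returns nothing for them, which is why `NVMF_SECOND_INITIATOR_IP` and `NVMF_SECOND_TARGET_IP` end up empty in the trace.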
-- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:35:24.137 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:24.137 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:24.137 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # nvmfpid=299628 00:35:24.137 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:35:24.137 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@329 -- # waitforlisten 299628 00:35:24.137 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 299628 ']' 00:35:24.137 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:24.137 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:24.137 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:24.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:24.137 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:24.137 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:24.137 [2024-12-05 12:18:57.572232] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:24.137 [2024-12-05 12:18:57.573376] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:35:24.137 [2024-12-05 12:18:57.573418] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:24.137 [2024-12-05 12:18:57.649585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:24.137 [2024-12-05 12:18:57.693911] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:24.137 [2024-12-05 12:18:57.693946] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:24.137 [2024-12-05 12:18:57.693953] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:24.137 [2024-12-05 12:18:57.693959] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:24.137 [2024-12-05 12:18:57.693964] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:24.137 [2024-12-05 12:18:57.695379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:24.137 [2024-12-05 12:18:57.695417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:24.137 [2024-12-05 12:18:57.695524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:24.137 [2024-12-05 12:18:57.695525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:24.137 [2024-12-05 12:18:57.762896] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:24.137 [2024-12-05 12:18:57.763498] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:24.137 [2024-12-05 12:18:57.763807] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:35:24.137 [2024-12-05 12:18:57.763925] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:24.137 [2024-12-05 12:18:57.764008] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:24.397 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:24.397 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:35:24.397 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:35:24.397 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:24.397 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:24.397 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:24.397 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:24.397 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.397 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:24.397 [2024-12-05 12:18:58.444205] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:24.397 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.397 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:24.397 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.397 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:24.397 Malloc0 00:35:24.397 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.397 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:24.397 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.397 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:24.397 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.397 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:24.397 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.397 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:24.397 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.397 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:24.397 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.397 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:24.397 [2024-12-05 12:18:58.524477] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:24.397 12:18:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.397 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:35:24.397 test case1: single bdev can't be used in multiple subsystems 00:35:24.397 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:35:24.397 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.397 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:24.397 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.397 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:24.398 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.398 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:24.398 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.398 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:35:24.398 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:35:24.398 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.398 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:24.398 [2024-12-05 12:18:58.559900] 
bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:35:24.398 [2024-12-05 12:18:58.559924] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:35:24.398 [2024-12-05 12:18:58.559932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:24.398 request: 00:35:24.398 { 00:35:24.398 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:35:24.398 "namespace": { 00:35:24.398 "bdev_name": "Malloc0", 00:35:24.398 "no_auto_visible": false, 00:35:24.398 "hide_metadata": false 00:35:24.398 }, 00:35:24.398 "method": "nvmf_subsystem_add_ns", 00:35:24.398 "req_id": 1 00:35:24.398 } 00:35:24.398 Got JSON-RPC error response 00:35:24.398 response: 00:35:24.398 { 00:35:24.398 "code": -32602, 00:35:24.398 "message": "Invalid parameters" 00:35:24.398 } 00:35:24.398 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:24.398 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:35:24.398 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:35:24.398 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:35:24.398 Adding namespace failed - expected result. 
00:35:24.398 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:35:24.398 test case2: host connect to nvmf target in multiple paths 00:35:24.398 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:35:24.398 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.398 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:24.398 [2024-12-05 12:18:58.571997] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:35:24.398 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.398 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:24.967 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:35:24.967 12:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:35:24.967 12:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:35:24.967 12:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:35:24.967 12:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:35:24.967 12:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:35:27.503 12:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:35:27.503 12:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:35:27.503 12:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:35:27.503 12:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:35:27.503 12:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:35:27.503 12:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:35:27.503 12:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:35:27.503 [global] 00:35:27.503 thread=1 00:35:27.503 invalidate=1 00:35:27.503 rw=write 00:35:27.503 time_based=1 00:35:27.503 runtime=1 00:35:27.503 ioengine=libaio 00:35:27.503 direct=1 00:35:27.503 bs=4096 00:35:27.503 iodepth=1 00:35:27.503 norandommap=0 00:35:27.503 numjobs=1 00:35:27.503 00:35:27.503 verify_dump=1 00:35:27.503 verify_backlog=512 00:35:27.503 verify_state_save=0 00:35:27.503 do_verify=1 00:35:27.503 verify=crc32c-intel 00:35:27.503 [job0] 00:35:27.503 filename=/dev/nvme0n1 00:35:27.503 Could not set queue depth (nvme0n1) 00:35:27.503 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:27.503 fio-3.35 00:35:27.503 Starting 1 thread 00:35:28.583 00:35:28.583 job0: (groupid=0, jobs=1): err= 0: pid=300448: Thu Dec 5 12:19:02 
2024 00:35:28.583 read: IOPS=23, BW=93.8KiB/s (96.1kB/s)(96.0KiB/1023msec) 00:35:28.583 slat (nsec): min=9306, max=25359, avg=21335.71, stdev=3477.92 00:35:28.583 clat (usec): min=509, max=41969, avg=39311.50, stdev=8268.02 00:35:28.583 lat (usec): min=535, max=41991, avg=39332.83, stdev=8267.19 00:35:28.583 clat percentiles (usec): 00:35:28.583 | 1.00th=[ 510], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:35:28.583 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:28.583 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:28.583 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:28.583 | 99.99th=[42206] 00:35:28.583 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:35:28.583 slat (nsec): min=9048, max=37251, avg=11084.38, stdev=1978.83 00:35:28.583 clat (usec): min=126, max=392, avg=139.91, stdev=16.96 00:35:28.583 lat (usec): min=137, max=430, avg=150.99, stdev=18.57 00:35:28.583 clat percentiles (usec): 00:35:28.583 | 1.00th=[ 129], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 135], 00:35:28.583 | 30.00th=[ 137], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 139], 00:35:28.583 | 70.00th=[ 141], 80.00th=[ 143], 90.00th=[ 147], 95.00th=[ 151], 00:35:28.583 | 99.00th=[ 178], 99.50th=[ 233], 99.90th=[ 392], 99.95th=[ 392], 00:35:28.583 | 99.99th=[ 392] 00:35:28.583 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:35:28.583 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:28.583 lat (usec) : 250=95.15%, 500=0.37%, 750=0.19% 00:35:28.583 lat (msec) : 50=4.29% 00:35:28.583 cpu : usr=0.39%, sys=0.39%, ctx=536, majf=0, minf=1 00:35:28.583 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:28.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:28.583 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:28.583 issued rwts: 
total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:28.583 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:28.583 00:35:28.583 Run status group 0 (all jobs): 00:35:28.583 READ: bw=93.8KiB/s (96.1kB/s), 93.8KiB/s-93.8KiB/s (96.1kB/s-96.1kB/s), io=96.0KiB (98.3kB), run=1023-1023msec 00:35:28.583 WRITE: bw=2002KiB/s (2050kB/s), 2002KiB/s-2002KiB/s (2050kB/s-2050kB/s), io=2048KiB (2097kB), run=1023-1023msec 00:35:28.583 00:35:28.583 Disk stats (read/write): 00:35:28.583 nvme0n1: ios=70/512, merge=0/0, ticks=799/71, in_queue=870, util=91.08% 00:35:28.583 12:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:28.842 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:35:28.843 12:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:28.843 12:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:35:28.843 12:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:35:28.843 12:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:28.843 12:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:35:28.843 12:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:28.843 12:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:35:28.843 12:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:35:28.843 12:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:35:28.843 12:19:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@335 -- # nvmfcleanup 00:35:28.843 12:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@99 -- # sync 00:35:28.843 12:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:35:28.843 12:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@102 -- # set +e 00:35:28.843 12:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@103 -- # for i in {1..20} 00:35:28.843 12:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:35:28.843 rmmod nvme_tcp 00:35:28.843 rmmod nvme_fabrics 00:35:28.843 rmmod nvme_keyring 00:35:28.843 12:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:35:28.843 12:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@106 -- # set -e 00:35:28.843 12:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@107 -- # return 0 00:35:28.843 12:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # '[' -n 299628 ']' 00:35:28.843 12:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@337 -- # killprocess 299628 00:35:28.843 12:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 299628 ']' 00:35:28.843 12:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 299628 00:35:28.843 12:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:35:28.843 12:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:28.843 12:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 299628 00:35:28.843 
12:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:28.843 12:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:28.843 12:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 299628' 00:35:28.843 killing process with pid 299628 00:35:28.843 12:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 299628 00:35:28.843 12:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 299628 00:35:29.102 12:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:35:29.102 12:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@342 -- # nvmf_fini 00:35:29.102 12:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@264 -- # local dev 00:35:29.102 12:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@267 -- # remove_target_ns 00:35:29.102 12:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:35:29.102 12:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:35:29.102 12:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:35:31.640 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@268 -- # delete_main_bridge 00:35:31.640 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:35:31.640 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@130 -- # return 0 00:35:31.640 12:19:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:35:31.640 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:35:31.640 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:35:31.640 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:35:31.640 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:35:31.640 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:35:31.640 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:35:31.640 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:35:31.640 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:35:31.640 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:35:31.640 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:35:31.640 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:35:31.640 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:35:31.640 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:35:31.640 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:35:31.640 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:35:31.640 12:19:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:35:31.640 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@41 -- # _dev=0 00:35:31.640 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@41 -- # dev_map=() 00:35:31.640 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@284 -- # iptr 00:35:31.640 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@542 -- # iptables-save 00:35:31.640 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:35:31.640 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@542 -- # iptables-restore 00:35:31.640 00:35:31.640 real 0m13.980s 00:35:31.640 user 0m24.928s 00:35:31.640 sys 0m6.148s 00:35:31.640 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:31.640 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:31.640 ************************************ 00:35:31.640 END TEST nvmf_nmic 00:35:31.640 ************************************ 00:35:31.640 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:35:31.640 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:31.640 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:31.641 ************************************ 00:35:31.641 START TEST nvmf_fio_target 00:35:31.641 ************************************ 00:35:31.641 12:19:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:35:31.641 * Looking for test storage... 00:35:31.641 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:35:31.641 
12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:35:31.641 12:19:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:31.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.641 --rc genhtml_branch_coverage=1 00:35:31.641 --rc genhtml_function_coverage=1 00:35:31.641 --rc genhtml_legend=1 00:35:31.641 --rc geninfo_all_blocks=1 00:35:31.641 --rc geninfo_unexecuted_blocks=1 00:35:31.641 00:35:31.641 ' 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:31.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.641 --rc genhtml_branch_coverage=1 00:35:31.641 --rc genhtml_function_coverage=1 00:35:31.641 --rc genhtml_legend=1 00:35:31.641 --rc geninfo_all_blocks=1 00:35:31.641 --rc geninfo_unexecuted_blocks=1 00:35:31.641 00:35:31.641 ' 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:31.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.641 --rc genhtml_branch_coverage=1 00:35:31.641 --rc genhtml_function_coverage=1 00:35:31.641 --rc genhtml_legend=1 00:35:31.641 --rc geninfo_all_blocks=1 00:35:31.641 --rc geninfo_unexecuted_blocks=1 00:35:31.641 00:35:31.641 ' 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # 
LCOV='lcov 00:35:31.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.641 --rc genhtml_branch_coverage=1 00:35:31.641 --rc genhtml_function_coverage=1 00:35:31.641 --rc genhtml_legend=1 00:35:31.641 --rc geninfo_all_blocks=1 00:35:31.641 --rc geninfo_unexecuted_blocks=1 00:35:31.641 00:35:31.641 ' 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:31.641 12:19:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.641 12:19:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.641 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:35:31.642 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.642 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:35:31.642 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:35:31.642 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:35:31.642 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:35:31.642 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@50 -- # : 0 00:35:31.642 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:35:31.642 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:35:31.642 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:35:31.642 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:31.642 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:31.642 12:19:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:35:31.642 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:35:31.642 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:35:31.642 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:35:31.642 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:35:31.642 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:31.642 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:31.642 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:31.642 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:35:31.642 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:35:31.642 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:31.642 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:35:31.642 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:35:31.642 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@260 -- # remove_target_ns 00:35:31.642 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:35:31.642 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:35:31.642 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:35:31.642 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:35:31.642 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:35:31.642 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # xtrace_disable 00:35:31.642 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@131 -- # pci_devs=() 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@131 -- # local -a pci_devs 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@132 -- # pci_net_devs=() 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@133 -- # pci_drivers=() 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@133 -- # local -A pci_drivers 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@135 -- # net_devs=() 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@135 -- # local -ga net_devs 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@136 -- # e810=() 00:35:36.918 12:19:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@136 -- # local -ga e810 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@137 -- # x722=() 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@137 -- # local -ga x722 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@138 -- # mlx=() 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@138 -- # local -ga mlx 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:36.918 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:36.918 
12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:36.918 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:35:36.918 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@233 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:36.919 Found net devices under 0000:86:00.0: cvl_0_0 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:36.919 Found net devices under 0000:86:00.1: cvl_0_1 00:35:36.919 12:19:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # is_hw=yes 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@257 -- # create_target_ns 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:35:36.919 12:19:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@28 -- # local -g _dev 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@44 -- # ips=() 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:35:36.919 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:35:37.179 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:35:37.179 
12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:35:37.179 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:37.179 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:35:37.179 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772161 00:35:37.179 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:35:37.179 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:35:37.179 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:35:37.179 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:35:37.179 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:35:37.180 10.0.0.1 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:37.180 12:19:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772162 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:35:37.180 10.0.0.2 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:35:37.180 12:19:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@85 -- # 
dev_map["$key_target"]=cvl_0_1 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@38 -- # ping_ips 1 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=initiator0 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target 
-- nvmf/setup.sh@110 -- # echo cvl_0_0 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:35:37.180 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:37.180 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.343 ms 00:35:37.180 00:35:37.180 --- 10.0.0.1 ping statistics --- 00:35:37.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:37.180 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=target0 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias' 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:35:37.180 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:37.180 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:35:37.180 00:35:37.180 --- 10.0.0.2 ping statistics --- 00:35:37.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:37.180 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # (( pair++ )) 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@270 -- # return 0 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:35:37.180 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:35:37.440 12:19:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=initiator0 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@165 
-- # local dev=initiator1 in_ns= ip 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=initiator1 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # return 1 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev= 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@169 -- # return 0 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:37.440 12:19:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=target0 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:35:37.440 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:35:37.440 12:19:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:35:37.441 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:37.441 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:37.441 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev target1 00:35:37.441 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=target1 00:35:37.441 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:35:37.441 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:35:37.441 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # return 1 00:35:37.441 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev= 00:35:37.441 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@169 -- # return 0 00:35:37.441 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:35:37.441 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:37.441 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:35:37.441 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:35:37.441 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:37.441 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # '[' tcp == tcp 
']' 00:35:37.441 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:35:37.441 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:35:37.441 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:35:37.441 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:37.441 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:37.441 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # nvmfpid=304102 00:35:37.441 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@329 -- # waitforlisten 304102 00:35:37.441 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:35:37.441 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 304102 ']' 00:35:37.441 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:37.441 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:37.441 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:37.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:37.441 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:37.441 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:37.441 [2024-12-05 12:19:11.527578] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:37.441 [2024-12-05 12:19:11.528480] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:35:37.441 [2024-12-05 12:19:11.528514] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:37.441 [2024-12-05 12:19:11.606354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:37.699 [2024-12-05 12:19:11.648549] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:37.699 [2024-12-05 12:19:11.648586] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:37.700 [2024-12-05 12:19:11.648594] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:37.700 [2024-12-05 12:19:11.648600] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:37.700 [2024-12-05 12:19:11.648606] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:37.700 [2024-12-05 12:19:11.650110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:37.700 [2024-12-05 12:19:11.650221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:37.700 [2024-12-05 12:19:11.650329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:37.700 [2024-12-05 12:19:11.650331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:37.700 [2024-12-05 12:19:11.718023] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:37.700 [2024-12-05 12:19:11.718405] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:37.700 [2024-12-05 12:19:11.718861] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:37.700 [2024-12-05 12:19:11.719096] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:37.700 [2024-12-05 12:19:11.719144] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:35:37.700 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:37.700 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:35:37.700 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:35:37.700 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:37.700 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:37.700 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:37.700 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:37.958 [2024-12-05 12:19:11.954990] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:37.958 12:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:38.217 12:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:35:38.217 12:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:38.476 12:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:35:38.476 12:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:38.476 
12:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:35:38.476 12:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:38.734 12:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:35:38.734 12:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:35:38.992 12:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:39.248 12:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:35:39.248 12:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:39.505 12:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:35:39.505 12:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:39.505 12:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:35:39.505 12:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:35:39.761 12:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:40.018 12:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:35:40.018 12:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:40.275 12:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:35:40.275 12:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:35:40.275 12:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:40.534 [2024-12-05 12:19:14.602926] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:40.534 12:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:35:40.792 12:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:35:41.049 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n 
nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:41.306 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:35:41.306 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:35:41.306 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:35:41.306 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:35:41.306 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:35:41.306 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:35:43.209 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:35:43.209 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:35:43.209 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:35:43.209 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:35:43.209 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:35:43.209 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:35:43.209 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:35:43.209 [global] 00:35:43.209 thread=1 00:35:43.209 invalidate=1 
00:35:43.209 rw=write 00:35:43.209 time_based=1 00:35:43.209 runtime=1 00:35:43.209 ioengine=libaio 00:35:43.209 direct=1 00:35:43.209 bs=4096 00:35:43.209 iodepth=1 00:35:43.209 norandommap=0 00:35:43.209 numjobs=1 00:35:43.209 00:35:43.209 verify_dump=1 00:35:43.209 verify_backlog=512 00:35:43.209 verify_state_save=0 00:35:43.209 do_verify=1 00:35:43.209 verify=crc32c-intel 00:35:43.209 [job0] 00:35:43.209 filename=/dev/nvme0n1 00:35:43.209 [job1] 00:35:43.209 filename=/dev/nvme0n2 00:35:43.209 [job2] 00:35:43.209 filename=/dev/nvme0n3 00:35:43.209 [job3] 00:35:43.209 filename=/dev/nvme0n4 00:35:43.466 Could not set queue depth (nvme0n1) 00:35:43.466 Could not set queue depth (nvme0n2) 00:35:43.466 Could not set queue depth (nvme0n3) 00:35:43.466 Could not set queue depth (nvme0n4) 00:35:43.466 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:43.466 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:43.466 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:43.466 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:43.466 fio-3.35 00:35:43.466 Starting 4 threads 00:35:44.844 00:35:44.844 job0: (groupid=0, jobs=1): err= 0: pid=305378: Thu Dec 5 12:19:18 2024 00:35:44.844 read: IOPS=21, BW=85.3KiB/s (87.3kB/s)(88.0KiB/1032msec) 00:35:44.844 slat (nsec): min=10964, max=24028, avg=14056.05, stdev=3610.80 00:35:44.844 clat (usec): min=40742, max=41056, avg=40970.03, stdev=56.40 00:35:44.844 lat (usec): min=40753, max=41072, avg=40984.09, stdev=56.65 00:35:44.844 clat percentiles (usec): 00:35:44.844 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:44.844 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:44.844 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 
00:35:44.844 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:35:44.844 | 99.99th=[41157] 00:35:44.844 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:35:44.844 slat (nsec): min=10479, max=37080, avg=12709.12, stdev=2033.23 00:35:44.844 clat (usec): min=168, max=317, avg=238.40, stdev=11.49 00:35:44.844 lat (usec): min=184, max=329, avg=251.11, stdev=11.20 00:35:44.844 clat percentiles (usec): 00:35:44.844 | 1.00th=[ 188], 5.00th=[ 225], 10.00th=[ 229], 20.00th=[ 235], 00:35:44.844 | 30.00th=[ 237], 40.00th=[ 239], 50.00th=[ 239], 60.00th=[ 241], 00:35:44.844 | 70.00th=[ 243], 80.00th=[ 245], 90.00th=[ 249], 95.00th=[ 253], 00:35:44.844 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 318], 99.95th=[ 318], 00:35:44.844 | 99.99th=[ 318] 00:35:44.844 bw ( KiB/s): min= 4096, max= 4096, per=22.93%, avg=4096.00, stdev= 0.00, samples=1 00:35:44.844 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:44.844 lat (usec) : 250=89.14%, 500=6.74% 00:35:44.844 lat (msec) : 50=4.12% 00:35:44.844 cpu : usr=0.68%, sys=0.68%, ctx=534, majf=0, minf=1 00:35:44.844 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:44.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.844 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.844 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:44.844 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:44.844 job1: (groupid=0, jobs=1): err= 0: pid=305379: Thu Dec 5 12:19:18 2024 00:35:44.844 read: IOPS=831, BW=3325KiB/s (3404kB/s)(3328KiB/1001msec) 00:35:44.844 slat (nsec): min=6813, max=30656, avg=8269.40, stdev=2518.19 00:35:44.844 clat (usec): min=179, max=41974, avg=968.14, stdev=5325.44 00:35:44.844 lat (usec): min=187, max=41996, avg=976.41, stdev=5327.24 00:35:44.844 clat percentiles (usec): 00:35:44.844 | 1.00th=[ 192], 5.00th=[ 231], 10.00th=[ 239], 20.00th=[ 
243], 00:35:44.844 | 30.00th=[ 245], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 249], 00:35:44.844 | 70.00th=[ 249], 80.00th=[ 251], 90.00th=[ 258], 95.00th=[ 359], 00:35:44.844 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:35:44.844 | 99.99th=[42206] 00:35:44.844 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:35:44.844 slat (nsec): min=4256, max=34358, avg=10685.45, stdev=2896.56 00:35:44.844 clat (usec): min=131, max=372, avg=165.25, stdev=25.19 00:35:44.844 lat (usec): min=141, max=379, avg=175.93, stdev=25.95 00:35:44.844 clat percentiles (usec): 00:35:44.844 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 147], 00:35:44.844 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 159], 60.00th=[ 163], 00:35:44.844 | 70.00th=[ 172], 80.00th=[ 180], 90.00th=[ 192], 95.00th=[ 206], 00:35:44.844 | 99.00th=[ 245], 99.50th=[ 285], 99.90th=[ 326], 99.95th=[ 371], 00:35:44.844 | 99.99th=[ 371] 00:35:44.844 bw ( KiB/s): min= 4096, max= 4096, per=22.93%, avg=4096.00, stdev= 0.00, samples=1 00:35:44.844 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:44.844 lat (usec) : 250=87.66%, 500=11.48%, 750=0.05% 00:35:44.844 lat (msec) : 50=0.81% 00:35:44.844 cpu : usr=0.80%, sys=1.90%, ctx=1858, majf=0, minf=1 00:35:44.844 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:44.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.844 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.844 issued rwts: total=832,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:44.844 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:44.844 job2: (groupid=0, jobs=1): err= 0: pid=305380: Thu Dec 5 12:19:18 2024 00:35:44.844 read: IOPS=1577, BW=6310KiB/s (6462kB/s)(6348KiB/1006msec) 00:35:44.844 slat (nsec): min=5704, max=25888, avg=7749.69, stdev=1442.07 00:35:44.844 clat (usec): min=180, max=43703, avg=397.88, stdev=2725.95 
00:35:44.844 lat (usec): min=187, max=43727, avg=405.63, stdev=2726.85 00:35:44.844 clat percentiles (usec): 00:35:44.844 | 1.00th=[ 184], 5.00th=[ 186], 10.00th=[ 188], 20.00th=[ 190], 00:35:44.844 | 30.00th=[ 192], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 221], 00:35:44.844 | 70.00th=[ 243], 80.00th=[ 247], 90.00th=[ 251], 95.00th=[ 253], 00:35:44.844 | 99.00th=[ 375], 99.50th=[ 457], 99.90th=[41157], 99.95th=[43779], 00:35:44.844 | 99.99th=[43779] 00:35:44.844 write: IOPS=2035, BW=8143KiB/s (8339kB/s)(8192KiB/1006msec); 0 zone resets 00:35:44.844 slat (nsec): min=6146, max=32016, avg=10875.89, stdev=1797.17 00:35:44.844 clat (usec): min=123, max=317, avg=161.81, stdev=36.68 00:35:44.844 lat (usec): min=132, max=344, avg=172.68, stdev=36.89 00:35:44.844 clat percentiles (usec): 00:35:44.844 | 1.00th=[ 127], 5.00th=[ 130], 10.00th=[ 133], 20.00th=[ 135], 00:35:44.844 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 141], 60.00th=[ 161], 00:35:44.844 | 70.00th=[ 176], 80.00th=[ 184], 90.00th=[ 239], 95.00th=[ 247], 00:35:44.844 | 99.00th=[ 255], 99.50th=[ 258], 99.90th=[ 277], 99.95th=[ 306], 00:35:44.844 | 99.99th=[ 318] 00:35:44.844 bw ( KiB/s): min= 4096, max=12263, per=45.79%, avg=8179.50, stdev=5774.94, samples=2 00:35:44.844 iops : min= 1024, max= 3065, avg=2044.50, stdev=1443.20, samples=2 00:35:44.844 lat (usec) : 250=94.22%, 500=5.58% 00:35:44.844 lat (msec) : 50=0.19% 00:35:44.844 cpu : usr=2.59%, sys=2.79%, ctx=3635, majf=0, minf=1 00:35:44.844 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:44.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.844 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.844 issued rwts: total=1587,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:44.844 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:44.844 job3: (groupid=0, jobs=1): err= 0: pid=305381: Thu Dec 5 12:19:18 2024 00:35:44.844 read: IOPS=751, BW=3005KiB/s 
(3077kB/s)(3008KiB/1001msec) 00:35:44.844 slat (nsec): min=8422, max=23864, avg=9798.43, stdev=2034.73 00:35:44.844 clat (usec): min=217, max=41085, avg=1066.73, stdev=5694.07 00:35:44.844 lat (usec): min=227, max=41108, avg=1076.53, stdev=5695.90 00:35:44.844 clat percentiles (usec): 00:35:44.844 | 1.00th=[ 231], 5.00th=[ 235], 10.00th=[ 237], 20.00th=[ 241], 00:35:44.844 | 30.00th=[ 243], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 247], 00:35:44.844 | 70.00th=[ 249], 80.00th=[ 253], 90.00th=[ 281], 95.00th=[ 408], 00:35:44.844 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:35:44.844 | 99.99th=[41157] 00:35:44.844 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:35:44.844 slat (nsec): min=9902, max=36711, avg=12002.75, stdev=1707.54 00:35:44.844 clat (usec): min=142, max=288, avg=169.11, stdev=12.90 00:35:44.844 lat (usec): min=155, max=325, avg=181.12, stdev=13.10 00:35:44.844 clat percentiles (usec): 00:35:44.844 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 00:35:44.844 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 169], 00:35:44.844 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 182], 95.00th=[ 192], 00:35:44.844 | 99.00th=[ 210], 99.50th=[ 229], 99.90th=[ 285], 99.95th=[ 289], 00:35:44.844 | 99.99th=[ 289] 00:35:44.844 bw ( KiB/s): min= 4096, max= 4096, per=22.93%, avg=4096.00, stdev= 0.00, samples=1 00:35:44.844 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:44.844 lat (usec) : 250=89.81%, 500=9.35% 00:35:44.844 lat (msec) : 50=0.84% 00:35:44.844 cpu : usr=1.70%, sys=1.30%, ctx=1778, majf=0, minf=1 00:35:44.844 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:44.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.844 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.844 issued rwts: total=752,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:44.844 latency : 
target=0, window=0, percentile=100.00%, depth=1 00:35:44.844 00:35:44.844 Run status group 0 (all jobs): 00:35:44.844 READ: bw=12.1MiB/s (12.7MB/s), 85.3KiB/s-6310KiB/s (87.3kB/s-6462kB/s), io=12.5MiB (13.1MB), run=1001-1032msec 00:35:44.844 WRITE: bw=17.4MiB/s (18.3MB/s), 1984KiB/s-8143KiB/s (2032kB/s-8339kB/s), io=18.0MiB (18.9MB), run=1001-1032msec 00:35:44.844 00:35:44.844 Disk stats (read/write): 00:35:44.844 nvme0n1: ios=67/512, merge=0/0, ticks=718/119, in_queue=837, util=86.67% 00:35:44.844 nvme0n2: ios=558/632, merge=0/0, ticks=951/104, in_queue=1055, util=90.14% 00:35:44.844 nvme0n3: ios=1640/2048, merge=0/0, ticks=525/319, in_queue=844, util=94.79% 00:35:44.844 nvme0n4: ios=535/555, merge=0/0, ticks=1644/88, in_queue=1732, util=94.23% 00:35:44.844 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:35:44.844 [global] 00:35:44.844 thread=1 00:35:44.844 invalidate=1 00:35:44.844 rw=randwrite 00:35:44.844 time_based=1 00:35:44.844 runtime=1 00:35:44.844 ioengine=libaio 00:35:44.844 direct=1 00:35:44.844 bs=4096 00:35:44.844 iodepth=1 00:35:44.844 norandommap=0 00:35:44.844 numjobs=1 00:35:44.844 00:35:44.844 verify_dump=1 00:35:44.845 verify_backlog=512 00:35:44.845 verify_state_save=0 00:35:44.845 do_verify=1 00:35:44.845 verify=crc32c-intel 00:35:44.845 [job0] 00:35:44.845 filename=/dev/nvme0n1 00:35:44.845 [job1] 00:35:44.845 filename=/dev/nvme0n2 00:35:44.845 [job2] 00:35:44.845 filename=/dev/nvme0n3 00:35:44.845 [job3] 00:35:44.845 filename=/dev/nvme0n4 00:35:44.845 Could not set queue depth (nvme0n1) 00:35:44.845 Could not set queue depth (nvme0n2) 00:35:44.845 Could not set queue depth (nvme0n3) 00:35:44.845 Could not set queue depth (nvme0n4) 00:35:45.104 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:45.104 job1: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:45.104 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:45.104 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:45.104 fio-3.35 00:35:45.104 Starting 4 threads 00:35:46.482 00:35:46.483 job0: (groupid=0, jobs=1): err= 0: pid=305747: Thu Dec 5 12:19:20 2024 00:35:46.483 read: IOPS=2141, BW=8567KiB/s (8773kB/s)(8576KiB/1001msec) 00:35:46.483 slat (nsec): min=7173, max=46798, avg=8467.80, stdev=1526.39 00:35:46.483 clat (usec): min=187, max=501, avg=238.22, stdev=37.79 00:35:46.483 lat (usec): min=196, max=509, avg=246.69, stdev=38.00 00:35:46.483 clat percentiles (usec): 00:35:46.483 | 1.00th=[ 198], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 210], 00:35:46.483 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 227], 60.00th=[ 241], 00:35:46.483 | 70.00th=[ 247], 80.00th=[ 251], 90.00th=[ 285], 95.00th=[ 306], 00:35:46.483 | 99.00th=[ 416], 99.50th=[ 429], 99.90th=[ 449], 99.95th=[ 482], 00:35:46.483 | 99.99th=[ 502] 00:35:46.483 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:35:46.483 slat (usec): min=6, max=178, avg=11.98, stdev= 3.80 00:35:46.483 clat (usec): min=123, max=3416, avg=166.58, stdev=68.37 00:35:46.483 lat (usec): min=143, max=3428, avg=178.56, stdev=68.77 00:35:46.483 clat percentiles (usec): 00:35:46.483 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 153], 00:35:46.483 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 161], 00:35:46.483 | 70.00th=[ 167], 80.00th=[ 176], 90.00th=[ 188], 95.00th=[ 206], 00:35:46.483 | 99.00th=[ 245], 99.50th=[ 302], 99.90th=[ 408], 99.95th=[ 537], 00:35:46.483 | 99.99th=[ 3425] 00:35:46.483 bw ( KiB/s): min=10594, max=10594, per=45.97%, avg=10594.00, stdev= 0.00, samples=1 00:35:46.483 iops : min= 2648, max= 2648, avg=2648.00, stdev= 0.00, samples=1 
00:35:46.483 lat (usec) : 250=89.16%, 500=10.78%, 750=0.04% 00:35:46.483 lat (msec) : 4=0.02% 00:35:46.483 cpu : usr=4.50%, sys=7.10%, ctx=4707, majf=0, minf=1 00:35:46.483 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:46.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.483 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.483 issued rwts: total=2144,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:46.483 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:46.483 job1: (groupid=0, jobs=1): err= 0: pid=305748: Thu Dec 5 12:19:20 2024 00:35:46.483 read: IOPS=22, BW=88.4KiB/s (90.5kB/s)(92.0KiB/1041msec) 00:35:46.483 slat (nsec): min=8997, max=23182, avg=13882.26, stdev=4832.91 00:35:46.483 clat (usec): min=40795, max=41056, avg=40970.70, stdev=60.85 00:35:46.483 lat (usec): min=40805, max=41066, avg=40984.58, stdev=60.50 00:35:46.483 clat percentiles (usec): 00:35:46.483 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:35:46.483 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:46.483 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:46.483 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:35:46.483 | 99.99th=[41157] 00:35:46.483 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:35:46.483 slat (nsec): min=9068, max=36647, avg=10711.29, stdev=2186.02 00:35:46.483 clat (usec): min=135, max=275, avg=179.45, stdev=23.02 00:35:46.483 lat (usec): min=145, max=312, avg=190.16, stdev=23.92 00:35:46.483 clat percentiles (usec): 00:35:46.483 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 159], 00:35:46.483 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 182], 00:35:46.483 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 210], 95.00th=[ 229], 00:35:46.483 | 99.00th=[ 253], 99.50th=[ 258], 99.90th=[ 277], 99.95th=[ 277], 
00:35:46.483 | 99.99th=[ 277] 00:35:46.483 bw ( KiB/s): min= 4087, max= 4087, per=17.73%, avg=4087.00, stdev= 0.00, samples=1 00:35:46.483 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:35:46.483 lat (usec) : 250=94.39%, 500=1.31% 00:35:46.483 lat (msec) : 50=4.30% 00:35:46.483 cpu : usr=0.19%, sys=0.48%, ctx=535, majf=0, minf=1 00:35:46.483 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:46.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.483 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.483 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:46.483 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:46.483 job2: (groupid=0, jobs=1): err= 0: pid=305749: Thu Dec 5 12:19:20 2024 00:35:46.483 read: IOPS=22, BW=91.6KiB/s (93.8kB/s)(92.0KiB/1004msec) 00:35:46.483 slat (nsec): min=10145, max=24410, avg=19882.70, stdev=3247.54 00:35:46.483 clat (usec): min=377, max=42114, avg=39175.99, stdev=8479.99 00:35:46.483 lat (usec): min=398, max=42136, avg=39195.87, stdev=8479.73 00:35:46.483 clat percentiles (usec): 00:35:46.483 | 1.00th=[ 379], 5.00th=[38536], 10.00th=[40633], 20.00th=[40633], 00:35:46.483 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:46.483 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:35:46.483 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:46.483 | 99.99th=[42206] 00:35:46.483 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:35:46.483 slat (nsec): min=10661, max=38555, avg=11970.58, stdev=2013.98 00:35:46.483 clat (usec): min=148, max=333, avg=185.01, stdev=16.27 00:35:46.483 lat (usec): min=160, max=345, avg=196.98, stdev=16.64 00:35:46.483 clat percentiles (usec): 00:35:46.483 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 174], 00:35:46.483 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 
184], 60.00th=[ 186], 00:35:46.483 | 70.00th=[ 190], 80.00th=[ 194], 90.00th=[ 200], 95.00th=[ 208], 00:35:46.483 | 99.00th=[ 249], 99.50th=[ 281], 99.90th=[ 334], 99.95th=[ 334], 00:35:46.483 | 99.99th=[ 334] 00:35:46.483 bw ( KiB/s): min= 4087, max= 4087, per=17.73%, avg=4087.00, stdev= 0.00, samples=1 00:35:46.483 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:35:46.483 lat (usec) : 250=94.77%, 500=1.12% 00:35:46.483 lat (msec) : 50=4.11% 00:35:46.483 cpu : usr=0.30%, sys=0.60%, ctx=535, majf=0, minf=1 00:35:46.483 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:46.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.483 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.483 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:46.483 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:46.483 job3: (groupid=0, jobs=1): err= 0: pid=305750: Thu Dec 5 12:19:20 2024 00:35:46.483 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:35:46.483 slat (nsec): min=6584, max=28182, avg=8025.22, stdev=1142.21 00:35:46.483 clat (usec): min=203, max=3233, avg=250.07, stdev=97.16 00:35:46.483 lat (usec): min=211, max=3242, avg=258.10, stdev=97.54 00:35:46.483 clat percentiles (usec): 00:35:46.483 | 1.00th=[ 210], 5.00th=[ 217], 10.00th=[ 221], 20.00th=[ 229], 00:35:46.483 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 247], 00:35:46.483 | 70.00th=[ 249], 80.00th=[ 253], 90.00th=[ 277], 95.00th=[ 289], 00:35:46.483 | 99.00th=[ 433], 99.50th=[ 441], 99.90th=[ 498], 99.95th=[ 3097], 00:35:46.483 | 99.99th=[ 3228] 00:35:46.483 write: IOPS=2411, BW=9646KiB/s (9878kB/s)(9656KiB/1001msec); 0 zone resets 00:35:46.483 slat (nsec): min=9563, max=48227, avg=11401.06, stdev=2084.83 00:35:46.483 clat (usec): min=139, max=1513, avg=179.46, stdev=40.88 00:35:46.483 lat (usec): min=150, max=1525, avg=190.86, stdev=41.35 00:35:46.483 
clat percentiles (usec): 00:35:46.483 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 159], 00:35:46.483 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 176], 00:35:46.483 | 70.00th=[ 182], 80.00th=[ 192], 90.00th=[ 217], 95.00th=[ 247], 00:35:46.483 | 99.00th=[ 281], 99.50th=[ 293], 99.90th=[ 429], 99.95th=[ 486], 00:35:46.483 | 99.99th=[ 1516] 00:35:46.483 bw ( KiB/s): min= 9996, max= 9996, per=43.37%, avg=9996.00, stdev= 0.00, samples=1 00:35:46.483 iops : min= 2499, max= 2499, avg=2499.00, stdev= 0.00, samples=1 00:35:46.483 lat (usec) : 250=84.54%, 500=15.40% 00:35:46.483 lat (msec) : 2=0.02%, 4=0.04% 00:35:46.483 cpu : usr=1.90%, sys=5.10%, ctx=4463, majf=0, minf=1 00:35:46.483 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:46.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.483 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.483 issued rwts: total=2048,2414,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:46.483 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:46.483 00:35:46.483 Run status group 0 (all jobs): 00:35:46.483 READ: bw=15.9MiB/s (16.7MB/s), 88.4KiB/s-8567KiB/s (90.5kB/s-8773kB/s), io=16.6MiB (17.4MB), run=1001-1041msec 00:35:46.483 WRITE: bw=22.5MiB/s (23.6MB/s), 1967KiB/s-9.99MiB/s (2015kB/s-10.5MB/s), io=23.4MiB (24.6MB), run=1001-1041msec 00:35:46.483 00:35:46.483 Disk stats (read/write): 00:35:46.483 nvme0n1: ios=1931/2048, merge=0/0, ticks=1138/330, in_queue=1468, util=98.00% 00:35:46.483 nvme0n2: ios=67/512, merge=0/0, ticks=760/89, in_queue=849, util=88.24% 00:35:46.483 nvme0n3: ios=76/512, merge=0/0, ticks=805/87, in_queue=892, util=90.86% 00:35:46.483 nvme0n4: ios=1731/2048, merge=0/0, ticks=1379/369, in_queue=1748, util=98.54% 00:35:46.483 12:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf 
-i 4096 -d 128 -t write -r 1 -v 00:35:46.483 [global] 00:35:46.483 thread=1 00:35:46.483 invalidate=1 00:35:46.483 rw=write 00:35:46.483 time_based=1 00:35:46.483 runtime=1 00:35:46.483 ioengine=libaio 00:35:46.483 direct=1 00:35:46.483 bs=4096 00:35:46.483 iodepth=128 00:35:46.483 norandommap=0 00:35:46.483 numjobs=1 00:35:46.483 00:35:46.483 verify_dump=1 00:35:46.483 verify_backlog=512 00:35:46.483 verify_state_save=0 00:35:46.483 do_verify=1 00:35:46.483 verify=crc32c-intel 00:35:46.483 [job0] 00:35:46.483 filename=/dev/nvme0n1 00:35:46.483 [job1] 00:35:46.483 filename=/dev/nvme0n2 00:35:46.483 [job2] 00:35:46.483 filename=/dev/nvme0n3 00:35:46.483 [job3] 00:35:46.483 filename=/dev/nvme0n4 00:35:46.483 Could not set queue depth (nvme0n1) 00:35:46.483 Could not set queue depth (nvme0n2) 00:35:46.483 Could not set queue depth (nvme0n3) 00:35:46.483 Could not set queue depth (nvme0n4) 00:35:46.743 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:46.743 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:46.743 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:46.743 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:46.743 fio-3.35 00:35:46.743 Starting 4 threads 00:35:48.125 00:35:48.125 job0: (groupid=0, jobs=1): err= 0: pid=306123: Thu Dec 5 12:19:22 2024 00:35:48.125 read: IOPS=3605, BW=14.1MiB/s (14.8MB/s)(14.4MiB/1020msec) 00:35:48.125 slat (nsec): min=1743, max=14282k, avg=145116.52, stdev=955621.96 00:35:48.125 clat (usec): min=3471, max=71366, avg=15718.64, stdev=13520.85 00:35:48.125 lat (usec): min=3482, max=71370, avg=15863.76, stdev=13622.60 00:35:48.125 clat percentiles (usec): 00:35:48.125 | 1.00th=[ 6063], 5.00th=[ 8094], 10.00th=[ 8455], 20.00th=[ 9241], 00:35:48.125 | 30.00th=[ 9765], 40.00th=[10814], 
50.00th=[11338], 60.00th=[11600], 00:35:48.125 | 70.00th=[12780], 80.00th=[16319], 90.00th=[26346], 95.00th=[57410], 00:35:48.125 | 99.00th=[67634], 99.50th=[68682], 99.90th=[71828], 99.95th=[71828], 00:35:48.125 | 99.99th=[71828] 00:35:48.125 write: IOPS=4015, BW=15.7MiB/s (16.4MB/s)(16.0MiB/1020msec); 0 zone resets 00:35:48.125 slat (usec): min=2, max=18369, avg=105.69, stdev=603.86 00:35:48.125 clat (usec): min=1132, max=76588, avg=17461.77, stdev=11444.83 00:35:48.125 lat (usec): min=1142, max=76592, avg=17567.46, stdev=11480.29 00:35:48.125 clat percentiles (usec): 00:35:48.125 | 1.00th=[ 4490], 5.00th=[ 7242], 10.00th=[ 8455], 20.00th=[ 9634], 00:35:48.125 | 30.00th=[10159], 40.00th=[11076], 50.00th=[15926], 60.00th=[17433], 00:35:48.125 | 70.00th=[20579], 80.00th=[21103], 90.00th=[28443], 95.00th=[42206], 00:35:48.125 | 99.00th=[60556], 99.50th=[73925], 99.90th=[77071], 99.95th=[77071], 00:35:48.125 | 99.99th=[77071] 00:35:48.125 bw ( KiB/s): min=12016, max=20480, per=22.20%, avg=16248.00, stdev=5984.95, samples=2 00:35:48.125 iops : min= 3004, max= 5120, avg=4062.00, stdev=1496.24, samples=2 00:35:48.125 lat (msec) : 2=0.03%, 4=0.51%, 10=28.61%, 20=46.93%, 50=18.84% 00:35:48.125 lat (msec) : 100=5.08% 00:35:48.125 cpu : usr=3.53%, sys=4.61%, ctx=411, majf=0, minf=1 00:35:48.125 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:35:48.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.125 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:48.125 issued rwts: total=3678,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.125 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:48.125 job1: (groupid=0, jobs=1): err= 0: pid=306124: Thu Dec 5 12:19:22 2024 00:35:48.125 read: IOPS=5769, BW=22.5MiB/s (23.6MB/s)(22.6MiB/1003msec) 00:35:48.125 slat (nsec): min=1096, max=14853k, avg=80310.23, stdev=674491.16 00:35:48.125 clat (usec): min=2561, max=31788, avg=10730.13, 
stdev=3490.70 00:35:48.125 lat (usec): min=2564, max=31815, avg=10810.44, stdev=3538.57 00:35:48.125 clat percentiles (usec): 00:35:48.125 | 1.00th=[ 3228], 5.00th=[ 6194], 10.00th=[ 7701], 20.00th=[ 8356], 00:35:48.125 | 30.00th=[ 8979], 40.00th=[ 9372], 50.00th=[ 9896], 60.00th=[10421], 00:35:48.125 | 70.00th=[11863], 80.00th=[13173], 90.00th=[15664], 95.00th=[17171], 00:35:48.125 | 99.00th=[20579], 99.50th=[25297], 99.90th=[25297], 99.95th=[25822], 00:35:48.125 | 99.99th=[31851] 00:35:48.125 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:35:48.125 slat (nsec): min=1893, max=21974k, avg=77036.23, stdev=625534.56 00:35:48.125 clat (usec): min=1115, max=41049, avg=10156.88, stdev=5269.84 00:35:48.125 lat (usec): min=1126, max=41055, avg=10233.91, stdev=5314.86 00:35:48.125 clat percentiles (usec): 00:35:48.125 | 1.00th=[ 3589], 5.00th=[ 5735], 10.00th=[ 6587], 20.00th=[ 7767], 00:35:48.125 | 30.00th=[ 8225], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9372], 00:35:48.125 | 70.00th=[ 9765], 80.00th=[10290], 90.00th=[13566], 95.00th=[19792], 00:35:48.125 | 99.00th=[35914], 99.50th=[38536], 99.90th=[41157], 99.95th=[41157], 00:35:48.125 | 99.99th=[41157] 00:35:48.125 bw ( KiB/s): min=20480, max=28672, per=33.58%, avg=24576.00, stdev=5792.62, samples=2 00:35:48.125 iops : min= 5120, max= 7168, avg=6144.00, stdev=1448.15, samples=2 00:35:48.125 lat (msec) : 2=0.19%, 4=1.78%, 10=64.22%, 20=30.26%, 50=3.55% 00:35:48.125 cpu : usr=4.49%, sys=6.59%, ctx=393, majf=0, minf=1 00:35:48.125 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:35:48.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.125 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:48.125 issued rwts: total=5787,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.125 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:48.125 job2: (groupid=0, jobs=1): err= 0: pid=306125: Thu Dec 5 
12:19:22 2024 00:35:48.125 read: IOPS=3518, BW=13.7MiB/s (14.4MB/s)(13.9MiB/1014msec) 00:35:48.125 slat (nsec): min=1405, max=15940k, avg=123236.46, stdev=894007.12 00:35:48.125 clat (usec): min=4268, max=36217, avg=15304.16, stdev=5728.68 00:35:48.125 lat (usec): min=4275, max=36225, avg=15427.40, stdev=5783.16 00:35:48.125 clat percentiles (usec): 00:35:48.125 | 1.00th=[ 6915], 5.00th=[ 9241], 10.00th=[10028], 20.00th=[10945], 00:35:48.125 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12649], 60.00th=[15008], 00:35:48.125 | 70.00th=[17171], 80.00th=[19792], 90.00th=[24249], 95.00th=[26346], 00:35:48.125 | 99.00th=[34341], 99.50th=[34866], 99.90th=[36439], 99.95th=[36439], 00:35:48.125 | 99.99th=[36439] 00:35:48.125 write: IOPS=3534, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1014msec); 0 zone resets 00:35:48.125 slat (usec): min=2, max=15234, avg=152.10, stdev=803.32 00:35:48.125 clat (usec): min=1554, max=57269, avg=20654.63, stdev=12643.58 00:35:48.125 lat (usec): min=1567, max=57277, avg=20806.73, stdev=12709.50 00:35:48.125 clat percentiles (usec): 00:35:48.125 | 1.00th=[ 4424], 5.00th=[ 6915], 10.00th=[ 8029], 20.00th=[10945], 00:35:48.125 | 30.00th=[11600], 40.00th=[15795], 50.00th=[17171], 60.00th=[20841], 00:35:48.125 | 70.00th=[21365], 80.00th=[26346], 90.00th=[44303], 95.00th=[49546], 00:35:48.125 | 99.00th=[54789], 99.50th=[56361], 99.90th=[57410], 99.95th=[57410], 00:35:48.125 | 99.99th=[57410] 00:35:48.126 bw ( KiB/s): min=12048, max=16624, per=19.59%, avg=14336.00, stdev=3235.72, samples=2 00:35:48.126 iops : min= 3012, max= 4156, avg=3584.00, stdev=808.93, samples=2 00:35:48.126 lat (msec) : 2=0.03%, 4=0.25%, 10=11.69%, 20=56.49%, 50=29.24% 00:35:48.126 lat (msec) : 100=2.31% 00:35:48.126 cpu : usr=3.35%, sys=4.34%, ctx=402, majf=0, minf=1 00:35:48.126 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:35:48.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.126 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:48.126 issued rwts: total=3568,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.126 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:48.126 job3: (groupid=0, jobs=1): err= 0: pid=306126: Thu Dec 5 12:19:22 2024 00:35:48.126 read: IOPS=4544, BW=17.8MiB/s (18.6MB/s)(18.0MiB/1014msec) 00:35:48.126 slat (nsec): min=1761, max=23685k, avg=95284.86, stdev=716249.93 00:35:48.126 clat (usec): min=5080, max=50068, avg=12862.84, stdev=5406.63 00:35:48.126 lat (usec): min=5087, max=50075, avg=12958.12, stdev=5442.11 00:35:48.126 clat percentiles (usec): 00:35:48.126 | 1.00th=[ 5800], 5.00th=[ 7635], 10.00th=[ 8586], 20.00th=[ 9634], 00:35:48.126 | 30.00th=[10421], 40.00th=[10683], 50.00th=[11207], 60.00th=[11731], 00:35:48.126 | 70.00th=[12911], 80.00th=[14091], 90.00th=[21890], 95.00th=[25035], 00:35:48.126 | 99.00th=[33817], 99.50th=[33817], 99.90th=[33817], 99.95th=[35390], 00:35:48.126 | 99.99th=[50070] 00:35:48.126 write: IOPS=4770, BW=18.6MiB/s (19.5MB/s)(18.9MiB/1014msec); 0 zone resets 00:35:48.126 slat (usec): min=2, max=27930, avg=109.06, stdev=942.94 00:35:48.126 clat (usec): min=753, max=53482, avg=14241.25, stdev=7241.66 00:35:48.126 lat (usec): min=763, max=53514, avg=14350.32, stdev=7324.94 00:35:48.126 clat percentiles (usec): 00:35:48.126 | 1.00th=[ 4047], 5.00th=[ 7635], 10.00th=[ 9241], 20.00th=[10552], 00:35:48.126 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11469], 60.00th=[11731], 00:35:48.126 | 70.00th=[13304], 80.00th=[17171], 90.00th=[23462], 95.00th=[29754], 00:35:48.126 | 99.00th=[47449], 99.50th=[47973], 99.90th=[47973], 99.95th=[47973], 00:35:48.126 | 99.99th=[53740] 00:35:48.126 bw ( KiB/s): min=16384, max=21288, per=25.74%, avg=18836.00, stdev=3467.65, samples=2 00:35:48.126 iops : min= 4096, max= 5322, avg=4709.00, stdev=866.91, samples=2 00:35:48.126 lat (usec) : 1000=0.07% 00:35:48.126 lat (msec) : 4=0.24%, 10=19.58%, 20=66.28%, 50=13.80%, 100=0.03% 00:35:48.126 cpu : usr=4.15%, 
sys=4.64%, ctx=375, majf=0, minf=1 00:35:48.126 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:35:48.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.126 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:48.126 issued rwts: total=4608,4837,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.126 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:48.126 00:35:48.126 Run status group 0 (all jobs): 00:35:48.126 READ: bw=67.6MiB/s (70.8MB/s), 13.7MiB/s-22.5MiB/s (14.4MB/s-23.6MB/s), io=68.9MiB (72.3MB), run=1003-1020msec 00:35:48.126 WRITE: bw=71.5MiB/s (74.9MB/s), 13.8MiB/s-23.9MiB/s (14.5MB/s-25.1MB/s), io=72.9MiB (76.4MB), run=1003-1020msec 00:35:48.126 00:35:48.126 Disk stats (read/write): 00:35:48.126 nvme0n1: ios=3208/3584, merge=0/0, ticks=47939/57737, in_queue=105676, util=98.80% 00:35:48.126 nvme0n2: ios=4780/5120, merge=0/0, ticks=48728/52004, in_queue=100732, util=98.37% 00:35:48.126 nvme0n3: ios=3112/3103, merge=0/0, ticks=45240/59200, in_queue=104440, util=98.75% 00:35:48.126 nvme0n4: ios=3797/4096, merge=0/0, ticks=27863/31946, in_queue=59809, util=98.32% 00:35:48.126 12:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:35:48.126 [global] 00:35:48.126 thread=1 00:35:48.126 invalidate=1 00:35:48.126 rw=randwrite 00:35:48.126 time_based=1 00:35:48.126 runtime=1 00:35:48.126 ioengine=libaio 00:35:48.126 direct=1 00:35:48.126 bs=4096 00:35:48.126 iodepth=128 00:35:48.126 norandommap=0 00:35:48.126 numjobs=1 00:35:48.126 00:35:48.126 verify_dump=1 00:35:48.126 verify_backlog=512 00:35:48.126 verify_state_save=0 00:35:48.126 do_verify=1 00:35:48.126 verify=crc32c-intel 00:35:48.126 [job0] 00:35:48.126 filename=/dev/nvme0n1 00:35:48.126 [job1] 00:35:48.126 filename=/dev/nvme0n2 00:35:48.126 [job2] 00:35:48.126 
filename=/dev/nvme0n3 00:35:48.126 [job3] 00:35:48.126 filename=/dev/nvme0n4 00:35:48.126 Could not set queue depth (nvme0n1) 00:35:48.126 Could not set queue depth (nvme0n2) 00:35:48.126 Could not set queue depth (nvme0n3) 00:35:48.126 Could not set queue depth (nvme0n4) 00:35:48.401 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:48.401 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:48.401 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:48.401 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:48.401 fio-3.35 00:35:48.401 Starting 4 threads 00:35:49.781 00:35:49.781 job0: (groupid=0, jobs=1): err= 0: pid=306495: Thu Dec 5 12:19:23 2024 00:35:49.781 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:35:49.781 slat (nsec): min=1350, max=18228k, avg=113429.04, stdev=836693.46 00:35:49.781 clat (usec): min=5944, max=50278, avg=14601.06, stdev=8470.00 00:35:49.781 lat (usec): min=5953, max=50306, avg=14714.49, stdev=8536.11 00:35:49.781 clat percentiles (usec): 00:35:49.781 | 1.00th=[ 6980], 5.00th=[ 8094], 10.00th=[ 8356], 20.00th=[ 9241], 00:35:49.781 | 30.00th=[10028], 40.00th=[10421], 50.00th=[10945], 60.00th=[12256], 00:35:49.781 | 70.00th=[13173], 80.00th=[19530], 90.00th=[30540], 95.00th=[33162], 00:35:49.781 | 99.00th=[41681], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:35:49.781 | 99.99th=[50070] 00:35:49.781 write: IOPS=4799, BW=18.7MiB/s (19.7MB/s)(18.8MiB/1004msec); 0 zone resets 00:35:49.781 slat (usec): min=2, max=17573, avg=93.17, stdev=671.63 00:35:49.781 clat (usec): min=1609, max=47848, avg=12339.87, stdev=5410.26 00:35:49.781 lat (usec): min=5165, max=47879, avg=12433.04, stdev=5471.33 00:35:49.781 clat percentiles (usec): 00:35:49.781 | 1.00th=[ 7439], 5.00th=[ 
8979], 10.00th=[ 9241], 20.00th=[ 9503], 00:35:49.781 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10814], 60.00th=[11338], 00:35:49.781 | 70.00th=[11600], 80.00th=[11994], 90.00th=[19792], 95.00th=[23725], 00:35:49.781 | 99.00th=[35914], 99.50th=[35914], 99.90th=[37487], 99.95th=[41157], 00:35:49.781 | 99.99th=[47973] 00:35:49.781 bw ( KiB/s): min=12960, max=24526, per=25.12%, avg=18743.00, stdev=8178.40, samples=2 00:35:49.781 iops : min= 3240, max= 6131, avg=4685.50, stdev=2044.25, samples=2 00:35:49.781 lat (msec) : 2=0.01%, 10=36.08%, 20=49.57%, 50=14.33%, 100=0.01% 00:35:49.781 cpu : usr=3.59%, sys=5.78%, ctx=408, majf=0, minf=1 00:35:49.781 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:35:49.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.781 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:49.781 issued rwts: total=4608,4819,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.781 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:49.781 job1: (groupid=0, jobs=1): err= 0: pid=306496: Thu Dec 5 12:19:23 2024 00:35:49.781 read: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec) 00:35:49.781 slat (nsec): min=1268, max=43329k, avg=94910.30, stdev=805329.77 00:35:49.781 clat (usec): min=3832, max=54792, avg=12040.74, stdev=6430.67 00:35:49.781 lat (usec): min=3838, max=54795, avg=12135.65, stdev=6464.29 00:35:49.781 clat percentiles (usec): 00:35:49.781 | 1.00th=[ 6128], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[ 9896], 00:35:49.781 | 30.00th=[10290], 40.00th=[10683], 50.00th=[11076], 60.00th=[11338], 00:35:49.781 | 70.00th=[11600], 80.00th=[12256], 90.00th=[13304], 95.00th=[14615], 00:35:49.781 | 99.00th=[53216], 99.50th=[53740], 99.90th=[54789], 99.95th=[54789], 00:35:49.781 | 99.99th=[54789] 00:35:49.781 write: IOPS=5643, BW=22.0MiB/s (23.1MB/s)(22.2MiB/1006msec); 0 zone resets 00:35:49.781 slat (usec): min=2, max=9408, avg=77.26, stdev=451.39 00:35:49.781 
clat (usec): min=3543, max=17668, avg=10436.77, stdev=1898.43 00:35:49.781 lat (usec): min=3553, max=17678, avg=10514.03, stdev=1913.44 00:35:49.781 clat percentiles (usec): 00:35:49.781 | 1.00th=[ 5211], 5.00th=[ 6980], 10.00th=[ 7963], 20.00th=[ 9110], 00:35:49.781 | 30.00th=[ 9634], 40.00th=[10028], 50.00th=[10552], 60.00th=[11076], 00:35:49.781 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12518], 95.00th=[13435], 00:35:49.781 | 99.00th=[14877], 99.50th=[15270], 99.90th=[16319], 99.95th=[16909], 00:35:49.781 | 99.99th=[17695] 00:35:49.781 bw ( KiB/s): min=20439, max=24576, per=30.17%, avg=22507.50, stdev=2925.30, samples=2 00:35:49.781 iops : min= 5109, max= 6144, avg=5626.50, stdev=731.86, samples=2 00:35:49.781 lat (msec) : 4=0.31%, 10=30.90%, 20=67.65%, 50=0.02%, 100=1.12% 00:35:49.781 cpu : usr=3.08%, sys=5.57%, ctx=589, majf=0, minf=1 00:35:49.781 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:35:49.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.781 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:49.781 issued rwts: total=5632,5677,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.781 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:49.781 job2: (groupid=0, jobs=1): err= 0: pid=306497: Thu Dec 5 12:19:23 2024 00:35:49.781 read: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec) 00:35:49.781 slat (nsec): min=1574, max=13617k, avg=109888.07, stdev=879320.51 00:35:49.781 clat (usec): min=3128, max=34197, avg=14291.39, stdev=3890.51 00:35:49.781 lat (usec): min=3136, max=34203, avg=14401.28, stdev=3960.89 00:35:49.781 clat percentiles (usec): 00:35:49.781 | 1.00th=[ 6063], 5.00th=[10159], 10.00th=[10683], 20.00th=[11338], 00:35:49.781 | 30.00th=[12125], 40.00th=[12518], 50.00th=[13042], 60.00th=[14353], 00:35:49.781 | 70.00th=[15533], 80.00th=[16712], 90.00th=[19268], 95.00th=[21627], 00:35:49.781 | 99.00th=[28705], 99.50th=[31851], 99.90th=[34341], 
99.95th=[34341], 00:35:49.781 | 99.99th=[34341] 00:35:49.781 write: IOPS=4549, BW=17.8MiB/s (18.6MB/s)(17.9MiB/1009msec); 0 zone resets 00:35:49.781 slat (usec): min=2, max=13479, avg=113.88, stdev=772.41 00:35:49.781 clat (usec): min=2932, max=44346, avg=15141.63, stdev=7628.00 00:35:49.781 lat (usec): min=2938, max=44360, avg=15255.52, stdev=7687.44 00:35:49.781 clat percentiles (usec): 00:35:49.781 | 1.00th=[ 4228], 5.00th=[ 8029], 10.00th=[ 8979], 20.00th=[10028], 00:35:49.781 | 30.00th=[11994], 40.00th=[12387], 50.00th=[13042], 60.00th=[14484], 00:35:49.781 | 70.00th=[16057], 80.00th=[17433], 90.00th=[22152], 95.00th=[35914], 00:35:49.781 | 99.00th=[43254], 99.50th=[43779], 99.90th=[44303], 99.95th=[44303], 00:35:49.781 | 99.99th=[44303] 00:35:49.781 bw ( KiB/s): min=15224, max=20439, per=23.90%, avg=17831.50, stdev=3687.56, samples=2 00:35:49.781 iops : min= 3806, max= 5109, avg=4457.50, stdev=921.36, samples=2 00:35:49.781 lat (msec) : 4=0.51%, 10=10.21%, 20=80.50%, 50=8.78% 00:35:49.781 cpu : usr=3.97%, sys=5.46%, ctx=305, majf=0, minf=1 00:35:49.781 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:35:49.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.781 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:49.781 issued rwts: total=4096,4590,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.781 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:49.781 job3: (groupid=0, jobs=1): err= 0: pid=306498: Thu Dec 5 12:19:23 2024 00:35:49.781 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:35:49.781 slat (usec): min=2, max=26000, avg=137.03, stdev=1048.40 00:35:49.781 clat (usec): min=7461, max=63631, avg=18307.08, stdev=10612.88 00:35:49.781 lat (usec): min=7471, max=63657, avg=18444.11, stdev=10692.33 00:35:49.781 clat percentiles (usec): 00:35:49.781 | 1.00th=[ 7635], 5.00th=[ 9896], 10.00th=[10421], 20.00th=[11731], 00:35:49.781 | 30.00th=[11994], 
40.00th=[12518], 50.00th=[13042], 60.00th=[15139], 00:35:49.781 | 70.00th=[20579], 80.00th=[26608], 90.00th=[30540], 95.00th=[36439], 00:35:49.781 | 99.00th=[58983], 99.50th=[61080], 99.90th=[62129], 99.95th=[62129], 00:35:49.781 | 99.99th=[63701] 00:35:49.781 write: IOPS=3718, BW=14.5MiB/s (15.2MB/s)(14.6MiB/1004msec); 0 zone resets 00:35:49.781 slat (usec): min=3, max=26339, avg=129.16, stdev=978.61 00:35:49.781 clat (usec): min=1503, max=69271, avg=16377.57, stdev=8689.25 00:35:49.781 lat (usec): min=6066, max=69304, avg=16506.73, stdev=8785.87 00:35:49.781 clat percentiles (usec): 00:35:49.781 | 1.00th=[ 6521], 5.00th=[10159], 10.00th=[10814], 20.00th=[11863], 00:35:49.781 | 30.00th=[12387], 40.00th=[12911], 50.00th=[13042], 60.00th=[13173], 00:35:49.781 | 70.00th=[14746], 80.00th=[18220], 90.00th=[28967], 95.00th=[36963], 00:35:49.781 | 99.00th=[49021], 99.50th=[49021], 99.90th=[49021], 99.95th=[63177], 00:35:49.781 | 99.99th=[69731] 00:35:49.781 bw ( KiB/s): min=12464, max=16351, per=19.31%, avg=14407.50, stdev=2748.52, samples=2 00:35:49.781 iops : min= 3116, max= 4087, avg=3601.50, stdev=686.60, samples=2 00:35:49.781 lat (msec) : 2=0.01%, 10=4.28%, 20=72.82%, 50=21.02%, 100=1.87% 00:35:49.781 cpu : usr=4.39%, sys=4.59%, ctx=236, majf=0, minf=1 00:35:49.781 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:35:49.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.781 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:49.781 issued rwts: total=3584,3733,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.781 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:49.781 00:35:49.781 Run status group 0 (all jobs): 00:35:49.781 READ: bw=69.4MiB/s (72.7MB/s), 13.9MiB/s-21.9MiB/s (14.6MB/s-22.9MB/s), io=70.0MiB (73.4MB), run=1004-1009msec 00:35:49.782 WRITE: bw=72.9MiB/s (76.4MB/s), 14.5MiB/s-22.0MiB/s (15.2MB/s-23.1MB/s), io=73.5MiB (77.1MB), run=1004-1009msec 00:35:49.782 
00:35:49.782 Disk stats (read/write): 00:35:49.782 nvme0n1: ios=4154/4608, merge=0/0, ticks=26151/25711, in_queue=51862, util=99.70% 00:35:49.782 nvme0n2: ios=4626/4756, merge=0/0, ticks=24707/20761, in_queue=45468, util=90.85% 00:35:49.782 nvme0n3: ios=3707/4096, merge=0/0, ticks=51892/52081, in_queue=103973, util=88.96% 00:35:49.782 nvme0n4: ios=2761/3072, merge=0/0, ticks=26273/25996, in_queue=52269, util=98.11% 00:35:49.782 12:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:35:49.782 12:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=306728 00:35:49.782 12:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:35:49.782 12:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:35:49.782 [global] 00:35:49.782 thread=1 00:35:49.782 invalidate=1 00:35:49.782 rw=read 00:35:49.782 time_based=1 00:35:49.782 runtime=10 00:35:49.782 ioengine=libaio 00:35:49.782 direct=1 00:35:49.782 bs=4096 00:35:49.782 iodepth=1 00:35:49.782 norandommap=1 00:35:49.782 numjobs=1 00:35:49.782 00:35:49.782 [job0] 00:35:49.782 filename=/dev/nvme0n1 00:35:49.782 [job1] 00:35:49.782 filename=/dev/nvme0n2 00:35:49.782 [job2] 00:35:49.782 filename=/dev/nvme0n3 00:35:49.782 [job3] 00:35:49.782 filename=/dev/nvme0n4 00:35:49.782 Could not set queue depth (nvme0n1) 00:35:49.782 Could not set queue depth (nvme0n2) 00:35:49.782 Could not set queue depth (nvme0n3) 00:35:49.782 Could not set queue depth (nvme0n4) 00:35:50.041 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:50.041 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:50.041 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=1 00:35:50.041 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:50.041 fio-3.35 00:35:50.041 Starting 4 threads 00:35:52.572 12:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:35:52.830 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=43343872, buflen=4096 00:35:52.830 fio: pid=306873, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:52.830 12:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:35:53.089 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=49074176, buflen=4096 00:35:53.090 fio: pid=306872, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:53.090 12:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:53.090 12:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:35:53.090 12:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:53.090 12:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:35:53.090 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=331776, buflen=4096 00:35:53.090 fio: pid=306870, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:53.351 fio: io_u 
error on file /dev/nvme0n2: Operation not supported: read offset=17485824, buflen=4096 00:35:53.351 fio: pid=306871, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:53.351 12:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:53.351 12:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:35:53.351 00:35:53.351 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=306870: Thu Dec 5 12:19:27 2024 00:35:53.351 read: IOPS=26, BW=103KiB/s (106kB/s)(324KiB/3143msec) 00:35:53.351 slat (nsec): min=8026, max=76821, avg=22759.90, stdev=7090.67 00:35:53.351 clat (usec): min=242, max=41980, avg=38513.58, stdev=9850.29 00:35:53.351 lat (usec): min=265, max=42003, avg=38536.34, stdev=9849.70 00:35:53.351 clat percentiles (usec): 00:35:53.351 | 1.00th=[ 243], 5.00th=[ 449], 10.00th=[40633], 20.00th=[40633], 00:35:53.351 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:53.351 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:35:53.351 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:53.351 | 99.99th=[42206] 00:35:53.351 bw ( KiB/s): min= 93, max= 112, per=0.32%, avg=103.50, stdev= 7.89, samples=6 00:35:53.351 iops : min= 23, max= 28, avg=25.83, stdev= 2.04, samples=6 00:35:53.351 lat (usec) : 250=1.22%, 500=4.88% 00:35:53.351 lat (msec) : 50=92.68% 00:35:53.351 cpu : usr=0.00%, sys=0.10%, ctx=83, majf=0, minf=1 00:35:53.351 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:53.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.351 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.351 issued 
rwts: total=82,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:53.351 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:53.351 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=306871: Thu Dec 5 12:19:27 2024 00:35:53.351 read: IOPS=1282, BW=5128KiB/s (5251kB/s)(16.7MiB/3330msec) 00:35:53.351 slat (usec): min=6, max=11612, avg=16.28, stdev=269.22 00:35:53.351 clat (usec): min=170, max=41122, avg=755.54, stdev=4640.88 00:35:53.351 lat (usec): min=184, max=41145, avg=771.82, stdev=4649.53 00:35:53.351 clat percentiles (usec): 00:35:53.351 | 1.00th=[ 184], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 194], 00:35:53.351 | 30.00th=[ 198], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 217], 00:35:53.351 | 70.00th=[ 223], 80.00th=[ 243], 90.00th=[ 255], 95.00th=[ 277], 00:35:53.351 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:35:53.351 | 99.99th=[41157] 00:35:53.351 bw ( KiB/s): min= 96, max=12064, per=12.87%, avg=4162.33, stdev=6071.61, samples=6 00:35:53.351 iops : min= 24, max= 3016, avg=1040.50, stdev=1517.77, samples=6 00:35:53.351 lat (usec) : 250=87.03%, 500=11.52%, 750=0.02%, 1000=0.02% 00:35:53.351 lat (msec) : 2=0.05%, 20=0.02%, 50=1.31% 00:35:53.351 cpu : usr=0.72%, sys=2.04%, ctx=4274, majf=0, minf=1 00:35:53.351 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:53.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.351 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.351 issued rwts: total=4270,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:53.351 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:53.351 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=306872: Thu Dec 5 12:19:27 2024 00:35:53.351 read: IOPS=4118, BW=16.1MiB/s (16.9MB/s)(46.8MiB/2909msec) 00:35:53.351 slat (nsec): min=6951, max=47545, avg=8177.30, 
stdev=1557.78 00:35:53.351 clat (usec): min=186, max=41063, avg=230.99, stdev=579.72 00:35:53.351 lat (usec): min=193, max=41074, avg=239.16, stdev=579.80 00:35:53.351 clat percentiles (usec): 00:35:53.351 | 1.00th=[ 202], 5.00th=[ 208], 10.00th=[ 210], 20.00th=[ 212], 00:35:53.351 | 30.00th=[ 215], 40.00th=[ 217], 50.00th=[ 219], 60.00th=[ 221], 00:35:53.352 | 70.00th=[ 225], 80.00th=[ 229], 90.00th=[ 237], 95.00th=[ 249], 00:35:53.352 | 99.00th=[ 285], 99.50th=[ 302], 99.90th=[ 453], 99.95th=[ 482], 00:35:53.352 | 99.99th=[40633] 00:35:53.352 bw ( KiB/s): min=12464, max=17512, per=51.00%, avg=16488.00, stdev=2249.56, samples=5 00:35:53.352 iops : min= 3116, max= 4378, avg=4122.00, stdev=562.39, samples=5 00:35:53.352 lat (usec) : 250=95.42%, 500=4.53%, 750=0.02% 00:35:53.352 lat (msec) : 50=0.03% 00:35:53.352 cpu : usr=2.17%, sys=6.57%, ctx=11982, majf=0, minf=2 00:35:53.352 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:53.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.352 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.352 issued rwts: total=11982,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:53.352 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:53.352 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=306873: Thu Dec 5 12:19:27 2024 00:35:53.352 read: IOPS=3918, BW=15.3MiB/s (16.0MB/s)(41.3MiB/2701msec) 00:35:53.352 slat (nsec): min=7039, max=45345, avg=8547.70, stdev=1591.80 00:35:53.352 clat (usec): min=191, max=502, avg=242.71, stdev=13.02 00:35:53.352 lat (usec): min=199, max=510, avg=251.25, stdev=13.11 00:35:53.352 clat percentiles (usec): 00:35:53.352 | 1.00th=[ 210], 5.00th=[ 223], 10.00th=[ 229], 20.00th=[ 235], 00:35:53.352 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 245], 00:35:53.352 | 70.00th=[ 249], 80.00th=[ 251], 90.00th=[ 258], 95.00th=[ 265], 00:35:53.352 
| 99.00th=[ 277], 99.50th=[ 285], 99.90th=[ 306], 99.95th=[ 392], 00:35:53.352 | 99.99th=[ 449] 00:35:53.352 bw ( KiB/s): min=15672, max=16072, per=48.95%, avg=15825.60, stdev=177.39, samples=5 00:35:53.352 iops : min= 3918, max= 4018, avg=3956.40, stdev=44.35, samples=5 00:35:53.352 lat (usec) : 250=76.17%, 500=23.81%, 750=0.01% 00:35:53.352 cpu : usr=2.19%, sys=6.41%, ctx=10583, majf=0, minf=2 00:35:53.352 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:53.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.352 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.352 issued rwts: total=10583,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:53.352 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:53.352 00:35:53.352 Run status group 0 (all jobs): 00:35:53.352 READ: bw=31.6MiB/s (33.1MB/s), 103KiB/s-16.1MiB/s (106kB/s-16.9MB/s), io=105MiB (110MB), run=2701-3330msec 00:35:53.352 00:35:53.352 Disk stats (read/write): 00:35:53.352 nvme0n1: ios=80/0, merge=0/0, ticks=3080/0, in_queue=3080, util=95.72% 00:35:53.352 nvme0n2: ios=3423/0, merge=0/0, ticks=3026/0, in_queue=3026, util=95.67% 00:35:53.352 nvme0n3: ios=11840/0, merge=0/0, ticks=2591/0, in_queue=2591, util=96.55% 00:35:53.352 nvme0n4: ios=10288/0, merge=0/0, ticks=2360/0, in_queue=2360, util=96.48% 00:35:53.610 12:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:53.610 12:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:35:53.869 12:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:53.869 12:19:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:35:54.129 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:54.129 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:35:54.129 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:54.129 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:35:54.387 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:35:54.387 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 306728 00:35:54.388 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:35:54.388 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:54.647 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:54.647 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:54.648 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:35:54.648 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:35:54.648 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:54.648 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:35:54.648 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:54.648 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:35:54.648 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:35:54.648 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:35:54.648 nvmf hotplug test: fio failed as expected 00:35:54.648 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:54.648 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:35:54.648 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:35:54.648 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:35:54.907 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:35:54.907 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:35:54.907 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:35:54.907 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@99 -- # sync 00:35:54.907 12:19:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:35:54.907 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@102 -- # set +e 00:35:54.907 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:35:54.907 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:35:54.907 rmmod nvme_tcp 00:35:54.907 rmmod nvme_fabrics 00:35:54.907 rmmod nvme_keyring 00:35:54.907 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:35:54.907 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@106 -- # set -e 00:35:54.907 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@107 -- # return 0 00:35:54.907 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # '[' -n 304102 ']' 00:35:54.907 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@337 -- # killprocess 304102 00:35:54.907 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 304102 ']' 00:35:54.907 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 304102 00:35:54.907 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:35:54.907 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:54.907 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 304102 00:35:54.907 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:35:54.907 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:54.907 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 304102' 00:35:54.907 killing process with pid 304102 00:35:54.907 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 304102 00:35:54.907 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 304102 00:35:55.166 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:35:55.166 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@342 -- # nvmf_fini 00:35:55.166 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@264 -- # local dev 00:35:55.166 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@267 -- # remove_target_ns 00:35:55.166 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:35:55.166 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:35:55.166 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:35:57.073 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@268 -- # delete_main_bridge 00:35:57.073 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:35:57.073 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@130 -- # return 0 00:35:57.073 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@270 -- 
# for dev in "${dev_map[@]}" 00:35:57.073 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:35:57.073 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:35:57.073 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:35:57.073 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:35:57.073 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:35:57.073 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:35:57.073 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:35:57.073 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:35:57.073 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:35:57.073 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:35:57.073 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:35:57.073 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:35:57.073 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:35:57.073 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:35:57.073 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:35:57.073 12:19:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:35:57.073 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@41 -- # _dev=0 00:35:57.073 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@41 -- # dev_map=() 00:35:57.073 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@284 -- # iptr 00:35:57.073 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@542 -- # iptables-save 00:35:57.073 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:35:57.073 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@542 -- # iptables-restore 00:35:57.073 00:35:57.073 real 0m25.868s 00:35:57.073 user 1m31.098s 00:35:57.073 sys 0m11.429s 00:35:57.073 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:57.073 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:57.073 ************************************ 00:35:57.073 END TEST nvmf_fio_target 00:35:57.073 ************************************ 00:35:57.073 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:35:57.073 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:57.073 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:57.073 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:57.333 ************************************ 00:35:57.333 START TEST nvmf_bdevio 00:35:57.333 
************************************ 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:35:57.333 * Looking for test storage... 00:35:57.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 
00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:57.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:57.333 --rc genhtml_branch_coverage=1 00:35:57.333 --rc genhtml_function_coverage=1 00:35:57.333 --rc genhtml_legend=1 00:35:57.333 --rc geninfo_all_blocks=1 00:35:57.333 --rc geninfo_unexecuted_blocks=1 00:35:57.333 00:35:57.333 ' 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:57.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:57.333 --rc genhtml_branch_coverage=1 00:35:57.333 --rc genhtml_function_coverage=1 00:35:57.333 --rc genhtml_legend=1 00:35:57.333 --rc geninfo_all_blocks=1 00:35:57.333 --rc geninfo_unexecuted_blocks=1 00:35:57.333 00:35:57.333 ' 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:57.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:57.333 --rc genhtml_branch_coverage=1 00:35:57.333 --rc genhtml_function_coverage=1 00:35:57.333 --rc genhtml_legend=1 00:35:57.333 --rc geninfo_all_blocks=1 00:35:57.333 --rc geninfo_unexecuted_blocks=1 00:35:57.333 00:35:57.333 ' 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:57.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:35:57.333 --rc genhtml_branch_coverage=1 00:35:57.333 --rc genhtml_function_coverage=1 00:35:57.333 --rc genhtml_legend=1 00:35:57.333 --rc geninfo_all_blocks=1 00:35:57.333 --rc geninfo_unexecuted_blocks=1 00:35:57.333 00:35:57.333 ' 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:57.333 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:57.334 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:35:57.334 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:57.334 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:57.334 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:57.334 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:57.334 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:57.334 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:57.334 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:35:57.334 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:57.334 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:35:57.334 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:35:57.334 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:35:57.334 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:35:57.334 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@50 -- # : 0 00:35:57.334 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:35:57.334 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:35:57.334 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:35:57.334 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:57.334 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:57.334 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # '[' 
1 -eq 1 ']' 00:35:57.334 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:35:57.334 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:35:57.334 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:35:57.334 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@54 -- # have_pci_nics=0 00:35:57.334 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:57.334 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:57.334 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:35:57.334 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:35:57.334 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:57.334 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@296 -- # prepare_net_devs 00:35:57.334 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # local -g is_hw=no 00:35:57.334 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@260 -- # remove_target_ns 00:35:57.334 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:35:57.334 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:35:57.334 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:35:57.334 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # [[ 
phy != virt ]] 00:35:57.334 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:35:57.334 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # xtrace_disable 00:35:57.334 12:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@131 -- # pci_devs=() 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@131 -- # local -a pci_devs 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@132 -- # pci_net_devs=() 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@133 -- # pci_drivers=() 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@133 -- # local -A pci_drivers 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@135 -- # net_devs=() 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@135 -- # local -ga net_devs 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@136 -- # e810=() 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@136 -- # local -ga e810 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@137 -- # x722=() 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@137 -- # local -ga x722 00:36:03.906 12:19:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@138 -- # mlx=() 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@138 -- # local -ga mlx 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:03.906 12:19:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:36:03.906 Found 0000:86:00.0 (0x8086 - 0x159b) 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:36:03.906 Found 0000:86:00.1 (0x8086 - 0x159b) 
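The device-discovery trace above (common.sh@141-160) appends known NIC device IDs from a `pci_bus_cache` associative array keyed by `"vendor:device"`, then classifies the two addresses found (0000:86:00.0/1, 0x8086:0x159b) as E810 parts. A minimal sketch of that lookup pattern, with illustrative cache contents (the real cache is populated elsewhere in common.sh):

```shell
#!/usr/bin/env bash
# Sketch of the E810 classification traced in the log. The cache
# contents below are illustrative, not read from a real system.
declare -A pci_bus_cache=(
  ["0x8086:0x159b"]="0000:86:00.0 0000:86:00.1"
)

intel=0x8086
e810=()
# Unset keys expand to nothing, so missing device IDs add no entries;
# a populated key word-splits into one array element per PCI address.
e810+=(${pci_bus_cache["$intel:0x1592"]})
e810+=(${pci_bus_cache["$intel:0x159b"]})

echo "${#e810[@]}"   # number of E810 NICs found
```

The unquoted expansion is deliberate: it lets a single cache value carry several PCI addresses, matching the two "Found 0000:86:00.x" lines in the log.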
00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@234 -- # [[ up == up ]] 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@244 -- # 
echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:36:03.906 Found net devices under 0000:86:00.0: cvl_0_0 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@234 -- # [[ up == up ]] 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:36:03.906 Found net devices under 0000:86:00.1: cvl_0_1 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:36:03.906 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # is_hw=yes 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:36:03.907 12:19:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@257 -- # create_target_ns 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:36:03.907 12:19:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@27 -- # local -gA dev_map 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@28 -- # local -g _dev 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@44 -- # ips=() 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:36:03.907 12:19:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772161 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:36:03.907 12:19:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:36:03.907 10.0.0.1 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772162 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:36:03.907 10.0.0.2 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@217 -- # ip netns exec 
nvmf_ns_spdk ip link set cvl_0_1 up 00:36:03.907 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@38 -- # ping_ips 1 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@187 
-- # get_initiator_ip_address 0 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=initiator0 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 
in_ns=NVMF_TARGET_NS_CMD count=1 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:36:03.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:03.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.440 ms 00:36:03.908 00:36:03.908 --- 10.0.0.1 ping statistics --- 00:36:03.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:03.908 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev target0 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=target0 
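The `val_to_ip` calls traced above (setup.sh@11-13) turn the decimal pool values 167772161 and 167772162 into 10.0.0.1 and 10.0.0.2 before they are assigned to cvl_0_0 and cvl_0_1. A minimal sketch of that conversion, assuming the helper simply splits the 32-bit value into four octets, high byte first:

```shell
#!/usr/bin/env bash
# Sketch of setup.sh's val_to_ip: shift/mask a 32-bit integer into
# octets and print them as a dotted quad.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >>  8) & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 0x0a000001 -> 10.0.0.1
val_to_ip 167772162   # 0x0a000002 -> 10.0.0.2
```

167772161 is 0x0A000001, which is why the pool base `ip_pool=0x0a000001` seen earlier in the trace maps directly onto the 10.0.0.0/24 addresses used for the initiator/target pair.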
00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:36:03.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:03.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:36:03.908 00:36:03.908 --- 10.0.0.2 ping statistics --- 00:36:03.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:03.908 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # (( pair++ )) 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@270 -- # return 0 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # 
get_net_dev initiator0 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=initiator0 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:36:03.908 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n 
'' ]] 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=initiator1 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # return 1 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev= 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@169 -- # return 0 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev target0 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@107 -- # local 
dev=target0 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:03.909 12:19:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev target1 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=target1 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # return 1 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev= 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@169 -- # return 0 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # nvmfpid=311138 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@329 -- # waitforlisten 311138 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 311138 ']' 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:03.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:03.909 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:03.909 [2024-12-05 12:19:37.522189] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:03.909 [2024-12-05 12:19:37.523158] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:36:03.909 [2024-12-05 12:19:37.523198] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:03.909 [2024-12-05 12:19:37.601413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:03.909 [2024-12-05 12:19:37.641430] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:03.910 [2024-12-05 12:19:37.641467] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:03.910 [2024-12-05 12:19:37.641475] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:03.910 [2024-12-05 12:19:37.641481] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:03.910 [2024-12-05 12:19:37.641486] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:03.910 [2024-12-05 12:19:37.642962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:36:03.910 [2024-12-05 12:19:37.643070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:36:03.910 [2024-12-05 12:19:37.643152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:03.910 [2024-12-05 12:19:37.643153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:36:03.910 [2024-12-05 12:19:37.710968] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:03.910 [2024-12-05 12:19:37.711482] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:03.910 [2024-12-05 12:19:37.711909] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:36:03.910 [2024-12-05 12:19:37.712088] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:03.910 [2024-12-05 12:19:37.712141] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:03.910 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:03.910 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:36:03.910 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:36:03.910 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:03.910 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:03.910 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:03.910 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:03.910 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.910 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:03.910 [2024-12-05 12:19:37.791948] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:03.910 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.910 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:03.910 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.910 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:03.910 Malloc0 00:36:03.910 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.910 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:03.910 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.910 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:03.910 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.910 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:03.910 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.910 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:03.910 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.910 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:03.910 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.910 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:03.910 [2024-12-05 12:19:37.872095] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:36:03.910 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.910 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:36:03.910 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:36:03.910 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # config=() 00:36:03.910 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # local subsystem config 00:36:03.910 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:36:03.910 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:36:03.910 { 00:36:03.910 "params": { 00:36:03.910 "name": "Nvme$subsystem", 00:36:03.910 "trtype": "$TEST_TRANSPORT", 00:36:03.910 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:03.910 "adrfam": "ipv4", 00:36:03.910 "trsvcid": "$NVMF_PORT", 00:36:03.910 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:03.910 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:03.910 "hdgst": ${hdgst:-false}, 00:36:03.910 "ddgst": ${ddgst:-false} 00:36:03.910 }, 00:36:03.910 "method": "bdev_nvme_attach_controller" 00:36:03.910 } 00:36:03.910 EOF 00:36:03.910 )") 00:36:03.910 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@394 -- # cat 00:36:03.910 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@396 -- # jq . 
00:36:03.910 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@397 -- # IFS=, 00:36:03.910 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:36:03.910 "params": { 00:36:03.910 "name": "Nvme1", 00:36:03.910 "trtype": "tcp", 00:36:03.910 "traddr": "10.0.0.2", 00:36:03.910 "adrfam": "ipv4", 00:36:03.910 "trsvcid": "4420", 00:36:03.910 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:03.910 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:03.910 "hdgst": false, 00:36:03.910 "ddgst": false 00:36:03.910 }, 00:36:03.910 "method": "bdev_nvme_attach_controller" 00:36:03.910 }' 00:36:03.910 [2024-12-05 12:19:37.925028] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:36:03.910 [2024-12-05 12:19:37.925078] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid311229 ] 00:36:03.910 [2024-12-05 12:19:38.003784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:03.910 [2024-12-05 12:19:38.048247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:03.910 [2024-12-05 12:19:38.048304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:03.910 [2024-12-05 12:19:38.048305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:04.168 I/O targets: 00:36:04.168 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:36:04.168 00:36:04.168 00:36:04.168 CUnit - A unit testing framework for C - Version 2.1-3 00:36:04.168 http://cunit.sourceforge.net/ 00:36:04.168 00:36:04.168 00:36:04.168 Suite: bdevio tests on: Nvme1n1 00:36:04.426 Test: blockdev write read block ...passed 00:36:04.426 Test: blockdev write zeroes read block ...passed 00:36:04.426 Test: blockdev write zeroes read no split ...passed 00:36:04.426 Test: blockdev 
write zeroes read split ...passed 00:36:04.426 Test: blockdev write zeroes read split partial ...passed 00:36:04.426 Test: blockdev reset ...[2024-12-05 12:19:38.510517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:36:04.426 [2024-12-05 12:19:38.510577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ca350 (9): Bad file descriptor 00:36:04.426 [2024-12-05 12:19:38.513824] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:36:04.426 passed 00:36:04.426 Test: blockdev write read 8 blocks ...passed 00:36:04.426 Test: blockdev write read size > 128k ...passed 00:36:04.426 Test: blockdev write read invalid size ...passed 00:36:04.684 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:36:04.684 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:36:04.684 Test: blockdev write read max offset ...passed 00:36:04.684 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:36:04.684 Test: blockdev writev readv 8 blocks ...passed 00:36:04.684 Test: blockdev writev readv 30 x 1block ...passed 00:36:04.684 Test: blockdev writev readv block ...passed 00:36:04.684 Test: blockdev writev readv size > 128k ...passed 00:36:04.684 Test: blockdev writev readv size > 128k in two iovs ...passed 00:36:04.684 Test: blockdev comparev and writev ...[2024-12-05 12:19:38.805556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:04.684 [2024-12-05 12:19:38.805588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:04.684 [2024-12-05 12:19:38.805602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:04.684 
[2024-12-05 12:19:38.805610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:04.684 [2024-12-05 12:19:38.805898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:04.684 [2024-12-05 12:19:38.805909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:04.684 [2024-12-05 12:19:38.805920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:04.684 [2024-12-05 12:19:38.805927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:36:04.684 [2024-12-05 12:19:38.806199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:04.684 [2024-12-05 12:19:38.806209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:04.684 [2024-12-05 12:19:38.806222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:04.684 [2024-12-05 12:19:38.806230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:36:04.684 [2024-12-05 12:19:38.806505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:04.684 [2024-12-05 12:19:38.806516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:36:04.684 [2024-12-05 12:19:38.806527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:04.684 [2024-12-05 12:19:38.806533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:36:04.684 passed 00:36:04.942 Test: blockdev nvme passthru rw ...passed 00:36:04.942 Test: blockdev nvme passthru vendor specific ...[2024-12-05 12:19:38.888768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:04.942 [2024-12-05 12:19:38.888783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:36:04.942 [2024-12-05 12:19:38.888894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:04.942 [2024-12-05 12:19:38.888904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:36:04.942 [2024-12-05 12:19:38.889022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:04.942 [2024-12-05 12:19:38.889031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:36:04.942 [2024-12-05 12:19:38.889147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:04.942 [2024-12-05 12:19:38.889156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:04.942 passed 00:36:04.942 Test: blockdev nvme admin passthru ...passed 00:36:04.942 Test: blockdev copy ...passed 00:36:04.942 00:36:04.942 Run Summary: Type Total Ran Passed Failed Inactive 00:36:04.942 suites 1 1 n/a 0 0 00:36:04.942 tests 23 23 23 0 0 00:36:04.942 asserts 152 152 152 0 n/a 00:36:04.942 00:36:04.942 Elapsed time = 1.166 
seconds 00:36:04.942 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:04.942 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.942 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:04.942 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.942 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:36:04.942 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:36:04.942 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@335 -- # nvmfcleanup 00:36:04.942 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@99 -- # sync 00:36:04.942 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:36:04.942 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@102 -- # set +e 00:36:04.942 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@103 -- # for i in {1..20} 00:36:04.942 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:36:04.942 rmmod nvme_tcp 00:36:04.942 rmmod nvme_fabrics 00:36:04.942 rmmod nvme_keyring 00:36:04.942 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:36:05.201 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@106 -- # set -e 00:36:05.201 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@107 -- # return 0 00:36:05.201 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@336 -- # '[' -n 311138 ']' 00:36:05.201 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@337 -- # killprocess 311138 00:36:05.201 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 311138 ']' 00:36:05.201 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 311138 00:36:05.201 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:36:05.201 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:05.201 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 311138 00:36:05.201 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:36:05.201 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:36:05.201 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 311138' 00:36:05.201 killing process with pid 311138 00:36:05.201 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 311138 00:36:05.201 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 311138 00:36:05.201 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:36:05.201 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@342 -- # nvmf_fini 00:36:05.201 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@264 -- # local dev 00:36:05.201 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@267 -- # 
remove_target_ns 00:36:05.201 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:36:05.201 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:36:05.201 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:36:07.737 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@268 -- # delete_main_bridge 00:36:07.737 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:36:07.737 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@130 -- # return 0 00:36:07.737 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:36:07.737 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:36:07.737 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:36:07.737 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:36:07.737 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:36:07.737 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:36:07.737 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:36:07.737 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:36:07.738 12:19:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@41 -- # _dev=0 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@41 -- # dev_map=() 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@284 -- # iptr 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@542 -- # iptables-save 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@542 -- # iptables-restore 00:36:07.738 00:36:07.738 real 0m10.190s 00:36:07.738 user 0m9.749s 00:36:07.738 sys 0m5.231s 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
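The killprocess helper traced at the start of this teardown follows a deliberate pattern: confirm the PID is still alive with `kill -0`, read the command name with `ps` and refuse to signal a `sudo` wrapper, then kill and `wait` so the child is reaped. A minimal sketch of that pattern, assuming GNU `ps` on Linux; the background `sleep` target is our own demo, not part of the test suite:

```shell
#!/usr/bin/env bash
# Minimal sketch of the killprocess pattern traced during teardown above:
# check the PID is still alive (kill -0), read its command name with ps and
# refuse to kill a "sudo" wrapper, then kill and wait so the child is reaped.
killprocess() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 1      # already gone
    name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_3 in the trace
    [ "$name" = "sudo" ] && return 1            # never SIGTERM a sudo wrapper
    kill "$pid"
    wait "$pid" 2>/dev/null                     # reap; ignores non-children
    return 0
}

sleep 60 &
demo_pid=$!
killprocess "$demo_pid" && echo "killed $demo_pid"
```

The `wait` matters: without it the reactor would linger as a zombie and its exit status would be lost to the harness.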
00:36:07.738 ************************************ 00:36:07.738 END TEST nvmf_bdevio 00:36:07.738 ************************************ 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # [[ tcp == \t\c\p ]] 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # [[ phy != phy ]] 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:07.738 ************************************ 00:36:07.738 START TEST nvmf_zcopy 00:36:07.738 ************************************ 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:36:07.738 * Looking for test storage... 
00:36:07.738 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:36:07.738 12:19:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:07.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:07.738 --rc genhtml_branch_coverage=1 00:36:07.738 --rc genhtml_function_coverage=1 00:36:07.738 --rc genhtml_legend=1 00:36:07.738 --rc geninfo_all_blocks=1 00:36:07.738 --rc geninfo_unexecuted_blocks=1 00:36:07.738 00:36:07.738 ' 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:07.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:07.738 --rc genhtml_branch_coverage=1 00:36:07.738 --rc genhtml_function_coverage=1 00:36:07.738 --rc genhtml_legend=1 00:36:07.738 --rc geninfo_all_blocks=1 00:36:07.738 --rc geninfo_unexecuted_blocks=1 00:36:07.738 00:36:07.738 ' 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:07.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:07.738 --rc genhtml_branch_coverage=1 00:36:07.738 --rc genhtml_function_coverage=1 00:36:07.738 --rc genhtml_legend=1 00:36:07.738 --rc geninfo_all_blocks=1 00:36:07.738 --rc geninfo_unexecuted_blocks=1 00:36:07.738 00:36:07.738 ' 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:07.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:07.738 --rc genhtml_branch_coverage=1 00:36:07.738 --rc genhtml_function_coverage=1 00:36:07.738 --rc genhtml_legend=1 00:36:07.738 --rc geninfo_all_blocks=1 00:36:07.738 --rc geninfo_unexecuted_blocks=1 00:36:07.738 00:36:07.738 ' 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:07.738 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:36:07.739 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:36:07.739 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:36:07.739 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:07.739 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:36:07.739 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:36:07.739 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:07.739 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:07.739 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:36:07.739 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:07.739 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:07.739 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:07.739 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.739 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.739 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.739 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:36:07.739 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.739 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:36:07.739 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:36:07.739 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:36:07.739 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:36:07.739 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@50 -- # : 0 00:36:07.739 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:36:07.739 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:36:07.739 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:36:07.739 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:07.739 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:07.739 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 
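The `lt 1.15 2` gate traced earlier (the lcov version check routed through `cmp_versions` in scripts/common.sh) compares dotted versions field by field numerically, not lexicographically. A minimal sketch under that reading; the helper name follows the trace, and the upstream implementation may differ in detail:

```shell
#!/usr/bin/env bash
# Minimal sketch of the lt/cmp_versions helpers traced above: split both
# versions on '.' and '-', pad missing fields with zeros, and compare each
# field numerically. "lt" returns success iff the first version is lower.
lt() {
    local IFS=.- v a b max
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        a=${ver1[v]:-0}; b=${ver2[v]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1    # equal is not "less than"
}

lt 1.15 2 && echo "1.15 < 2"
```

Numeric comparison is the point: a naive string sort would rank 1.9 above 1.15.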
00:36:07.739 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:36:07.739 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:36:07.739 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:36:07.739 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@54 -- # have_pci_nics=0 00:36:07.739 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:36:07.739 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:36:07.739 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:07.739 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@296 -- # prepare_net_devs 00:36:07.739 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # local -g is_hw=no 00:36:07.739 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@260 -- # remove_target_ns 00:36:07.739 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:36:07.739 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:36:07.739 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_target_ns 00:36:07.739 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:36:07.739 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:36:07.739 12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # xtrace_disable 00:36:07.739 
12:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@131 -- # pci_devs=() 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@131 -- # local -a pci_devs 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@132 -- # pci_net_devs=() 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@133 -- # pci_drivers=() 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@133 -- # local -A pci_drivers 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@135 -- # net_devs=() 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@135 -- # local -ga net_devs 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@136 -- # e810=() 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@136 -- # local -ga e810 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@137 -- # x722=() 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@137 -- # local -ga x722 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@138 -- # mlx=() 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@138 -- # local -ga mlx 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@141 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 
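The `e810`/`x722`/`mlx` arrays built above bucket PCI functions by vendor:device ID; sysfs exposes those IDs in `<bdf>/vendor` and `<bdf>/device`. A minimal, hedged sketch of that bucketing for the Intel E810 case (0x8086 with device 0x1592 or 0x159b); the parameterized root and the fake sysfs tree are our own so the sketch runs without real hardware:

```shell
#!/usr/bin/env bash
# Minimal sketch of the vendor:device bucketing traced above: read each PCI
# function's vendor and device IDs from sysfs and collect Intel E810 NICs
# (0x8086:0x1592 / 0x8086:0x159b) into an e810 array, as pci_bus_cache does.
classify_e810() {
    local root=$1 intel=0x8086 pci vendor device
    e810=()
    for pci in "$root"/*; do
        [ -e "$pci/vendor" ] || continue
        vendor=$(< "$pci/vendor")
        device=$(< "$pci/device")
        [[ $vendor == "$intel" && ( $device == 0x1592 || $device == 0x159b ) ]] &&
            e810+=("${pci##*/}")
    done
    return 0
}

# Fake sysfs tree mirroring the two functions found in this run.
root=$(mktemp -d)
mkdir -p "$root/0000:86:00.0" "$root/0000:86:00.1"
echo 0x8086 > "$root/0000:86:00.0/vendor"; echo 0x159b > "$root/0000:86:00.0/device"
echo 0x8086 > "$root/0000:86:00.1/vendor"; echo 0x159b > "$root/0000:86:00.1/device"
classify_e810 "$root"
echo "E810 functions: ${e810[*]}"   # E810 functions: 0000:86:00.0 0000:86:00.1
```

Against the real `/sys/bus/pci/devices` this yields exactly the two 0x159b functions the log reports at 0000:86:00.0 and 0000:86:00.1.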
00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:36:14.312 Found 0000:86:00.0 (0x8086 - 0x159b) 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:36:14.312 Found 0000:86:00.1 (0x8086 - 0x159b) 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@192 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@234 -- # [[ up == up ]] 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:36:14.312 Found net devices under 0000:86:00.0: cvl_0_0 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 
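The `pci_net_devs` step traced above maps a PCI function to its network interfaces: the kernel lists them under `/sys/bus/pci/devices/<bdf>/net/`, and the `##*/` expansion strips the path to leave names like `cvl_0_0`. A minimal sketch; the parameterized root and fake tree are our own, for illustration:

```shell
#!/usr/bin/env bash
# Minimal sketch of the PCI-to-netdev mapping traced above: glob the
# function's net/ directory in sysfs, then strip the leading path with the
# same ##*/ expansion the script uses, leaving only interface names.
net_devs_for_pci() {
    local root=$1 pci=$2
    local -a pci_net_devs=("$root/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")    # keep only interface names
    echo "${pci_net_devs[*]}"
}

root=$(mktemp -d)
mkdir -p "$root/0000:86:00.0/net/cvl_0_0"
net_devs_for_pci "$root" 0000:86:00.0   # cvl_0_0
```

This is why the log prints "Found net devices under 0000:86:00.0: cvl_0_0" before appending the name to `net_devs`.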
00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@234 -- # [[ up == up ]] 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:36:14.312 Found net devices under 0000:86:00.1: cvl_0_1 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # is_hw=yes 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:36:14.312 12:19:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@257 -- # create_target_ns 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@27 -- # local -gA dev_map 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@28 -- # local -g _dev 00:36:14.312 12:19:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:36:14.312 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@44 -- # ips=() 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@67 -- # [[ phy == 
veth ]] 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772161 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:36:14.313 10.0.0.1 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772162 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee 
/sys/class/net/cvl_0_1/ifalias 00:36:14.313 10.0.0.2 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j 
ACCEPT 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@38 -- # ping_ips 1 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:36:14.313 12:19:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=initiator0 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@92 -- # ip netns exec 
nvmf_ns_spdk ping -c 1 10.0.0.1 00:36:14.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:14.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.428 ms 00:36:14.313 00:36:14.313 --- 10.0.0.1 ping statistics --- 00:36:14.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:14.313 rtt min/avg/max/mdev = 0.428/0.428/0.428/0.000 ms 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev target0 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=target0 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:36:14.313 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # eval 'ip 
netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:36:14.314 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
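In the trace above, setup.sh carries addresses as 32-bit integers (the pool starts at 167772161, i.e. 0x0A000001) and val_to_ip renders them with `printf '%u.%u.%u.%u'`. The function body below is a reconstruction inferred from that printf call in the log, not copied from setup.sh:

```shell
#!/usr/bin/env bash
# Reconstruction of setup.sh's val_to_ip: split a 32-bit integer into
# four octets and print them dotted, as the trace does for 167772161.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(((val >> 24) & 0xff)) \
        $(((val >> 16) & 0xff)) \
        $(((val >> 8) & 0xff)) \
        $((val & 0xff))
}

val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2
```

setup_interface_pair then assigns `ip` and `ip+1` to the pair (`ips=("$ip" $((++ip)))`), which is why cvl_0_0 gets 10.0.0.1 and cvl_0_1 gets 10.0.0.2.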
00:36:14.314 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:36:14.314 00:36:14.314 --- 10.0.0.2 ping statistics --- 00:36:14.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:14.314 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # (( pair++ )) 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@270 -- # return 0 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev initiator0 
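Each helper traced here (set_ip, set_up, get_ip_address, ping_ip) takes an optional `in_ns` argument: when it names NVMF_TARGET_NS_CMD, the command is eval'd behind the `ip netns exec nvmf_ns_spdk` prefix resolved through a bash nameref (`local -n ns=...`), otherwise it runs in the default namespace. A minimal sketch of that dispatch, with an echo stand-in for the real netns prefix so it runs unprivileged:

```shell
#!/usr/bin/env bash
# Sketch of the in_ns dispatch used by set_ip/set_up/get_ip_address.
# NVMF_TARGET_NS_CMD here is a harmless echo stand-in for the real
# (ip netns exec nvmf_ns_spdk) command array.
NVMF_TARGET_NS_CMD=(echo ip netns exec nvmf_ns_spdk)

run_cmd() {
    local in_ns=$1; shift
    if [[ -n $in_ns ]]; then
        local -n ns=$in_ns     # nameref, as in 'local -n ns=NVMF_TARGET_NS_CMD'
        eval "${ns[*]} $*"
    else
        eval " $*"
    fi
}

run_cmd NVMF_TARGET_NS_CMD ip link set cvl_0_1 up
run_cmd "" echo runs in the default namespace
```

This is why the eval'd strings in the log alternate between a bare leading space (`eval ' ip addr add ...'`) and the namespace prefix (`eval 'ip netns exec nvmf_ns_spdk ...'`).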
00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=initiator0 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:36:14.314 12:19:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=initiator1 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # return 1 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev= 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@169 -- # return 0 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev target0 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=target0 00:36:14.314 12:19:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:14.314 12:19:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev target1 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=target1 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # return 1 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev= 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@169 -- # return 0 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # 
timing_enter start_nvmf_tgt 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # nvmfpid=314938 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@329 -- # waitforlisten 314938 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 314938 ']' 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:14.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:14.314 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:14.314 [2024-12-05 12:19:47.820341] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:14.315 [2024-12-05 12:19:47.821255] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:36:14.315 [2024-12-05 12:19:47.821287] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:14.315 [2024-12-05 12:19:47.896370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:14.315 [2024-12-05 12:19:47.936321] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:14.315 [2024-12-05 12:19:47.936356] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:14.315 [2024-12-05 12:19:47.936363] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:14.315 [2024-12-05 12:19:47.936373] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:14.315 [2024-12-05 12:19:47.936378] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:14.315 [2024-12-05 12:19:47.936927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:14.315 [2024-12-05 12:19:48.002820] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:14.315 [2024-12-05 12:19:48.003007] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
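nvmfappstart above backgrounds nvmf_tgt (pid 314938) inside the namespace and then calls `waitforlisten 314938`, which polls until the process exposes /var/tmp/spdk.sock; that is what the "Waiting for process to start up and listen on UNIX domain socket..." message belongs to. A simplified version of that wait loop (the real one in autotest_common.sh also retries the RPC connection; this sketch only checks process liveness and the socket path):

```shell
#!/usr/bin/env bash
# Simplified waitforlisten: fail fast if the pid dies, succeed once the
# UNIX-domain RPC socket exists, give up after max_retries polls.
waitforlisten() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} max_retries=${3:-100} i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target process exited
        [[ -S $sock ]] && return 0               # RPC socket is up
        sleep 0.1
    done
    return 1                                     # timed out
}
```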
00:36:14.315 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:14.315 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:36:14.315 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:36:14.315 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:14.315 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:14.315 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:14.315 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:36:14.315 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.315 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:14.315 [2024-12-05 12:19:48.069679] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:14.315 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.315 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:14.315 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.315 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:14.315 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
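Condensed, the rpc_cmd provisioning sequence in this zcopy test is: create a zero-copy TCP transport, create subsystem cnode1, add the data and discovery listeners on 10.0.0.2:4420, create the malloc bdev, and attach it as a namespace. Sketched as a dry run (`rpc` echoes instead of invoking scripts/rpc.py against /var/tmp/spdk.sock, so this shows the shape of the calls, not a working target setup):

```shell
#!/usr/bin/env bash
# Dry-run of the rpc_cmd sequence from zcopy.sh; each line mirrors a call
# seen in the trace. rpc() just echoes -- the real wrapper talks to the
# target over the /var/tmp/spdk.sock RPC socket.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc bdev_malloc_create 32 4096 -b malloc0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
```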
00:36:14.315 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@20 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:14.315 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.315 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:14.315 [2024-12-05 12:19:48.097886] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:14.315 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.315 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:14.315 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.315 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:14.315 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.315 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:36:14.315 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.315 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:14.315 malloc0 00:36:14.315 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.315 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:36:14.315 12:19:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.315 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:14.315 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.315 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:36:14.315 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@28 -- # gen_nvmf_target_json 00:36:14.315 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # config=() 00:36:14.315 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # local subsystem config 00:36:14.315 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:36:14.315 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:36:14.315 { 00:36:14.315 "params": { 00:36:14.315 "name": "Nvme$subsystem", 00:36:14.315 "trtype": "$TEST_TRANSPORT", 00:36:14.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:14.315 "adrfam": "ipv4", 00:36:14.315 "trsvcid": "$NVMF_PORT", 00:36:14.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:14.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:14.315 "hdgst": ${hdgst:-false}, 00:36:14.315 "ddgst": ${ddgst:-false} 00:36:14.315 }, 00:36:14.315 "method": "bdev_nvme_attach_controller" 00:36:14.315 } 00:36:14.315 EOF 00:36:14.315 )") 00:36:14.315 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@394 -- # cat 00:36:14.315 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@396 -- # jq . 
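gen_nvmf_target_json above builds bdevperf's `--json /dev/fd/62` config by expanding a heredoc fragment per subsystem and piping the concatenation through `jq .`. A trimmed reconstruction of the template expansion (single subsystem, jq omitted; the variable values are the ones the trace resolved):

```shell
#!/usr/bin/env bash
# Trimmed gen_nvmf_target_json: expand the heredoc once for subsystem 1
# with the addresses resolved earlier in the trace (10.0.0.2:4420).
gen_config() {
    local subsystem=1 NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
gen_config
```

The expanded output matches the final `printf '%s\n' '{ ... }'` block in the log, where the `$TEST_TRANSPORT`/`$NVMF_FIRST_TARGET_IP` placeholders have become `tcp` and `10.0.0.2`.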
00:36:14.315 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@397 -- # IFS=, 00:36:14.315 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:36:14.315 "params": { 00:36:14.315 "name": "Nvme1", 00:36:14.315 "trtype": "tcp", 00:36:14.315 "traddr": "10.0.0.2", 00:36:14.315 "adrfam": "ipv4", 00:36:14.315 "trsvcid": "4420", 00:36:14.315 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:14.315 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:14.315 "hdgst": false, 00:36:14.315 "ddgst": false 00:36:14.315 }, 00:36:14.315 "method": "bdev_nvme_attach_controller" 00:36:14.315 }' 00:36:14.315 [2024-12-05 12:19:48.195514] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:36:14.315 [2024-12-05 12:19:48.195563] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid315010 ] 00:36:14.315 [2024-12-05 12:19:48.269925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:14.315 [2024-12-05 12:19:48.311917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:14.575 Running I/O for 10 seconds... 
00:36:16.449 8558.00 IOPS, 66.86 MiB/s [2024-12-05T11:19:52.023Z] 8586.50 IOPS, 67.08 MiB/s [2024-12-05T11:19:52.959Z] 8617.00 IOPS, 67.32 MiB/s [2024-12-05T11:19:53.897Z] 8656.50 IOPS, 67.63 MiB/s [2024-12-05T11:19:54.833Z] 8667.00 IOPS, 67.71 MiB/s [2024-12-05T11:19:55.770Z] 8668.33 IOPS, 67.72 MiB/s [2024-12-05T11:19:56.834Z] 8668.57 IOPS, 67.72 MiB/s [2024-12-05T11:19:57.771Z] 8663.75 IOPS, 67.69 MiB/s [2024-12-05T11:19:58.710Z] 8671.67 IOPS, 67.75 MiB/s [2024-12-05T11:19:58.710Z] 8669.60 IOPS, 67.73 MiB/s 00:36:24.514 Latency(us) 00:36:24.514 [2024-12-05T11:19:58.710Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:24.514 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:36:24.514 Verification LBA range: start 0x0 length 0x1000 00:36:24.514 Nvme1n1 : 10.01 8672.29 67.75 0.00 0.00 14717.13 1997.29 20846.69 00:36:24.514 [2024-12-05T11:19:58.710Z] =================================================================================================================== 00:36:24.514 [2024-12-05T11:19:58.710Z] Total : 8672.29 67.75 0.00 0.00 14717.13 1997.29 20846.69 00:36:24.774 12:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@34 -- # perfpid=316786 00:36:24.774 12:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@36 -- # xtrace_disable 00:36:24.774 12:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:24.774 12:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:36:24.774 12:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@32 -- # gen_nvmf_target_json 00:36:24.774 12:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # config=() 00:36:24.774 12:19:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # local subsystem config 00:36:24.774 12:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:36:24.774 12:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:36:24.774 { 00:36:24.774 "params": { 00:36:24.774 "name": "Nvme$subsystem", 00:36:24.774 "trtype": "$TEST_TRANSPORT", 00:36:24.774 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:24.774 "adrfam": "ipv4", 00:36:24.774 "trsvcid": "$NVMF_PORT", 00:36:24.774 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:24.774 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:24.774 "hdgst": ${hdgst:-false}, 00:36:24.774 "ddgst": ${ddgst:-false} 00:36:24.774 }, 00:36:24.774 "method": "bdev_nvme_attach_controller" 00:36:24.774 } 00:36:24.774 EOF 00:36:24.774 )") 00:36:24.774 [2024-12-05 12:19:58.789263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:24.774 [2024-12-05 12:19:58.789293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:24.774 12:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@394 -- # cat 00:36:24.774 12:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@396 -- # jq . 
00:36:24.774 12:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@397 -- # IFS=, 00:36:24.774 12:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:36:24.774 "params": { 00:36:24.774 "name": "Nvme1", 00:36:24.774 "trtype": "tcp", 00:36:24.774 "traddr": "10.0.0.2", 00:36:24.774 "adrfam": "ipv4", 00:36:24.774 "trsvcid": "4420", 00:36:24.774 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:24.774 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:24.774 "hdgst": false, 00:36:24.774 "ddgst": false 00:36:24.774 }, 00:36:24.774 "method": "bdev_nvme_attach_controller" 00:36:24.774 }' 00:36:24.774 [2024-12-05 12:19:58.801230] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:24.774 [2024-12-05 12:19:58.801243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:24.774 [2024-12-05 12:19:58.813228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:24.774 [2024-12-05 12:19:58.813237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:24.774 [2024-12-05 12:19:58.825225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:24.774 [2024-12-05 12:19:58.825234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:24.774 [2024-12-05 12:19:58.831812] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:36:24.774 [2024-12-05 12:19:58.831852] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid316786 ] 00:36:24.774 [2024-12-05 12:19:58.837227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:24.774 [2024-12-05 12:19:58.837237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:24.774 [2024-12-05 12:19:58.849226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:24.774 [2024-12-05 12:19:58.849235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:24.774 [2024-12-05 12:19:58.861228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:24.774 [2024-12-05 12:19:58.861237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:24.774 [2024-12-05 12:19:58.873227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:24.774 [2024-12-05 12:19:58.873236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:24.774 [2024-12-05 12:19:58.885225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:24.774 [2024-12-05 12:19:58.885234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:24.774 [2024-12-05 12:19:58.897225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:24.774 [2024-12-05 12:19:58.897233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:24.774 [2024-12-05 12:19:58.906721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:24.774 [2024-12-05 12:19:58.909225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:36:24.774 [2024-12-05 12:19:58.909234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:24.774 [2024-12-05 12:19:58.921227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:24.774 [2024-12-05 12:19:58.921241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:24.774 [2024-12-05 12:19:58.933225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:24.774 [2024-12-05 12:19:58.933234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:24.774 [2024-12-05 12:19:58.945225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:24.774 [2024-12-05 12:19:58.945235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:24.774 [2024-12-05 12:19:58.947862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:24.774 [2024-12-05 12:19:58.957230] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:24.774 [2024-12-05 12:19:58.957241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:24.774 [2024-12-05 12:19:58.969244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:24.774 [2024-12-05 12:19:58.969267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.033 [2024-12-05 12:19:58.981232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.033 [2024-12-05 12:19:58.981246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.033 [2024-12-05 12:19:58.993229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.033 [2024-12-05 12:19:58.993241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.033 [2024-12-05 12:19:59.005232] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.033 [2024-12-05 12:19:59.005243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.033 [2024-12-05 12:19:59.017226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.033 [2024-12-05 12:19:59.017237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.033 [2024-12-05 12:19:59.029236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.033 [2024-12-05 12:19:59.029252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.033 [2024-12-05 12:19:59.041234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.033 [2024-12-05 12:19:59.041250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.033 [2024-12-05 12:19:59.053231] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.033 [2024-12-05 12:19:59.053245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.033 [2024-12-05 12:19:59.065229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.033 [2024-12-05 12:19:59.065241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.033 [2024-12-05 12:19:59.077226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.033 [2024-12-05 12:19:59.077236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.034 [2024-12-05 12:19:59.089226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.034 [2024-12-05 12:19:59.089236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.034 [2024-12-05 12:19:59.101229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:36:25.034 [2024-12-05 12:19:59.101241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.034 [2024-12-05 12:19:59.113228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.034 [2024-12-05 12:19:59.113243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.034 [2024-12-05 12:19:59.125226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.034 [2024-12-05 12:19:59.125235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.034 [2024-12-05 12:19:59.137225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.034 [2024-12-05 12:19:59.137233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.034 [2024-12-05 12:19:59.149225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.034 [2024-12-05 12:19:59.149234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.034 [2024-12-05 12:19:59.161228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.034 [2024-12-05 12:19:59.161240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.034 [2024-12-05 12:19:59.173226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.034 [2024-12-05 12:19:59.173235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.034 [2024-12-05 12:19:59.185226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.034 [2024-12-05 12:19:59.185234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.034 [2024-12-05 12:19:59.197228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.034 
[2024-12-05 12:19:59.197240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.034 [2024-12-05 12:19:59.209225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.034 [2024-12-05 12:19:59.209234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.034 [2024-12-05 12:19:59.221225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.034 [2024-12-05 12:19:59.221233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.293 [2024-12-05 12:19:59.233227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.293 [2024-12-05 12:19:59.233237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.293 [2024-12-05 12:19:59.245288] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.293 [2024-12-05 12:19:59.245302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.293 [2024-12-05 12:19:59.257230] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.293 [2024-12-05 12:19:59.257246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.293 Running I/O for 5 seconds... 
00:36:25.293 [2024-12-05 12:19:59.271191] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.293 [2024-12-05 12:19:59.271210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.293 [2024-12-05 12:19:59.285910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.293 [2024-12-05 12:19:59.285928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.293 [2024-12-05 12:19:59.301357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.293 [2024-12-05 12:19:59.301380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.293 [2024-12-05 12:19:59.312640] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.293 [2024-12-05 12:19:59.312658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.293 [2024-12-05 12:19:59.326895] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.293 [2024-12-05 12:19:59.326912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.293 [2024-12-05 12:19:59.341471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.293 [2024-12-05 12:19:59.341489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.293 [2024-12-05 12:19:59.353771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.293 [2024-12-05 12:19:59.353789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.293 [2024-12-05 12:19:59.366953] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.293 [2024-12-05 12:19:59.366971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.293 [2024-12-05 12:19:59.381605] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.293 [2024-12-05 12:19:59.381622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.293 [2024-12-05 12:19:59.393880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.293 [2024-12-05 12:19:59.393898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.293 [2024-12-05 12:19:59.408759] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.293 [2024-12-05 12:19:59.408777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.293 [2024-12-05 12:19:59.422813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.293 [2024-12-05 12:19:59.422832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.293 [2024-12-05 12:19:59.437378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.293 [2024-12-05 12:19:59.437397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.293 [2024-12-05 12:19:59.451141] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.293 [2024-12-05 12:19:59.451159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.293 [2024-12-05 12:19:59.465390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.293 [2024-12-05 12:19:59.465407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.293 [2024-12-05 12:19:59.478319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.293 [2024-12-05 12:19:59.478337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.552 [2024-12-05 12:19:59.493375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:36:25.552 [2024-12-05 12:19:59.493394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.552 [2024-12-05 12:19:59.504633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.552 [2024-12-05 12:19:59.504651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.552 [2024-12-05 12:19:59.519017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.552 [2024-12-05 12:19:59.519035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.552 [2024-12-05 12:19:59.533710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.552 [2024-12-05 12:19:59.533728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.552 [2024-12-05 12:19:59.548957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.552 [2024-12-05 12:19:59.548980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.552 [2024-12-05 12:19:59.563322] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.552 [2024-12-05 12:19:59.563340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.552 [2024-12-05 12:19:59.577827] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.552 [2024-12-05 12:19:59.577847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.552 [2024-12-05 12:19:59.589591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.552 [2024-12-05 12:19:59.589610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.552 [2024-12-05 12:19:59.603517] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.552 
[2024-12-05 12:19:59.603535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.552 [2024-12-05 12:19:59.618570] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.552 [2024-12-05 12:19:59.618588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.552 [2024-12-05 12:19:59.633288] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.552 [2024-12-05 12:19:59.633306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.552 [2024-12-05 12:19:59.647109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.552 [2024-12-05 12:19:59.647127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.552 [2024-12-05 12:19:59.662059] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.552 [2024-12-05 12:19:59.662077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.552 [2024-12-05 12:19:59.677243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.552 [2024-12-05 12:19:59.677262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.552 [2024-12-05 12:19:59.690767] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.552 [2024-12-05 12:19:59.690787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.552 [2024-12-05 12:19:59.705247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.552 [2024-12-05 12:19:59.705267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.552 [2024-12-05 12:19:59.718002] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.552 [2024-12-05 12:19:59.718022] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.552 [2024-12-05 12:19:59.730840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.552 [2024-12-05 12:19:59.730858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.552 [2024-12-05 12:19:59.745392] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.552 [2024-12-05 12:19:59.745410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.812 [2024-12-05 12:19:59.757014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.812 [2024-12-05 12:19:59.757033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.812 [2024-12-05 12:19:59.770933] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.812 [2024-12-05 12:19:59.770951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.812 [2024-12-05 12:19:59.785659] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.812 [2024-12-05 12:19:59.785677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.812 [2024-12-05 12:19:59.798112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.812 [2024-12-05 12:19:59.798130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.812 [2024-12-05 12:19:59.813810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.812 [2024-12-05 12:19:59.813832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.812 [2024-12-05 12:19:59.828847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.812 [2024-12-05 12:19:59.828866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:36:25.812 [2024-12-05 12:19:59.843166] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.812 [2024-12-05 12:19:59.843184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.812 [2024-12-05 12:19:59.857918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.812 [2024-12-05 12:19:59.857936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.812 [2024-12-05 12:19:59.873349] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.812 [2024-12-05 12:19:59.873373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.812 [2024-12-05 12:19:59.886570] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.812 [2024-12-05 12:19:59.886588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.812 [2024-12-05 12:19:59.900811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.812 [2024-12-05 12:19:59.900829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.812 [2024-12-05 12:19:59.915075] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.812 [2024-12-05 12:19:59.915094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.812 [2024-12-05 12:19:59.929527] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.812 [2024-12-05 12:19:59.929545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.812 [2024-12-05 12:19:59.945293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.812 [2024-12-05 12:19:59.945312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.812 [2024-12-05 12:19:59.958824] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.812 [2024-12-05 12:19:59.958842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.812 [2024-12-05 12:19:59.973638] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.812 [2024-12-05 12:19:59.973655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.812 [2024-12-05 12:19:59.989319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.812 [2024-12-05 12:19:59.989339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.812 [2024-12-05 12:20:00.002861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.812 [2024-12-05 12:20:00.002881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.071 [2024-12-05 12:20:00.018574] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.071 [2024-12-05 12:20:00.018593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.071 [2024-12-05 12:20:00.033157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.071 [2024-12-05 12:20:00.033179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.071 [2024-12-05 12:20:00.045014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.071 [2024-12-05 12:20:00.045032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.071 [2024-12-05 12:20:00.059163] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.071 [2024-12-05 12:20:00.059182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.071 [2024-12-05 12:20:00.073984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:36:26.071 [2024-12-05 12:20:00.074004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.071 [2024-12-05 12:20:00.089432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.071 [2024-12-05 12:20:00.089457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.071 [2024-12-05 12:20:00.101997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.071 [2024-12-05 12:20:00.102015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.071 [2024-12-05 12:20:00.116946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.071 [2024-12-05 12:20:00.116966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.071 [2024-12-05 12:20:00.130891] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.071 [2024-12-05 12:20:00.130910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.071 [2024-12-05 12:20:00.145740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.071 [2024-12-05 12:20:00.145758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.071 [2024-12-05 12:20:00.161443] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.071 [2024-12-05 12:20:00.161461] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.071 [2024-12-05 12:20:00.172129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.071 [2024-12-05 12:20:00.172147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.071 [2024-12-05 12:20:00.187507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.071 
[2024-12-05 12:20:00.187525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.071
[2024-12-05 12:20:00.201980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.071
[2024-12-05 12:20:00.201998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.071
[... the same pair of *ERROR* entries (subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use" followed by nvmf_rpc.c:1520:nvmf_rpc_ns_paused: "Unable to add namespace") repeats roughly every 10-15 ms from 12:20:00.218 through 12:20:02.353; repeated entries condensed ...]
16807.00 IOPS, 131.30 MiB/s [2024-12-05T11:20:00.527Z]
16834.00 IOPS, 131.52 MiB/s [2024-12-05T11:20:01.304Z]
16890.00 IOPS, 131.95 MiB/s [2024-12-05T11:20:02.336Z]
[2024-12-05 12:20:02.353016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.399 [2024-12-05 12:20:02.353034]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.399 [2024-12-05 12:20:02.366422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.399 [2024-12-05 12:20:02.366440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.399 [2024-12-05 12:20:02.381220] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.399 [2024-12-05 12:20:02.381239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.399 [2024-12-05 12:20:02.395253] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.399 [2024-12-05 12:20:02.395270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.399 [2024-12-05 12:20:02.410121] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.399 [2024-12-05 12:20:02.410138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.399 [2024-12-05 12:20:02.425809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.399 [2024-12-05 12:20:02.425826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.399 [2024-12-05 12:20:02.438210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.399 [2024-12-05 12:20:02.438227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.399 [2024-12-05 12:20:02.453078] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.399 [2024-12-05 12:20:02.453096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.399 [2024-12-05 12:20:02.466015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.399 [2024-12-05 12:20:02.466033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:36:28.399 [2024-12-05 12:20:02.479271] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.399 [2024-12-05 12:20:02.479288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.399 [2024-12-05 12:20:02.494229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.399 [2024-12-05 12:20:02.494246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.399 [2024-12-05 12:20:02.509239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.399 [2024-12-05 12:20:02.509256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.399 [2024-12-05 12:20:02.522448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.399 [2024-12-05 12:20:02.522465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.399 [2024-12-05 12:20:02.537256] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.399 [2024-12-05 12:20:02.537273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.399 [2024-12-05 12:20:02.550834] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.399 [2024-12-05 12:20:02.550852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.399 [2024-12-05 12:20:02.565217] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.399 [2024-12-05 12:20:02.565235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.399 [2024-12-05 12:20:02.577911] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.399 [2024-12-05 12:20:02.577930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.399 [2024-12-05 12:20:02.591348] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.399 [2024-12-05 12:20:02.591372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.658 [2024-12-05 12:20:02.606039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.658 [2024-12-05 12:20:02.606060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.658 [2024-12-05 12:20:02.621380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.658 [2024-12-05 12:20:02.621400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.658 [2024-12-05 12:20:02.635045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.658 [2024-12-05 12:20:02.635063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.658 [2024-12-05 12:20:02.649660] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.658 [2024-12-05 12:20:02.649678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.658 [2024-12-05 12:20:02.664848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.658 [2024-12-05 12:20:02.664867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.658 [2024-12-05 12:20:02.679211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.658 [2024-12-05 12:20:02.679230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.658 [2024-12-05 12:20:02.693583] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.658 [2024-12-05 12:20:02.693601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.658 [2024-12-05 12:20:02.709893] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:36:28.658 [2024-12-05 12:20:02.709912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.658 [2024-12-05 12:20:02.725546] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.658 [2024-12-05 12:20:02.725564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.658 [2024-12-05 12:20:02.738714] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.658 [2024-12-05 12:20:02.738732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.658 [2024-12-05 12:20:02.753198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.658 [2024-12-05 12:20:02.753217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.658 [2024-12-05 12:20:02.766997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.658 [2024-12-05 12:20:02.767016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.658 [2024-12-05 12:20:02.781062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.658 [2024-12-05 12:20:02.781081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.658 [2024-12-05 12:20:02.794721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.658 [2024-12-05 12:20:02.794740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.658 [2024-12-05 12:20:02.808890] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.658 [2024-12-05 12:20:02.808908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.658 [2024-12-05 12:20:02.822382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.658 
[2024-12-05 12:20:02.822401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.658 [2024-12-05 12:20:02.837618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.658 [2024-12-05 12:20:02.837637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.658 [2024-12-05 12:20:02.852739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.658 [2024-12-05 12:20:02.852757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.917 [2024-12-05 12:20:02.866935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.917 [2024-12-05 12:20:02.866953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.917 [2024-12-05 12:20:02.881262] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.917 [2024-12-05 12:20:02.881280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.917 [2024-12-05 12:20:02.893829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.917 [2024-12-05 12:20:02.893846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.917 [2024-12-05 12:20:02.909086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.917 [2024-12-05 12:20:02.909103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.917 [2024-12-05 12:20:02.923114] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.917 [2024-12-05 12:20:02.923131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.917 [2024-12-05 12:20:02.937871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.917 [2024-12-05 12:20:02.937889] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.917 [2024-12-05 12:20:02.949832] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.917 [2024-12-05 12:20:02.949849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.917 [2024-12-05 12:20:02.965664] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.917 [2024-12-05 12:20:02.965682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.917 [2024-12-05 12:20:02.981913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.917 [2024-12-05 12:20:02.981932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.917 [2024-12-05 12:20:02.997160] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.917 [2024-12-05 12:20:02.997187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.917 [2024-12-05 12:20:03.010038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.917 [2024-12-05 12:20:03.010057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.917 [2024-12-05 12:20:03.025091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.917 [2024-12-05 12:20:03.025109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.917 [2024-12-05 12:20:03.037811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.917 [2024-12-05 12:20:03.037829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.917 [2024-12-05 12:20:03.053552] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.917 [2024-12-05 12:20:03.053569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:36:28.917 [2024-12-05 12:20:03.069463] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.917 [2024-12-05 12:20:03.069482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.917 [2024-12-05 12:20:03.082770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.917 [2024-12-05 12:20:03.082790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.917 [2024-12-05 12:20:03.097030] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.917 [2024-12-05 12:20:03.097049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.917 [2024-12-05 12:20:03.109868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.917 [2024-12-05 12:20:03.109885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.176 [2024-12-05 12:20:03.122598] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.176 [2024-12-05 12:20:03.122617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.176 [2024-12-05 12:20:03.137948] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.176 [2024-12-05 12:20:03.137965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.176 [2024-12-05 12:20:03.152854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.176 [2024-12-05 12:20:03.152872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.176 [2024-12-05 12:20:03.166788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.176 [2024-12-05 12:20:03.166806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.176 [2024-12-05 12:20:03.181248] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.176 [2024-12-05 12:20:03.181266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.176 [2024-12-05 12:20:03.193442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.176 [2024-12-05 12:20:03.193460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.176 [2024-12-05 12:20:03.206785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.176 [2024-12-05 12:20:03.206803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.176 [2024-12-05 12:20:03.221360] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.176 [2024-12-05 12:20:03.221383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.176 [2024-12-05 12:20:03.233865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.176 [2024-12-05 12:20:03.233883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.176 [2024-12-05 12:20:03.248856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.176 [2024-12-05 12:20:03.248874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.176 [2024-12-05 12:20:03.263050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.176 [2024-12-05 12:20:03.263073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.176 16913.75 IOPS, 132.14 MiB/s [2024-12-05T11:20:03.372Z] [2024-12-05 12:20:03.278063] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.176 [2024-12-05 12:20:03.278081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.176 [2024-12-05 12:20:03.293151] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.176 [2024-12-05 12:20:03.293171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.176 [2024-12-05 12:20:03.307120] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.176 [2024-12-05 12:20:03.307138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.176 [2024-12-05 12:20:03.322034] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.176 [2024-12-05 12:20:03.322052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.176 [2024-12-05 12:20:03.337060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.176 [2024-12-05 12:20:03.337078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.176 [2024-12-05 12:20:03.350666] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.176 [2024-12-05 12:20:03.350684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.176 [2024-12-05 12:20:03.365520] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.176 [2024-12-05 12:20:03.365538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.435 [2024-12-05 12:20:03.381192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.435 [2024-12-05 12:20:03.381211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.435 [2024-12-05 12:20:03.394070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.435 [2024-12-05 12:20:03.394089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.435 [2024-12-05 12:20:03.409192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:36:29.435 [2024-12-05 12:20:03.409210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.435 [2024-12-05 12:20:03.421874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.435 [2024-12-05 12:20:03.421892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.435 [2024-12-05 12:20:03.436547] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.435 [2024-12-05 12:20:03.436564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.435 [2024-12-05 12:20:03.449874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.435 [2024-12-05 12:20:03.449892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.435 [2024-12-05 12:20:03.464661] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.435 [2024-12-05 12:20:03.464679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.435 [2024-12-05 12:20:03.479193] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.435 [2024-12-05 12:20:03.479211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.435 [2024-12-05 12:20:03.494000] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.435 [2024-12-05 12:20:03.494017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.435 [2024-12-05 12:20:03.508989] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.435 [2024-12-05 12:20:03.509007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.435 [2024-12-05 12:20:03.522797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.435 
[2024-12-05 12:20:03.522815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.435 [2024-12-05 12:20:03.537618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.435 [2024-12-05 12:20:03.537640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.435 [2024-12-05 12:20:03.553107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.435 [2024-12-05 12:20:03.553126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.435 [2024-12-05 12:20:03.566373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.435 [2024-12-05 12:20:03.566391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.435 [2024-12-05 12:20:03.577229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.435 [2024-12-05 12:20:03.577247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.435 [2024-12-05 12:20:03.590651] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.435 [2024-12-05 12:20:03.590669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.435 [2024-12-05 12:20:03.605550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.435 [2024-12-05 12:20:03.605568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.435 [2024-12-05 12:20:03.621230] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.435 [2024-12-05 12:20:03.621248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.694 [2024-12-05 12:20:03.632710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.694 [2024-12-05 12:20:03.632729] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.694 [2024-12-05 12:20:03.647068] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.694 [2024-12-05 12:20:03.647086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.694 [2024-12-05 12:20:03.661966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.694 [2024-12-05 12:20:03.661984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.694 [2024-12-05 12:20:03.676551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.694 [2024-12-05 12:20:03.676569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.694 [2024-12-05 12:20:03.691147] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.694 [2024-12-05 12:20:03.691166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.694 [2024-12-05 12:20:03.705847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.694 [2024-12-05 12:20:03.705866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.694 [2024-12-05 12:20:03.720934] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.694 [2024-12-05 12:20:03.720953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.694 [2024-12-05 12:20:03.734847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.694 [2024-12-05 12:20:03.734865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.694 [2024-12-05 12:20:03.750409] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.694 [2024-12-05 12:20:03.750427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:36:29.694 [2024-12-05 12:20:03.765628] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.694 [2024-12-05 12:20:03.765646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.694 [2024-12-05 12:20:03.780958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.694 [2024-12-05 12:20:03.780977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.694 [2024-12-05 12:20:03.794960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.694 [2024-12-05 12:20:03.794978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.694 [2024-12-05 12:20:03.809834] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.694 [2024-12-05 12:20:03.809852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.694 [2024-12-05 12:20:03.825565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.694 [2024-12-05 12:20:03.825583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.695 [2024-12-05 12:20:03.838192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.695 [2024-12-05 12:20:03.838211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.695 [2024-12-05 12:20:03.853514] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.695 [2024-12-05 12:20:03.853531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.695 [2024-12-05 12:20:03.869683] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.695 [2024-12-05 12:20:03.869701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.695 [2024-12-05 12:20:03.881550] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.695 [2024-12-05 12:20:03.881566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.954 [2024-12-05 12:20:03.894776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.954 [2024-12-05 12:20:03.894794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.954 [2024-12-05 12:20:03.909694] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.954 [2024-12-05 12:20:03.909712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.954 [2024-12-05 12:20:03.925070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.954 [2024-12-05 12:20:03.925088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.954 [2024-12-05 12:20:03.938181] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.954 [2024-12-05 12:20:03.938198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.954 [2024-12-05 12:20:03.952892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.954 [2024-12-05 12:20:03.952910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.954 [2024-12-05 12:20:03.965633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.954 [2024-12-05 12:20:03.965650] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.954 [2024-12-05 12:20:03.978654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.954 [2024-12-05 12:20:03.978671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.954 [2024-12-05 12:20:03.993334] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:36:29.954 [2024-12-05 12:20:03.993352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.954 [2024-12-05 12:20:04.006008] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.954 [2024-12-05 12:20:04.006026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.954 [2024-12-05 12:20:04.020940] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.954 [2024-12-05 12:20:04.020960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.954 [2024-12-05 12:20:04.034863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.954 [2024-12-05 12:20:04.034881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.954 [2024-12-05 12:20:04.049554] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.954 [2024-12-05 12:20:04.049573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.954 [2024-12-05 12:20:04.064828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.954 [2024-12-05 12:20:04.064849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.954 [2024-12-05 12:20:04.078992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.954 [2024-12-05 12:20:04.079012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.954 [2024-12-05 12:20:04.093913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.954 [2024-12-05 12:20:04.093932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.954 [2024-12-05 12:20:04.109045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.954 
[2024-12-05 12:20:04.109064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.954 [2024-12-05 12:20:04.123300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.954 [2024-12-05 12:20:04.123319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:29.954 [2024-12-05 12:20:04.137703] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:29.954 [2024-12-05 12:20:04.137721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.240 [2024-12-05 12:20:04.153206] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.240 [2024-12-05 12:20:04.153225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.240 [2024-12-05 12:20:04.166119] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.240 [2024-12-05 12:20:04.166137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.241 [2024-12-05 12:20:04.181423] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.241 [2024-12-05 12:20:04.181442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.241 [2024-12-05 12:20:04.193316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.241 [2024-12-05 12:20:04.193335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.241 [2024-12-05 12:20:04.207017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.241 [2024-12-05 12:20:04.207035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.241 [2024-12-05 12:20:04.222118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.241 [2024-12-05 12:20:04.222137] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:30.241 16928.00 IOPS, 132.25 MiB/s [2024-12-05T11:20:04.437Z]
00:36:30.241 Latency(us)
00:36:30.241 [2024-12-05T11:20:04.437Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:30.241 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:36:30.241 Nvme1n1 : 5.01 16929.17 132.26 0.00 0.00 7553.19 1934.87 13419.28
00:36:30.241 [2024-12-05T11:20:04.437Z] ===================================================================================================================
00:36:30.241 [2024-12-05T11:20:04.437Z] Total : 16929.17 132.26 0.00 0.00 7553.19 1934.87 13419.28
00:36:30.501 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 37: kill: (316786) - No such process
00:36:30.501 12:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@44 -- # wait 316786
00:36:30.501 12:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@47 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:30.501 12:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:30.501 12:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:36:30.501 12:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:30.501 12:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@48
-- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:30.501 12:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.501 12:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:30.501 delay0 00:36:30.501 12:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.501 12:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:36:30.501 12:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.501 12:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:30.501 12:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.501 12:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:36:30.501 [2024-12-05 12:20:04.630514] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:36:38.620 Initializing NVMe Controllers 00:36:38.620 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:38.620 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:36:38.620 Initialization complete. Launching workers. 
00:36:38.620 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 3278 00:36:38.620 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 3564, failed to submit 34 00:36:38.620 success 3424, unsuccessful 140, failed 0 00:36:38.620 12:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:36:38.620 12:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@55 -- # nvmftestfini 00:36:38.620 12:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@335 -- # nvmfcleanup 00:36:38.620 12:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@99 -- # sync 00:36:38.620 12:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:36:38.620 12:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@102 -- # set +e 00:36:38.620 12:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@103 -- # for i in {1..20} 00:36:38.620 12:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:36:38.620 rmmod nvme_tcp 00:36:38.620 rmmod nvme_fabrics 00:36:38.620 rmmod nvme_keyring 00:36:38.620 12:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:36:38.620 12:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@106 -- # set -e 00:36:38.620 12:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@107 -- # return 0 00:36:38.620 12:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # '[' -n 314938 ']' 00:36:38.620 12:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@337 -- # killprocess 314938 00:36:38.620 12:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- 
# '[' -z 314938 ']' 00:36:38.620 12:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 314938 00:36:38.620 12:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:36:38.620 12:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:38.620 12:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 314938 00:36:38.620 12:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:38.620 12:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:38.620 12:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 314938' 00:36:38.620 killing process with pid 314938 00:36:38.620 12:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 314938 00:36:38.620 12:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 314938 00:36:38.620 12:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:36:38.620 12:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@342 -- # nvmf_fini 00:36:38.620 12:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@264 -- # local dev 00:36:38.620 12:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@267 -- # remove_target_ns 00:36:38.620 12:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:36:38.620 12:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> 
/dev/null' 00:36:38.620 12:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_target_ns 00:36:39.556 12:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@268 -- # delete_main_bridge 00:36:39.556 12:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:36:39.556 12:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@130 -- # return 0 00:36:39.556 12:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:36:39.556 12:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:36:39.556 12:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:36:39.556 12:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:36:39.556 12:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:36:39.556 12:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:36:39.556 12:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:36:39.556 12:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:36:39.556 12:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:36:39.556 12:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:36:39.556 12:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:36:39.556 12:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 
00:36:39.556 12:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:36:39.556 12:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:36:39.556 12:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:36:39.556 12:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:36:39.556 12:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:36:39.556 12:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@41 -- # _dev=0 00:36:39.556 12:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@41 -- # dev_map=() 00:36:39.556 12:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@284 -- # iptr 00:36:39.556 12:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@542 -- # iptables-save 00:36:39.556 12:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:36:39.556 12:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@542 -- # iptables-restore 00:36:39.556 00:36:39.556 real 0m32.123s 00:36:39.556 user 0m41.485s 00:36:39.556 sys 0m12.825s 00:36:39.556 12:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:39.556 12:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:39.556 ************************************ 00:36:39.556 END TEST nvmf_zcopy 00:36:39.556 ************************************ 00:36:39.557 12:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:36:39.557 00:36:39.557 real 4m25.544s 00:36:39.557 user 9m4.645s 00:36:39.557 sys 1m47.232s 
00:36:39.557 12:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:39.557 12:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:39.557 ************************************ 00:36:39.557 END TEST nvmf_target_core_interrupt_mode 00:36:39.557 ************************************ 00:36:39.557 12:20:13 nvmf_tcp -- nvmf/nvmf.sh@17 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:36:39.557 12:20:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:39.557 12:20:13 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:39.557 12:20:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:39.816 ************************************ 00:36:39.816 START TEST nvmf_interrupt 00:36:39.816 ************************************ 00:36:39.816 12:20:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:36:39.816 * Looking for test storage... 
00:36:39.816 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:39.816 12:20:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:39.816 12:20:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:36:39.816 12:20:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:39.816 12:20:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:39.816 12:20:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:39.816 12:20:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:39.816 12:20:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:39.816 12:20:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:36:39.816 12:20:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:36:39.816 12:20:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:36:39.816 12:20:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:36:39.816 12:20:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:36:39.816 12:20:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:36:39.816 12:20:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:36:39.816 12:20:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:39.816 12:20:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:36:39.816 12:20:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:36:39.816 12:20:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:39.816 12:20:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:39.816 12:20:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:36:39.816 12:20:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:36:39.816 12:20:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:39.816 12:20:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:36:39.816 12:20:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:36:39.816 12:20:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:36:39.816 12:20:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:36:39.816 12:20:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:39.816 12:20:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:36:39.816 12:20:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:36:39.816 12:20:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:39.816 12:20:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:39.816 12:20:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:36:39.816 12:20:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:39.816 12:20:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:39.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:39.816 --rc genhtml_branch_coverage=1 00:36:39.816 --rc genhtml_function_coverage=1 00:36:39.816 --rc genhtml_legend=1 00:36:39.816 --rc geninfo_all_blocks=1 00:36:39.816 --rc geninfo_unexecuted_blocks=1 00:36:39.816 00:36:39.816 ' 00:36:39.816 12:20:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:39.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:39.816 --rc genhtml_branch_coverage=1 00:36:39.816 --rc 
genhtml_function_coverage=1 00:36:39.816 --rc genhtml_legend=1 00:36:39.816 --rc geninfo_all_blocks=1 00:36:39.817 --rc geninfo_unexecuted_blocks=1 00:36:39.817 00:36:39.817 ' 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:39.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:39.817 --rc genhtml_branch_coverage=1 00:36:39.817 --rc genhtml_function_coverage=1 00:36:39.817 --rc genhtml_legend=1 00:36:39.817 --rc geninfo_all_blocks=1 00:36:39.817 --rc geninfo_unexecuted_blocks=1 00:36:39.817 00:36:39.817 ' 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:39.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:39.817 --rc genhtml_branch_coverage=1 00:36:39.817 --rc genhtml_function_coverage=1 00:36:39.817 --rc genhtml_legend=1 00:36:39.817 --rc geninfo_all_blocks=1 00:36:39.817 --rc geninfo_unexecuted_blocks=1 00:36:39.817 00:36:39.817 ' 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:36:39.817 
12:20:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:36:39.817 
12:20:13 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@50 -- # : 0 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@54 -- # have_pci_nics=0 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@296 -- # prepare_net_devs 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # local -g is_hw=no 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@260 -- # remove_target_ns 00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@323 -- # 
xtrace_disable_per_cmd _remove_target_ns
00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 14> /dev/null'
00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_target_ns
00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # [[ phy != virt ]]
00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs
00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # xtrace_disable
00:36:39.817 12:20:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@131 -- # pci_devs=()
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@131 -- # local -a pci_devs
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@132 -- # pci_net_devs=()
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@132 -- # local -a pci_net_devs
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@133 -- # pci_drivers=()
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@133 -- # local -A pci_drivers
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@135 -- # net_devs=()
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@135 -- # local -ga net_devs
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@136 -- # e810=()
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@136 -- # local -ga e810
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@137 -- # x722=()
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@137 -- # local -ga x722
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@138 -- # mlx=()
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@138 -- # local -ga mlx
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}")
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # [[ tcp == rdma ]]
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]]
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # [[ e810 == e810 ]]
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}")
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@177 -- # (( 2 == 0 ))
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}"
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:36:46.386 Found 0000:86:00.0 (0x8086 - 0x159b)
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@184 -- # [[ ice == unknown ]]
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@188 -- # [[ ice == unbound ]]
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@194 -- # [[ tcp == rdma ]]
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}"
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:36:46.386 Found 0000:86:00.1 (0x8086 - 0x159b)
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@184 -- # [[ ice == unknown ]]
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@188 -- # [[ ice == unbound ]]
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@194 -- # [[ tcp == rdma ]]
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@208 -- # (( 0 > 0 ))
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@214 -- # [[ e810 == e810 ]]
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@214 -- # [[ tcp == rdma ]]
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}"
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@232 -- # [[ tcp == tcp ]]
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}"
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@234 -- # [[ up == up ]]
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@238 -- # (( 1 == 0 ))
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:36:46.386 Found net devices under 0000:86:00.0: cvl_0_0
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}")
00:36:46.386 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}"
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@232 -- # [[ tcp == tcp ]]
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}"
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@234 -- # [[ up == up ]]
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@238 -- # (( 1 == 0 ))
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:36:46.387 Found net devices under 0000:86:00.1: cvl_0_1
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}")
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@248 -- # (( 2 == 0 ))
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # [[ tcp == rdma ]]
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # is_hw=yes
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@264 -- # [[ yes == yes ]]
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # [[ tcp == tcp ]]
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # nvmf_tcp_init
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@257 -- # create_target_ns
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up'
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@27 -- # local -gA dev_map
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@28 -- # local -g _dev
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 ))
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev ))
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@33 -- # (( _dev < max + no ))
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@44 -- # ips=()
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns=
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip)))
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]]
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]]
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@61 -- # [[ phy == phy ]]
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@64 -- # initiator=cvl_0_0
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@64 -- # target=cvl_0_1
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@67 -- # [[ phy == veth ]]
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@68 -- # [[ phy == veth ]]
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]]
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns=
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # val_to_ip 167772161
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@11 -- # local val=167772161
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip=10.0.0.1
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0'
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias'
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@210 -- # echo 10.0.0.1
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias
00:36:46.387 10.0.0.1
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # val_to_ip 167772162
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@11 -- # local val=167772162
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip=10.0.0.2
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1'
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias'
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@210 -- # echo 10.0.0.2
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias
00:36:46.387 10.0.0.2
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@75 -- # set_up cvl_0_0
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns=
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@215 -- # [[ -n '' ]]
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up'
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up'
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@78 -- # [[ phy == veth ]]
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@79 -- # [[ phy == veth ]]
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]]
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT'
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 ))
00:36:46.387 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@33 -- # (( _dev < max + no ))
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@38 -- # ping_ips 1
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@96 -- # local pairs=1 pair
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # (( pair = 0 ))
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # (( pair < pairs ))
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@183 -- # get_ip_address initiator0
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # [[ -n '' ]]
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # get_net_dev initiator0
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@107 -- # local dev=initiator0
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]]
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]]
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@110 -- # echo cvl_0_0
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # dev=cvl_0_0
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # ip=10.0.0.1
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]]
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@175 -- # echo 10.0.0.1
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1'
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1
00:36:46.388 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:36:46.388 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.425 ms
00:36:46.388
00:36:46.388 --- 10.0.0.1 ping statistics ---
00:36:46.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:46.388 rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # get_net_dev target0
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@107 -- # local dev=target0
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n target0 ]]
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]]
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@110 -- # echo cvl_0_1
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # dev=cvl_0_1
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias'
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # ip=10.0.0.2
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]]
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@175 -- # echo 10.0.0.2
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@90 -- # [[ -n '' ]]
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2'
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2
00:36:46.388 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:36:46.388 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms
00:36:46.388
00:36:46.388 --- 10.0.0.2 ping statistics ---
00:36:46.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:46.388 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # (( pair++ ))
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # (( pair < pairs ))
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@270 -- # return 0
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # '[' '' == iso ']'
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@306 -- # nvmf_legacy_env
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@187 -- # get_initiator_ip_address ''
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@183 -- # get_ip_address initiator0
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # [[ -n '' ]]
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # get_net_dev initiator0
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@107 -- # local dev=initiator0
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]]
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]]
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@110 -- # echo cvl_0_0
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # dev=cvl_0_0
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # ip=10.0.0.1
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]]
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@175 -- # echo 10.0.0.1
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@183 -- # get_ip_address initiator1
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # [[ -n '' ]]
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # get_net_dev initiator1
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@107 -- # local dev=initiator1
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]]
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n '' ]]
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # return 1
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # dev=
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@169 -- # return 0
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:36:46.388 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD
00:36:46.389 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # get_net_dev target0
00:36:46.389 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@107 -- # local dev=target0
00:36:46.389 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n target0 ]]
00:36:46.389 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]]
00:36:46.389 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@110 -- # echo cvl_0_1
00:36:46.389 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # dev=cvl_0_1
00:36:46.389 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias'
00:36:46.389 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias
00:36:46.389 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # ip=10.0.0.2
00:36:46.389 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]]
00:36:46.389 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@175 -- # echo 10.0.0.2
00:36:46.389 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:36:46.389 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1
00:36:46.389 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD
00:36:46.389 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD
00:36:46.389 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip
00:36:46.389 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:36:46.389 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD
00:36:46.389 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # get_net_dev target1
00:36:46.389 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@107 -- # local dev=target1
00:36:46.389 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n target1 ]]
00:36:46.389 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n '' ]]
00:36:46.389 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # return 1
00:36:46.389 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # dev=
00:36:46.389 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@169 -- # return 0
00:36:46.389 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=
00:36:46.389 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:36:46.389 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]]
00:36:46.389 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]]
00:36:46.389 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:36:46.389 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # '[' tcp == tcp ']'
00:36:46.389 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # modprobe nvme-tcp
00:36:46.389 12:20:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3
00:36:46.389 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt
00:36:46.389 12:20:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable
00:36:46.389 12:20:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:36:46.389 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # nvmfpid=322180
00:36:46.389 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@329 -- # waitforlisten 322180
00:36:46.389 12:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3
00:36:46.389 12:20:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 322180 ']'
00:36:46.389 12:20:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:46.389 12:20:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100
00:36:46.389 12:20:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:46.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:46.389 12:20:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable
00:36:46.389 12:20:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:36:46.389 [2024-12-05 12:20:20.014058] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:36:46.389 [2024-12-05 12:20:20.015071] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization...
00:36:46.389 [2024-12-05 12:20:20.015105] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:36:46.389 [2024-12-05 12:20:20.093356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:36:46.389 [2024-12-05 12:20:20.136469] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:36:46.389 [2024-12-05 12:20:20.136500] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:36:46.389 [2024-12-05 12:20:20.136507] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:36:46.389 [2024-12-05 12:20:20.136513] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:36:46.389 [2024-12-05 12:20:20.136519] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:36:46.389 [2024-12-05 12:20:20.137549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:36:46.389 [2024-12-05 12:20:20.137550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:36:46.389 [2024-12-05 12:20:20.205576] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:36:46.389 [2024-12-05 12:20:20.206108] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:36:46.389 [2024-12-05 12:20:20.206259] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:36:46.389 12:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:36:46.389 12:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0
00:36:46.389 12:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt
00:36:46.389 12:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:46.389 12:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:36:46.389 12:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:36:46.389 12:20:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio
00:36:46.389 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s
00:36:46.389 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]]
00:36:46.389 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000
00:36:46.389 5000+0 records in
00:36:46.389 5000+0 records out
00:36:46.389 10240000 bytes (10 MB, 9.8 MiB) copied, 0.018496 s, 554 MB/s
00:36:46.389 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048
00:36:46.389 12:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:46.389 12:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:36:46.389 AIO0
00:36:46.389 12:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:46.389 12:20:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256
00:36:46.389 12:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:46.389 12:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:36:46.389 [2024-12-05 12:20:20.354353] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:36:46.389 12:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:46.389 12:20:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:36:46.389 12:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:46.389 12:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:36:46.390 12:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:46.390 12:20:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
00:36:46.390 12:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:46.390 12:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:36:46.390 12:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:46.390 12:20:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:36:46.390 12:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:46.390 12:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:36:46.390 [2024-12-05 12:20:20.394712] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:46.390 12:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:46.390 12:20:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1}
00:36:46.390 12:20:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 322180 0
00:36:46.390 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 322180 0 idle
00:36:46.390 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=322180
00:36:46.390 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:36:46.390 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:36:46.390 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:36:46.390 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:36:46.390 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:36:46.390 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:36:46.390 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:36:46.390 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:36:46.390 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:36:46.390 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 322180 -w 256
00:36:46.390 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:36:46.390 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 322180 root 20 0 128.2g 46848 34560 S 6.7 0.0 0:00.25 reactor_0'
00:36:46.390 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 322180 root 20 0 128.2g 46848 34560 S 6.7 0.0 0:00.25 reactor_0
00:36:46.390 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:36:46.390 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7
00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6
00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt
-- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 322180 1 00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 322180 1 idle 00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=322180 00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 322180 -w 256 00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 322184 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.00 reactor_1' 00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 322184 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.00 
reactor_1 00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=322439 00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 322180 0 00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 322180 0 busy 00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=322180 00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 
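The busy/idle probes traced above all reduce to sampling one batch iteration of `top`, pulling the thread's %CPU column, and comparing it to a threshold. A minimal standalone sketch of that parsing step, using a sample line copied from this log (in the real `interrupt/common.sh` the line comes from `top -bHn 1 -p "$pid" -w 256 | grep reactor_0`):

```shell
# Parse the %CPU column (field 9) from a captured `top -bHn 1` thread line,
# as the reactor_is_busy_or_idle helper does above. The sample line and the
# pid in it are taken from this log; values are illustrative.
top_reactor=' 322180 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.44 reactor_0'
cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
cpu_rate=${cpu_rate%.*}          # integer truncation, e.g. 99.9 -> 99
busy_threshold=30
if (( cpu_rate >= busy_threshold )); then
    echo "reactor_0 is busy (${cpu_rate}%)"
fi
```

Note the truncation step: bash arithmetic cannot compare `99.9` directly, so the helper strips the fractional part before testing against the threshold.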
00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 322180 -w 256 00:36:46.650 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:46.910 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 322180 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.44 reactor_0' 00:36:46.910 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 322180 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.44 reactor_0 00:36:46.910 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:46.910 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:46.910 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:36:46.910 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:36:46.910 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:36:46.910 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:36:46.910 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:36:46.910 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:46.910 12:20:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:36:46.910 12:20:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- 
# BUSY_THRESHOLD=30 00:36:46.910 12:20:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 322180 1 00:36:46.910 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 322180 1 busy 00:36:46.910 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=322180 00:36:46.910 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:46.910 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:36:46.910 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:36:46.910 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:46.910 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:36:46.910 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:46.910 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:46.910 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:46.910 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 322180 -w 256 00:36:46.910 12:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:47.169 12:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 322184 root 20 0 128.2g 47616 34560 R 93.3 0.0 0:00.27 reactor_1' 00:36:47.169 12:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 322184 root 20 0 128.2g 47616 34560 R 93.3 0.0 0:00.27 reactor_1 00:36:47.169 12:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:47.169 12:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:47.169 12:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:36:47.169 12:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:36:47.169 12:20:21 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:36:47.169 12:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:36:47.169 12:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:36:47.169 12:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:47.169 12:20:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 322439 00:36:57.146 Initializing NVMe Controllers 00:36:57.146 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:57.146 Controller IO queue size 256, less than required. 00:36:57.146 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:57.146 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:36:57.146 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:36:57.146 Initialization complete. Launching workers. 
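The `spdk_nvme_perf` run launched above used 4096-byte IOs (`-o 4096`), so in the per-core summary that the tool prints, throughput in MiB/s should equal IOPS × 4096 / 2^20, i.e. IOPS / 256. A quick arithmetic cross-check against the per-core IOPS figures reported in this log:

```shell
# Sanity-check the perf summary arithmetic: with -o 4096 (4 KiB per IO),
# MiB/s = IOPS * 4096 / 2^20 = IOPS / 256. The IOPS inputs below are the
# per-core values reported by spdk_nvme_perf in this log.
awk 'BEGIN {
    core2 = 16671.63; core3 = 16450.43
    printf "%.2f %.2f %.2f\n", core2 / 256, core3 / 256, (core2 + core3) / 256
}'
```

The three printed values match the MiB/s columns in the summary (65.12, 64.26, 129.38), confirming the run was pure 4 KiB IO.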
00:36:57.146 ======================================================== 00:36:57.146 Latency(us) 00:36:57.146 Device Information : IOPS MiB/s Average min max 00:36:57.146 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16671.63 65.12 15363.58 4333.51 30595.26 00:36:57.146 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16450.43 64.26 15567.08 8112.20 31269.53 00:36:57.146 ======================================================== 00:36:57.146 Total : 33122.05 129.38 15464.65 4333.51 31269.53 00:36:57.146 00:36:57.146 12:20:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:36:57.146 12:20:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 322180 0 00:36:57.146 12:20:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 322180 0 idle 00:36:57.146 12:20:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=322180 00:36:57.146 12:20:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:57.146 12:20:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:57.146 12:20:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:57.146 12:20:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:57.146 12:20:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:57.146 12:20:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:57.146 12:20:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:57.146 12:20:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:57.146 12:20:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:57.146 12:20:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:57.146 12:20:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 
322180 -w 256 00:36:57.147 12:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 322180 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.24 reactor_0' 00:36:57.147 12:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 322180 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.24 reactor_0 00:36:57.147 12:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:57.147 12:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:57.147 12:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:57.147 12:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:57.147 12:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:57.147 12:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:57.147 12:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:57.147 12:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:57.147 12:20:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:36:57.147 12:20:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 322180 1 00:36:57.147 12:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 322180 1 idle 00:36:57.147 12:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=322180 00:36:57.147 12:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:57.147 12:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:57.147 12:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:57.147 12:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:57.147 12:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:57.147 12:20:31 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:57.147 12:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:57.147 12:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:57.147 12:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:57.147 12:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 322180 -w 256 00:36:57.147 12:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:57.147 12:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 322184 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.01 reactor_1' 00:36:57.147 12:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 322184 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.01 reactor_1 00:36:57.147 12:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:57.147 12:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:57.147 12:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:57.147 12:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:57.147 12:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:57.147 12:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:57.147 12:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:57.147 12:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:57.147 12:20:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:36:57.715 12:20:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
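The `waitforserial` helper invoked above works by polling `lsblk -l -o NAME,SERIAL` every couple of seconds until a block device carrying the expected serial appears. A sketch of the check it performs on each iteration, run here against a sample `lsblk` line rather than a live device (the device name is illustrative; the serial is the one used throughout this log):

```shell
# One iteration of the waitforserial check traced above: count lsblk rows
# whose SERIAL column matches the expected serial. The real helper pipes
# live `lsblk -l -o NAME,SERIAL` output and retries up to 15 times.
serial=SPDKISFASTANDAWESOME
sample_lsblk='nvme0n1 SPDKISFASTANDAWESOME'   # hypothetical lsblk output line
nvme_devices=$(echo "$sample_lsblk" | grep -cw "$serial")
if (( nvme_devices >= 1 )); then
    echo "found $nvme_devices device(s) with serial $serial"
fi
```

`grep -c` returns a count rather than matched text, which is what lets the helper compare against the expected device count instead of just testing for presence.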
00:36:57.715 12:20:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:36:57.715 12:20:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:36:57.715 12:20:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:36:57.715 12:20:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:36:59.618 12:20:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:36:59.618 12:20:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:36:59.618 12:20:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:36:59.618 12:20:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:36:59.618 12:20:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:36:59.618 12:20:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:36:59.618 12:20:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:36:59.618 12:20:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 322180 0 00:36:59.618 12:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 322180 0 idle 00:36:59.618 12:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=322180 00:36:59.618 12:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:59.618 12:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:59.618 12:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:59.618 12:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:59.618 12:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:59.618 12:20:33 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:59.618 12:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:59.618 12:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:59.618 12:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:59.618 12:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 322180 -w 256 00:36:59.618 12:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:59.876 12:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 322180 root 20 0 128.2g 73728 34560 S 6.7 0.0 0:20.48 reactor_0' 00:36:59.876 12:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 322180 root 20 0 128.2g 73728 34560 S 6.7 0.0 0:20.48 reactor_0 00:36:59.876 12:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:59.876 12:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:59.876 12:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:36:59.876 12:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:36:59.876 12:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:59.876 12:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:59.876 12:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:59.876 12:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:59.876 12:20:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:36:59.876 12:20:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 322180 1 00:36:59.876 12:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 322180 1 idle 00:36:59.876 12:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=322180 00:36:59.876 
12:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:59.876 12:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:59.876 12:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:59.876 12:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:59.876 12:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:59.876 12:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:59.876 12:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:59.876 12:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:59.876 12:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:59.876 12:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 322180 -w 256 00:36:59.876 12:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:37:00.134 12:20:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 322184 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:10.09 reactor_1' 00:37:00.134 12:20:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 322184 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:10.09 reactor_1 00:37:00.134 12:20:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:00.134 12:20:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:00.134 12:20:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:00.134 12:20:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:00.134 12:20:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:00.134 12:20:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:00.134 12:20:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:37:00.134 12:20:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:00.134 12:20:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:37:00.392 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:00.392 12:20:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:37:00.392 12:20:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:37:00.392 12:20:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:37:00.392 12:20:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:00.392 12:20:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:37:00.392 12:20:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:00.392 12:20:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:37:00.392 12:20:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:37:00.392 12:20:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:37:00.392 12:20:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@335 -- # nvmfcleanup 00:37:00.392 12:20:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@99 -- # sync 00:37:00.392 12:20:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:37:00.392 12:20:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@102 -- # set +e 00:37:00.392 12:20:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@103 -- # for i in {1..20} 00:37:00.392 12:20:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:37:00.392 rmmod nvme_tcp 00:37:00.392 rmmod nvme_fabrics 00:37:00.392 rmmod nvme_keyring 00:37:00.392 12:20:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:37:00.392 12:20:34 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@106 -- # set -e 00:37:00.392 12:20:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@107 -- # return 0 00:37:00.392 12:20:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # '[' -n 322180 ']' 00:37:00.393 12:20:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@337 -- # killprocess 322180 00:37:00.393 12:20:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 322180 ']' 00:37:00.393 12:20:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 322180 00:37:00.393 12:20:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:37:00.393 12:20:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:00.393 12:20:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 322180 00:37:00.393 12:20:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:00.393 12:20:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:00.393 12:20:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 322180' 00:37:00.393 killing process with pid 322180 00:37:00.393 12:20:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 322180 00:37:00.393 12:20:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 322180 00:37:00.650 12:20:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:37:00.650 12:20:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@342 -- # nvmf_fini 00:37:00.650 12:20:34 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@264 -- # local dev 00:37:00.650 12:20:34 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@267 -- # remove_target_ns 00:37:00.650 12:20:34 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:37:00.650 12:20:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 14> /dev/null' 
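The `killprocess` sequence traced just above guards the kill with a name check: it resolves the command name for the target pid via `ps --no-headers -o comm=` and refuses to signal it if the name turns out to be `sudo`. A sketch of that guard, with the pid and the resolved name taken from this log rather than looked up live (the real helper then runs `kill "$pid"` and waits for it):

```shell
# Sketch of the killprocess guard above: never signal a pid whose command
# name resolves to "sudo". pid and process_name are sample values from
# this log; the live helper obtains process_name from ps.
pid=322180
process_name=reactor_0   # what `ps --no-headers -o comm= "$pid"` returned above
if [ "$process_name" != sudo ]; then
    echo "killing process with pid $pid"   # helper would follow with: kill "$pid"
fi
```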
00:37:00.650 12:20:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_target_ns 00:37:03.178 12:20:36 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@268 -- # delete_main_bridge 00:37:03.178 12:20:36 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:37:03.178 12:20:36 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@130 -- # return 0 00:37:03.178 12:20:36 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:37:03.178 12:20:36 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:37:03.178 12:20:36 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:37:03.178 12:20:36 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:37:03.178 12:20:36 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:37:03.178 12:20:36 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:37:03.178 12:20:36 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:37:03.178 12:20:36 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:37:03.178 12:20:36 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:37:03.178 12:20:36 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:37:03.178 12:20:36 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:37:03.178 12:20:36 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:37:03.178 12:20:36 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:37:03.178 12:20:36 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:37:03.178 12:20:36 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:37:03.178 12:20:36 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:37:03.178 12:20:36 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@283 -- # 
reset_setup_interfaces 00:37:03.178 12:20:36 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@41 -- # _dev=0 00:37:03.178 12:20:36 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@41 -- # dev_map=() 00:37:03.178 12:20:36 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@284 -- # iptr 00:37:03.178 12:20:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@542 -- # iptables-save 00:37:03.178 12:20:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:37:03.178 12:20:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@542 -- # iptables-restore 00:37:03.178 00:37:03.178 real 0m23.049s 00:37:03.178 user 0m39.891s 00:37:03.178 sys 0m8.255s 00:37:03.178 12:20:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:03.178 12:20:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:03.178 ************************************ 00:37:03.178 END TEST nvmf_interrupt 00:37:03.178 ************************************ 00:37:03.178 00:37:03.178 real 27m18.440s 00:37:03.178 user 56m35.742s 00:37:03.178 sys 9m9.792s 00:37:03.179 12:20:36 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:03.179 12:20:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:03.179 ************************************ 00:37:03.179 END TEST nvmf_tcp 00:37:03.179 ************************************ 00:37:03.179 12:20:36 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:37:03.179 12:20:36 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:37:03.179 12:20:36 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:03.179 12:20:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:03.179 12:20:36 -- common/autotest_common.sh@10 -- # set +x 00:37:03.179 ************************************ 00:37:03.179 START TEST spdkcli_nvmf_tcp 00:37:03.179 ************************************ 00:37:03.179 12:20:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:37:03.179 * Looking for test storage... 00:37:03.179 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:03.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:03.179 --rc genhtml_branch_coverage=1 00:37:03.179 --rc genhtml_function_coverage=1 00:37:03.179 --rc genhtml_legend=1 00:37:03.179 --rc geninfo_all_blocks=1 00:37:03.179 --rc geninfo_unexecuted_blocks=1 00:37:03.179 00:37:03.179 ' 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:03.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:03.179 --rc genhtml_branch_coverage=1 00:37:03.179 --rc genhtml_function_coverage=1 00:37:03.179 --rc genhtml_legend=1 00:37:03.179 --rc geninfo_all_blocks=1 00:37:03.179 --rc 
geninfo_unexecuted_blocks=1 00:37:03.179 00:37:03.179 ' 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:03.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:03.179 --rc genhtml_branch_coverage=1 00:37:03.179 --rc genhtml_function_coverage=1 00:37:03.179 --rc genhtml_legend=1 00:37:03.179 --rc geninfo_all_blocks=1 00:37:03.179 --rc geninfo_unexecuted_blocks=1 00:37:03.179 00:37:03.179 ' 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:03.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:03.179 --rc genhtml_branch_coverage=1 00:37:03.179 --rc genhtml_function_coverage=1 00:37:03.179 --rc genhtml_legend=1 00:37:03.179 --rc geninfo_all_blocks=1 00:37:03.179 --rc geninfo_unexecuted_blocks=1 00:37:03.179 00:37:03.179 ' 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:03.179 
12:20:37 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- nvmf/common.sh@50 -- # : 0 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:37:03.179 
12:20:37 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:37:03.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- nvmf/common.sh@54 -- # have_pci_nics=0 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=325125 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 325125 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 325125 ']' 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:03.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:03.179 12:20:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:03.179 [2024-12-05 12:20:37.183318] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:37:03.179 [2024-12-05 12:20:37.183372] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid325125 ] 00:37:03.179 [2024-12-05 12:20:37.254907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:03.179 [2024-12-05 12:20:37.298215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:03.179 [2024-12-05 12:20:37.298218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:03.438 12:20:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:03.438 12:20:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:37:03.438 12:20:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:37:03.438 12:20:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:03.438 12:20:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:03.438 12:20:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:37:03.438 12:20:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:37:03.438 12:20:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:37:03.438 12:20:37 spdkcli_nvmf_tcp 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:37:03.438 12:20:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:03.438 12:20:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:37:03.438 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:37:03.438 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:37:03.438 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:37:03.438 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:37:03.438 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:37:03.438 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:37:03.438 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:03.438 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:37:03.438 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:37:03.438 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:03.438 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:03.438 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:37:03.438 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:03.438 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:03.438 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:37:03.438 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:03.438 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:37:03.438 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:03.438 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:03.438 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:37:03.438 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:37:03.438 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:37:03.438 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:37:03.438 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:03.438 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:37:03.438 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:37:03.438 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:37:03.438 ' 00:37:05.968 [2024-12-05 12:20:40.118009] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:07.344 [2024-12-05 12:20:41.450443] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:37:09.876 [2024-12-05 12:20:43.925924] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4261 *** 00:37:12.418 [2024-12-05 12:20:46.080600] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:37:13.795 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:37:13.795 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:37:13.795 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:37:13.795 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:37:13.795 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:37:13.795 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:37:13.795 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:37:13.795 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:13.795 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:37:13.795 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:37:13.795 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:13.795 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:13.795 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:37:13.795 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:13.795 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 
00:37:13.795 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:37:13.795 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:13.795 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:13.795 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:13.795 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:13.795 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:37:13.795 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:37:13.795 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:13.795 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:37:13.795 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:13.795 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:37:13.795 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:37:13.795 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:37:13.795 12:20:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:37:13.795 12:20:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:13.795 
12:20:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:13.795 12:20:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:37:13.795 12:20:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:13.795 12:20:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:13.795 12:20:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:37:13.795 12:20:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:37:14.363 12:20:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:37:14.363 12:20:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:37:14.363 12:20:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:37:14.363 12:20:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:14.363 12:20:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:14.363 12:20:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:37:14.363 12:20:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:14.363 12:20:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:14.363 12:20:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:37:14.363 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:37:14.363 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:14.363 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:37:14.363 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:37:14.363 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:37:14.363 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:37:14.363 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:14.363 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:37:14.363 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:37:14.363 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:37:14.363 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:37:14.363 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:37:14.363 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:37:14.363 ' 00:37:20.932 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:37:20.932 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:37:20.932 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:20.932 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:37:20.932 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:37:20.932 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:37:20.932 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:37:20.932 Executing command: ['/nvmf/subsystem 
delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:20.932 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:37:20.932 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:37:20.932 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:37:20.932 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:37:20.932 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:37:20.932 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:37:20.932 12:20:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:37:20.932 12:20:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:20.932 12:20:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:20.932 12:20:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 325125 00:37:20.932 12:20:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 325125 ']' 00:37:20.932 12:20:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 325125 00:37:20.932 12:20:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:37:20.932 12:20:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:20.932 12:20:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 325125 00:37:20.932 12:20:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:20.932 12:20:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:20.932 12:20:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 325125' 00:37:20.932 killing process with pid 325125 00:37:20.932 12:20:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 325125 00:37:20.932 12:20:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 325125 00:37:20.932 12:20:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # 
cleanup 00:37:20.932 12:20:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:37:20.932 12:20:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 325125 ']' 00:37:20.932 12:20:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 325125 00:37:20.932 12:20:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 325125 ']' 00:37:20.932 12:20:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 325125 00:37:20.932 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (325125) - No such process 00:37:20.932 12:20:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 325125 is not found' 00:37:20.932 Process with pid 325125 is not found 00:37:20.932 12:20:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:37:20.932 12:20:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:37:20.932 12:20:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:37:20.932 00:37:20.932 real 0m17.286s 00:37:20.932 user 0m38.059s 00:37:20.932 sys 0m0.788s 00:37:20.932 12:20:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:20.932 12:20:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:20.932 ************************************ 00:37:20.932 END TEST spdkcli_nvmf_tcp 00:37:20.932 ************************************ 00:37:20.932 12:20:54 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:37:20.932 12:20:54 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:20.932 12:20:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:20.932 12:20:54 -- common/autotest_common.sh@10 
-- # set +x 00:37:20.932 ************************************ 00:37:20.932 START TEST nvmf_identify_passthru 00:37:20.932 ************************************ 00:37:20.932 12:20:54 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:37:20.932 * Looking for test storage... 00:37:20.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:20.932 12:20:54 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:20.932 12:20:54 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:37:20.932 12:20:54 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:20.932 12:20:54 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:20.932 12:20:54 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:20.932 12:20:54 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:20.932 12:20:54 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:20.932 12:20:54 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:37:20.932 12:20:54 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:37:20.932 12:20:54 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:37:20.932 12:20:54 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:37:20.932 12:20:54 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:37:20.932 12:20:54 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:37:20.932 12:20:54 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:37:20.932 12:20:54 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:20.932 12:20:54 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:37:20.932 12:20:54 
nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:37:20.932 12:20:54 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:20.932 12:20:54 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:20.932 12:20:54 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:37:20.932 12:20:54 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:37:20.932 12:20:54 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:20.932 12:20:54 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:37:20.932 12:20:54 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:37:20.932 12:20:54 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:37:20.932 12:20:54 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:37:20.932 12:20:54 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:20.932 12:20:54 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:37:20.932 12:20:54 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:37:20.932 12:20:54 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:20.932 12:20:54 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:20.932 12:20:54 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:37:20.932 12:20:54 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:20.932 12:20:54 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:20.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:20.932 --rc genhtml_branch_coverage=1 00:37:20.932 --rc genhtml_function_coverage=1 00:37:20.932 --rc genhtml_legend=1 00:37:20.932 --rc geninfo_all_blocks=1 00:37:20.932 --rc geninfo_unexecuted_blocks=1 00:37:20.932 00:37:20.932 ' 00:37:20.932 
12:20:54 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:20.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:20.932 --rc genhtml_branch_coverage=1 00:37:20.932 --rc genhtml_function_coverage=1 00:37:20.932 --rc genhtml_legend=1 00:37:20.932 --rc geninfo_all_blocks=1 00:37:20.932 --rc geninfo_unexecuted_blocks=1 00:37:20.932 00:37:20.932 ' 00:37:20.932 12:20:54 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:20.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:20.932 --rc genhtml_branch_coverage=1 00:37:20.932 --rc genhtml_function_coverage=1 00:37:20.932 --rc genhtml_legend=1 00:37:20.932 --rc geninfo_all_blocks=1 00:37:20.933 --rc geninfo_unexecuted_blocks=1 00:37:20.933 00:37:20.933 ' 00:37:20.933 12:20:54 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:20.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:20.933 --rc genhtml_branch_coverage=1 00:37:20.933 --rc genhtml_function_coverage=1 00:37:20.933 --rc genhtml_legend=1 00:37:20.933 --rc geninfo_all_blocks=1 00:37:20.933 --rc geninfo_unexecuted_blocks=1 00:37:20.933 00:37:20.933 ' 00:37:20.933 12:20:54 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:20.933 12:20:54 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:37:20.933 12:20:54 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:20.933 12:20:54 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:20.933 12:20:54 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:20.933 12:20:54 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:20.933 12:20:54 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:20.933 12:20:54 nvmf_identify_passthru -- nvmf/common.sh@13 -- # 
NVMF_TRANSPORT_OPTS= 00:37:20.933 12:20:54 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:20.933 12:20:54 nvmf_identify_passthru -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:37:20.933 12:20:54 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:37:20.933 12:20:54 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:37:20.933 12:20:54 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:20.933 12:20:54 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:37:20.933 12:20:54 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:37:20.933 12:20:54 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:20.933 12:20:54 nvmf_identify_passthru -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:20.933 12:20:54 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:37:20.933 12:20:54 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:20.933 12:20:54 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:20.933 12:20:54 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:20.933 12:20:54 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.933 12:20:54 nvmf_identify_passthru -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.933 12:20:54 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.933 12:20:54 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:37:20.933 12:20:54 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.933 12:20:54 nvmf_identify_passthru -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:37:20.933 12:20:54 nvmf_identify_passthru -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:37:20.933 12:20:54 nvmf_identify_passthru -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:37:20.933 12:20:54 nvmf_identify_passthru -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:37:20.933 12:20:54 nvmf_identify_passthru -- nvmf/common.sh@50 -- # : 0 00:37:20.933 12:20:54 
nvmf_identify_passthru -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:37:20.933 12:20:54 nvmf_identify_passthru -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:37:20.933 12:20:54 nvmf_identify_passthru -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:37:20.933 12:20:54 nvmf_identify_passthru -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:20.933 12:20:54 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:20.933 12:20:54 nvmf_identify_passthru -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:37:20.933 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:37:20.933 12:20:54 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:37:20.933 12:20:54 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:37:20.933 12:20:54 nvmf_identify_passthru -- nvmf/common.sh@54 -- # have_pci_nics=0 00:37:20.933 12:20:54 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:20.933 12:20:54 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:37:20.933 12:20:54 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:20.933 12:20:54 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:20.933 12:20:54 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:20.933 12:20:54 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.933 12:20:54 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.933 12:20:54 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.933 12:20:54 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:37:20.933 12:20:54 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.933 12:20:54 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:37:20.933 12:20:54 nvmf_identify_passthru -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:37:20.933 12:20:54 nvmf_identify_passthru -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:20.933 12:20:54 nvmf_identify_passthru -- nvmf/common.sh@296 -- # prepare_net_devs 00:37:20.933 12:20:54 nvmf_identify_passthru -- nvmf/common.sh@258 -- # local -g is_hw=no 00:37:20.933 12:20:54 nvmf_identify_passthru -- nvmf/common.sh@260 -- # remove_target_ns 00:37:20.933 12:20:54 nvmf_identify_passthru -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:37:20.933 12:20:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:37:20.933 12:20:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_target_ns 00:37:20.933 12:20:54 nvmf_identify_passthru -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:37:20.933 12:20:54 nvmf_identify_passthru -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:37:20.933 12:20:54 nvmf_identify_passthru -- nvmf/common.sh@125 -- # xtrace_disable 00:37:20.933 12:20:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:26.209 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:26.209 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@131 -- # pci_devs=() 00:37:26.209 12:21:00 nvmf_identify_passthru -- 
nvmf/common.sh@131 -- # local -a pci_devs 00:37:26.209 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@132 -- # pci_net_devs=() 00:37:26.209 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:37:26.209 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@133 -- # pci_drivers=() 00:37:26.209 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@133 -- # local -A pci_drivers 00:37:26.209 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@135 -- # net_devs=() 00:37:26.209 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@135 -- # local -ga net_devs 00:37:26.209 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@136 -- # e810=() 00:37:26.209 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@136 -- # local -ga e810 00:37:26.209 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@137 -- # x722=() 00:37:26.209 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@137 -- # local -ga x722 00:37:26.209 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@138 -- # mlx=() 00:37:26.209 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@138 -- # local -ga mlx 00:37:26.209 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:26.209 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:26.209 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:26.209 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:26.209 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:26.209 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:26.209 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:26.209 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 
00:37:26.209 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:26.209 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:26.209 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:26.209 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:26.209 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:37:26.209 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:37:26.209 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:37:26.209 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:37:26.209 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:37:26.209 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:37:26.210 Found 0000:86:00.0 (0x8086 - 0x159b) 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:37:26.210 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@234 -- # [[ up == up ]] 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:37:26.210 Found net devices under 0000:86:00.0: cvl_0_0 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:26.210 
12:21:00 nvmf_identify_passthru -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@234 -- # [[ up == up ]] 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:37:26.210 Found net devices under 0000:86:00.1: cvl_0_1 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@262 -- # is_hw=yes 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@257 -- # create_target_ns 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@148 -- # set_up lo 
NVMF_TARGET_NS_CMD 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@27 -- # local -gA dev_map 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@28 -- # local -g _dev 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@44 -- # ips=() 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 
00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@11 -- # local val=167772161 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee 
/sys/class/net/cvl_0_0/ifalias' 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:37:26.210 10.0.0.1 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@11 -- # local val=167772162 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:37:26.210 10.0.0.2 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@215 -- # [[ -n '' ]] 
00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:37:26.210 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@38 -- # ping_ips 1 00:37:26.211 12:21:00 nvmf_identify_passthru -- 
nvmf/setup.sh@96 -- # local pairs=1 pair 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@107 -- # local dev=initiator0 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@90 -- 
# [[ -n NVMF_TARGET_NS_CMD ]] 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:37:26.211 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:26.211 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.448 ms 00:37:26.211 00:37:26.211 --- 10.0.0.1 ping statistics --- 00:37:26.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:26.211 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # get_net_dev target0 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@107 -- # local dev=target0 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias' 00:37:26.211 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:37:26.470 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:26.470 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:37:26.470 00:37:26.470 --- 10.0.0.2 ping statistics --- 00:37:26.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:26.470 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # (( pair++ )) 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@270 -- # return 0 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:37:26.470 12:21:00 
nvmf_identify_passthru -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@107 -- # local dev=initiator0 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:37:26.470 12:21:00 
nvmf_identify_passthru -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@107 -- # local dev=initiator1 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # return 1 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # dev= 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@169 -- # return 0 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # get_net_dev target0 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@107 -- # local dev=target0 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # eval 'ip 
netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # get_net_dev target1 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@107 -- # local dev=target1 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # return 1 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # dev= 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@169 -- # return 0 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:26.470 12:21:00 
nvmf_identify_passthru -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:37:26.470 12:21:00 nvmf_identify_passthru -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:37:26.470 12:21:00 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:37:26.470 12:21:00 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:26.470 12:21:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:26.470 12:21:00 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:37:26.470 12:21:00 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:37:26.470 12:21:00 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:37:26.470 12:21:00 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:37:26.470 12:21:00 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:37:26.470 12:21:00 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:37:26.470 12:21:00 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:37:26.471 12:21:00 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:37:26.471 12:21:00 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:37:26.471 12:21:00 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:37:26.471 12:21:00 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:37:26.471 12:21:00 
nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:37:26.471 12:21:00 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:37:26.471 12:21:00 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:37:26.471 12:21:00 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:37:26.471 12:21:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:37:26.471 12:21:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:37:26.471 12:21:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:37:31.794 12:21:05 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLN951000C61P6AGN 00:37:31.795 12:21:05 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:37:31.795 12:21:05 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:37:31.795 12:21:05 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:37:35.985 12:21:10 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:37:35.985 12:21:10 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:37:35.985 12:21:10 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:35.985 12:21:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:35.985 12:21:10 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:37:35.985 12:21:10 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:35.985 12:21:10 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:35.985 12:21:10 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=333141 00:37:35.985 12:21:10 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:37:35.986 12:21:10 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:35.986 12:21:10 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 333141 00:37:35.986 12:21:10 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 333141 ']' 00:37:35.986 12:21:10 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:35.986 12:21:10 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:35.986 12:21:10 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:35.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:35.986 12:21:10 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:35.986 12:21:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:36.244 [2024-12-05 12:21:10.224818] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:37:36.244 [2024-12-05 12:21:10.224866] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:36.244 [2024-12-05 12:21:10.301951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:36.244 [2024-12-05 12:21:10.344944] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:36.244 [2024-12-05 12:21:10.344980] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:36.244 [2024-12-05 12:21:10.344988] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:36.244 [2024-12-05 12:21:10.344994] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:36.244 [2024-12-05 12:21:10.344999] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:36.244 [2024-12-05 12:21:10.346519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:36.244 [2024-12-05 12:21:10.346626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:36.244 [2024-12-05 12:21:10.346730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:36.244 [2024-12-05 12:21:10.346731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:36.244 12:21:10 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:36.244 12:21:10 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:37:36.244 12:21:10 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:37:36.244 12:21:10 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.244 12:21:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:36.244 INFO: Log level set to 20 00:37:36.244 INFO: Requests: 00:37:36.244 { 00:37:36.244 "jsonrpc": "2.0", 00:37:36.244 "method": "nvmf_set_config", 00:37:36.244 "id": 1, 00:37:36.244 "params": { 00:37:36.244 "admin_cmd_passthru": { 00:37:36.244 "identify_ctrlr": true 00:37:36.244 } 00:37:36.244 } 00:37:36.244 } 00:37:36.244 00:37:36.244 INFO: response: 00:37:36.244 { 00:37:36.244 "jsonrpc": "2.0", 00:37:36.244 "id": 1, 00:37:36.244 "result": true 00:37:36.244 } 00:37:36.244 00:37:36.244 12:21:10 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.244 12:21:10 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:37:36.244 12:21:10 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.244 12:21:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:36.244 INFO: Setting log level to 20 00:37:36.244 INFO: Setting log level to 20 00:37:36.244 INFO: Log level set to 20 00:37:36.244 INFO: Log level set to 20 00:37:36.244 
INFO: Requests: 00:37:36.244 { 00:37:36.244 "jsonrpc": "2.0", 00:37:36.244 "method": "framework_start_init", 00:37:36.244 "id": 1 00:37:36.244 } 00:37:36.244 00:37:36.244 INFO: Requests: 00:37:36.244 { 00:37:36.244 "jsonrpc": "2.0", 00:37:36.244 "method": "framework_start_init", 00:37:36.244 "id": 1 00:37:36.244 } 00:37:36.244 00:37:36.502 [2024-12-05 12:21:10.454838] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:37:36.502 INFO: response: 00:37:36.502 { 00:37:36.502 "jsonrpc": "2.0", 00:37:36.502 "id": 1, 00:37:36.502 "result": true 00:37:36.502 } 00:37:36.502 00:37:36.502 INFO: response: 00:37:36.502 { 00:37:36.502 "jsonrpc": "2.0", 00:37:36.502 "id": 1, 00:37:36.502 "result": true 00:37:36.502 } 00:37:36.502 00:37:36.502 12:21:10 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.502 12:21:10 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:36.502 12:21:10 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.502 12:21:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:36.502 INFO: Setting log level to 40 00:37:36.502 INFO: Setting log level to 40 00:37:36.502 INFO: Setting log level to 40 00:37:36.502 [2024-12-05 12:21:10.468137] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:36.502 12:21:10 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.502 12:21:10 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:37:36.502 12:21:10 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:36.502 12:21:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:36.502 12:21:10 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:37:36.502 12:21:10 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.502 12:21:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:39.791 Nvme0n1 00:37:39.791 12:21:13 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.791 12:21:13 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:37:39.791 12:21:13 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.791 12:21:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:39.791 12:21:13 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.791 12:21:13 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:37:39.791 12:21:13 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.791 12:21:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:39.791 12:21:13 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.791 12:21:13 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:39.791 12:21:13 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.791 12:21:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:39.791 [2024-12-05 12:21:13.372604] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:39.791 12:21:13 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.791 12:21:13 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:37:39.791 12:21:13 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.791 12:21:13 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:39.791 [ 00:37:39.791 { 00:37:39.791 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:37:39.791 "subtype": "Discovery", 00:37:39.791 "listen_addresses": [], 00:37:39.791 "allow_any_host": true, 00:37:39.791 "hosts": [] 00:37:39.791 }, 00:37:39.791 { 00:37:39.791 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:37:39.791 "subtype": "NVMe", 00:37:39.791 "listen_addresses": [ 00:37:39.791 { 00:37:39.791 "trtype": "TCP", 00:37:39.791 "adrfam": "IPv4", 00:37:39.791 "traddr": "10.0.0.2", 00:37:39.791 "trsvcid": "4420" 00:37:39.791 } 00:37:39.791 ], 00:37:39.791 "allow_any_host": true, 00:37:39.791 "hosts": [], 00:37:39.791 "serial_number": "SPDK00000000000001", 00:37:39.791 "model_number": "SPDK bdev Controller", 00:37:39.791 "max_namespaces": 1, 00:37:39.791 "min_cntlid": 1, 00:37:39.791 "max_cntlid": 65519, 00:37:39.791 "namespaces": [ 00:37:39.791 { 00:37:39.791 "nsid": 1, 00:37:39.791 "bdev_name": "Nvme0n1", 00:37:39.791 "name": "Nvme0n1", 00:37:39.791 "nguid": "DF037BBC76FC44E686606812E80C17B4", 00:37:39.791 "uuid": "df037bbc-76fc-44e6-8660-6812e80c17b4" 00:37:39.791 } 00:37:39.791 ] 00:37:39.791 } 00:37:39.791 ] 00:37:39.791 12:21:13 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.791 12:21:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:39.791 12:21:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:37:39.791 12:21:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:37:39.791 12:21:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLN951000C61P6AGN 00:37:39.791 12:21:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:39.791 12:21:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:37:39.791 12:21:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:37:39.791 12:21:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:37:39.791 12:21:13 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLN951000C61P6AGN '!=' PHLN951000C61P6AGN ']' 00:37:39.791 12:21:13 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:37:39.791 12:21:13 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:39.791 12:21:13 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.791 12:21:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:39.791 12:21:13 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.791 12:21:13 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:37:39.791 12:21:13 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:37:39.791 12:21:13 nvmf_identify_passthru -- nvmf/common.sh@335 -- # nvmfcleanup 00:37:39.791 12:21:13 nvmf_identify_passthru -- nvmf/common.sh@99 -- # sync 00:37:39.791 12:21:13 nvmf_identify_passthru -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:37:39.791 12:21:13 nvmf_identify_passthru -- nvmf/common.sh@102 -- # set +e 00:37:39.791 12:21:13 nvmf_identify_passthru -- nvmf/common.sh@103 -- # for i in {1..20} 00:37:39.791 12:21:13 nvmf_identify_passthru -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:37:39.791 rmmod nvme_tcp 00:37:39.791 rmmod nvme_fabrics 00:37:39.791 rmmod nvme_keyring 00:37:39.791 12:21:13 
nvmf_identify_passthru -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:37:39.791 12:21:13 nvmf_identify_passthru -- nvmf/common.sh@106 -- # set -e 00:37:39.791 12:21:13 nvmf_identify_passthru -- nvmf/common.sh@107 -- # return 0 00:37:39.791 12:21:13 nvmf_identify_passthru -- nvmf/common.sh@336 -- # '[' -n 333141 ']' 00:37:39.791 12:21:13 nvmf_identify_passthru -- nvmf/common.sh@337 -- # killprocess 333141 00:37:39.791 12:21:13 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 333141 ']' 00:37:39.791 12:21:13 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 333141 00:37:39.791 12:21:13 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:37:39.791 12:21:13 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:39.791 12:21:13 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 333141 00:37:39.791 12:21:13 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:39.791 12:21:13 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:39.791 12:21:13 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 333141' 00:37:39.791 killing process with pid 333141 00:37:39.791 12:21:13 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 333141 00:37:39.791 12:21:13 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 333141 00:37:41.693 12:21:15 nvmf_identify_passthru -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:37:41.693 12:21:15 nvmf_identify_passthru -- nvmf/common.sh@342 -- # nvmf_fini 00:37:41.693 12:21:15 nvmf_identify_passthru -- nvmf/setup.sh@264 -- # local dev 00:37:41.693 12:21:15 nvmf_identify_passthru -- nvmf/setup.sh@267 -- # remove_target_ns 00:37:41.693 12:21:15 nvmf_identify_passthru -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:37:41.693 12:21:15 nvmf_identify_passthru -- 
common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:37:41.693 12:21:15 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_target_ns 00:37:44.227 12:21:17 nvmf_identify_passthru -- nvmf/setup.sh@268 -- # delete_main_bridge 00:37:44.227 12:21:17 nvmf_identify_passthru -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:37:44.227 12:21:17 nvmf_identify_passthru -- nvmf/setup.sh@130 -- # return 0 00:37:44.227 12:21:17 nvmf_identify_passthru -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:37:44.227 12:21:17 nvmf_identify_passthru -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:37:44.227 12:21:17 nvmf_identify_passthru -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:37:44.227 12:21:17 nvmf_identify_passthru -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:37:44.227 12:21:17 nvmf_identify_passthru -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:37:44.227 12:21:17 nvmf_identify_passthru -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:37:44.227 12:21:17 nvmf_identify_passthru -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:37:44.227 12:21:17 nvmf_identify_passthru -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:37:44.227 12:21:17 nvmf_identify_passthru -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:37:44.227 12:21:17 nvmf_identify_passthru -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:37:44.227 12:21:17 nvmf_identify_passthru -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:37:44.227 12:21:17 nvmf_identify_passthru -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:37:44.227 12:21:17 nvmf_identify_passthru -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:37:44.227 12:21:17 nvmf_identify_passthru -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:37:44.227 12:21:17 nvmf_identify_passthru -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:37:44.227 12:21:17 nvmf_identify_passthru -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:37:44.227 12:21:17 
nvmf_identify_passthru -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:37:44.227 12:21:17 nvmf_identify_passthru -- nvmf/setup.sh@41 -- # _dev=0 00:37:44.227 12:21:17 nvmf_identify_passthru -- nvmf/setup.sh@41 -- # dev_map=() 00:37:44.227 12:21:17 nvmf_identify_passthru -- nvmf/setup.sh@284 -- # iptr 00:37:44.227 12:21:17 nvmf_identify_passthru -- nvmf/common.sh@542 -- # iptables-save 00:37:44.227 12:21:17 nvmf_identify_passthru -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:37:44.227 12:21:17 nvmf_identify_passthru -- nvmf/common.sh@542 -- # iptables-restore 00:37:44.227 00:37:44.227 real 0m23.625s 00:37:44.227 user 0m29.778s 00:37:44.227 sys 0m6.273s 00:37:44.227 12:21:17 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:44.227 12:21:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:44.227 ************************************ 00:37:44.227 END TEST nvmf_identify_passthru 00:37:44.227 ************************************ 00:37:44.227 12:21:17 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:37:44.227 12:21:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:44.227 12:21:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:44.227 12:21:17 -- common/autotest_common.sh@10 -- # set +x 00:37:44.227 ************************************ 00:37:44.227 START TEST nvmf_dif 00:37:44.227 ************************************ 00:37:44.227 12:21:17 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:37:44.227 * Looking for test storage... 
00:37:44.227 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:44.227 12:21:18 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:44.227 12:21:18 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:37:44.227 12:21:18 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:44.227 12:21:18 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:44.227 12:21:18 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:44.227 12:21:18 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:44.227 12:21:18 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:44.227 12:21:18 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:37:44.227 12:21:18 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:37:44.227 12:21:18 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:37:44.227 12:21:18 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:37:44.227 12:21:18 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:37:44.227 12:21:18 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:37:44.227 12:21:18 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:37:44.227 12:21:18 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:44.227 12:21:18 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:37:44.227 12:21:18 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:37:44.227 12:21:18 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:44.227 12:21:18 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:44.227 12:21:18 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:37:44.227 12:21:18 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:37:44.227 12:21:18 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:44.227 12:21:18 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:37:44.227 12:21:18 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:37:44.227 12:21:18 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:37:44.227 12:21:18 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:37:44.227 12:21:18 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:44.227 12:21:18 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:37:44.227 12:21:18 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:37:44.227 12:21:18 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:44.227 12:21:18 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:44.227 12:21:18 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:37:44.227 12:21:18 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:44.227 12:21:18 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:44.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:44.227 --rc genhtml_branch_coverage=1 00:37:44.227 --rc genhtml_function_coverage=1 00:37:44.227 --rc genhtml_legend=1 00:37:44.227 --rc geninfo_all_blocks=1 00:37:44.227 --rc geninfo_unexecuted_blocks=1 00:37:44.227 00:37:44.227 ' 00:37:44.227 12:21:18 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:44.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:44.227 --rc genhtml_branch_coverage=1 00:37:44.227 --rc genhtml_function_coverage=1 00:37:44.227 --rc genhtml_legend=1 00:37:44.227 --rc geninfo_all_blocks=1 00:37:44.227 --rc geninfo_unexecuted_blocks=1 00:37:44.227 00:37:44.227 ' 00:37:44.227 12:21:18 nvmf_dif -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:37:44.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:44.227 --rc genhtml_branch_coverage=1 00:37:44.227 --rc genhtml_function_coverage=1 00:37:44.227 --rc genhtml_legend=1 00:37:44.227 --rc geninfo_all_blocks=1 00:37:44.227 --rc geninfo_unexecuted_blocks=1 00:37:44.227 00:37:44.227 ' 00:37:44.227 12:21:18 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:44.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:44.228 --rc genhtml_branch_coverage=1 00:37:44.228 --rc genhtml_function_coverage=1 00:37:44.228 --rc genhtml_legend=1 00:37:44.228 --rc geninfo_all_blocks=1 00:37:44.228 --rc geninfo_unexecuted_blocks=1 00:37:44.228 00:37:44.228 ' 00:37:44.228 12:21:18 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:44.228 12:21:18 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:37:44.228 12:21:18 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:44.228 12:21:18 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:44.228 12:21:18 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:44.228 12:21:18 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:44.228 12:21:18 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:44.228 12:21:18 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:37:44.228 12:21:18 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:44.228 12:21:18 nvmf_dif -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:37:44.228 12:21:18 nvmf_dif -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:37:44.228 12:21:18 nvmf_dif -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:37:44.228 12:21:18 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:44.228 12:21:18 nvmf_dif -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 
00:37:44.228 12:21:18 nvmf_dif -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:37:44.228 12:21:18 nvmf_dif -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:44.228 12:21:18 nvmf_dif -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:44.228 12:21:18 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:37:44.228 12:21:18 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:44.228 12:21:18 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:44.228 12:21:18 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:44.228 12:21:18 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:44.228 12:21:18 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:44.228 12:21:18 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:44.228 12:21:18 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:37:44.228 12:21:18 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:44.228 12:21:18 nvmf_dif -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:37:44.228 12:21:18 nvmf_dif -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:37:44.228 12:21:18 nvmf_dif -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:37:44.228 12:21:18 nvmf_dif -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:37:44.228 12:21:18 nvmf_dif -- nvmf/common.sh@50 -- # : 0 00:37:44.228 12:21:18 nvmf_dif -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:37:44.228 12:21:18 nvmf_dif -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:37:44.228 12:21:18 nvmf_dif -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:37:44.228 12:21:18 nvmf_dif -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:44.228 12:21:18 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:44.228 12:21:18 nvmf_dif -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:37:44.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression 
expected 00:37:44.228 12:21:18 nvmf_dif -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:37:44.228 12:21:18 nvmf_dif -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:37:44.228 12:21:18 nvmf_dif -- nvmf/common.sh@54 -- # have_pci_nics=0 00:37:44.228 12:21:18 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:37:44.228 12:21:18 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:37:44.228 12:21:18 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:37:44.228 12:21:18 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:37:44.228 12:21:18 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:37:44.228 12:21:18 nvmf_dif -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:37:44.228 12:21:18 nvmf_dif -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:44.228 12:21:18 nvmf_dif -- nvmf/common.sh@296 -- # prepare_net_devs 00:37:44.228 12:21:18 nvmf_dif -- nvmf/common.sh@258 -- # local -g is_hw=no 00:37:44.228 12:21:18 nvmf_dif -- nvmf/common.sh@260 -- # remove_target_ns 00:37:44.228 12:21:18 nvmf_dif -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:37:44.228 12:21:18 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:37:44.228 12:21:18 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_target_ns 00:37:44.228 12:21:18 nvmf_dif -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:37:44.228 12:21:18 nvmf_dif -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:37:44.228 12:21:18 nvmf_dif -- nvmf/common.sh@125 -- # xtrace_disable 00:37:44.228 12:21:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@131 -- # pci_devs=() 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@131 -- # local -a pci_devs 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@132 -- # pci_net_devs=() 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:37:50.798 
12:21:23 nvmf_dif -- nvmf/common.sh@133 -- # pci_drivers=() 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@133 -- # local -A pci_drivers 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@135 -- # net_devs=() 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@135 -- # local -ga net_devs 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@136 -- # e810=() 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@136 -- # local -ga e810 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@137 -- # x722=() 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@137 -- # local -ga x722 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@138 -- # mlx=() 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@138 -- # local -ga mlx 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@162 -- # 
pci_devs+=("${e810[@]}") 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:37:50.798 Found 0000:86:00.0 (0x8086 - 0x159b) 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:37:50.798 Found 0000:86:00.1 (0x8086 - 0x159b) 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:37:50.798 12:21:23 nvmf_dif -- nvmf/common.sh@226 -- # for pci in 
"${pci_devs[@]}" 00:37:50.799 12:21:23 nvmf_dif -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:50.799 12:21:23 nvmf_dif -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:37:50.799 12:21:23 nvmf_dif -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:50.799 12:21:23 nvmf_dif -- nvmf/common.sh@234 -- # [[ up == up ]] 00:37:50.799 12:21:23 nvmf_dif -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:37:50.799 12:21:23 nvmf_dif -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:50.799 12:21:23 nvmf_dif -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:37:50.799 Found net devices under 0000:86:00.0: cvl_0_0 00:37:50.799 12:21:23 nvmf_dif -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:37:50.799 12:21:23 nvmf_dif -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:37:50.799 12:21:23 nvmf_dif -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:50.799 12:21:23 nvmf_dif -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:37:50.799 12:21:23 nvmf_dif -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:50.799 12:21:23 nvmf_dif -- nvmf/common.sh@234 -- # [[ up == up ]] 00:37:50.799 12:21:23 nvmf_dif -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:37:50.799 12:21:23 nvmf_dif -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:50.799 12:21:23 nvmf_dif -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:37:50.799 Found net devices under 0000:86:00.1: cvl_0_1 00:37:50.799 12:21:23 nvmf_dif -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:37:50.799 12:21:23 nvmf_dif -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:37:50.799 12:21:23 nvmf_dif -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:37:50.799 12:21:23 nvmf_dif -- nvmf/common.sh@262 -- # is_hw=yes 00:37:50.799 12:21:23 nvmf_dif -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:37:50.799 12:21:23 nvmf_dif -- 
nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:37:50.799 12:21:23 nvmf_dif -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@257 -- # create_target_ns 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@27 -- # local -gA dev_map 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@28 -- # local -g _dev 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@44 -- # ips=() 00:37:50.799 
12:21:23 nvmf_dif -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@11 -- # local val=167772161 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:37:50.799 12:21:23 nvmf_dif -- 
nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:37:50.799 10.0.0.1 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@11 -- # local val=167772162 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:37:50.799 10.0.0.2 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:37:50.799 12:21:23 nvmf_dif -- 
nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:37:50.799 12:21:23 nvmf_dif -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:37:50.799 12:21:24 nvmf_dif -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:37:50.799 12:21:24 nvmf_dif -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:37:50.799 12:21:24 nvmf_dif -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:37:50.799 12:21:24 nvmf_dif -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:37:50.799 12:21:24 nvmf_dif -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:37:50.799 12:21:24 nvmf_dif -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:37:50.799 12:21:24 nvmf_dif -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:37:50.799 12:21:24 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:37:50.799 12:21:24 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:37:50.799 12:21:24 nvmf_dif -- nvmf/setup.sh@38 -- # ping_ips 1 00:37:50.799 12:21:24 nvmf_dif -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:37:50.799 12:21:24 nvmf_dif -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:37:50.799 12:21:24 nvmf_dif -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:37:50.799 12:21:24 nvmf_dif -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:37:50.799 12:21:24 nvmf_dif -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:37:50.799 12:21:24 nvmf_dif -- 
nvmf/setup.sh@183 -- # get_ip_address initiator0 00:37:50.799 12:21:24 nvmf_dif -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:37:50.799 12:21:24 nvmf_dif -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:37:50.799 12:21:24 nvmf_dif -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:37:50.799 12:21:24 nvmf_dif -- nvmf/setup.sh@107 -- # local dev=initiator0 00:37:50.799 12:21:24 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:37:50.799 12:21:24 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:37:50.799 12:21:24 nvmf_dif -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:37:50.799 12:21:24 nvmf_dif -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:37:50.799 12:21:24 nvmf_dif -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:37:50.799 12:21:24 nvmf_dif -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:37:50.799 12:21:24 nvmf_dif -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:37:50.799 12:21:24 nvmf_dif -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:37:50.799 12:21:24 nvmf_dif -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:37:50.799 12:21:24 nvmf_dif -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:37:50.799 12:21:24 nvmf_dif -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:37:50.799 12:21:24 nvmf_dif -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:50.799 12:21:24 nvmf_dif -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:50.799 12:21:24 nvmf_dif -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:37:50.799 12:21:24 nvmf_dif -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:37:50.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:50.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.425 ms 00:37:50.799 00:37:50.799 --- 10.0.0.1 ping statistics --- 00:37:50.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:50.799 rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms 00:37:50.799 12:21:24 nvmf_dif -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:37:50.799 12:21:24 nvmf_dif -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:37:50.800 12:21:24 nvmf_dif -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:37:50.800 12:21:24 nvmf_dif -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:37:50.800 12:21:24 nvmf_dif -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:50.800 12:21:24 nvmf_dif -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:50.800 12:21:24 nvmf_dif -- nvmf/setup.sh@168 -- # get_net_dev target0 00:37:50.800 12:21:24 nvmf_dif -- nvmf/setup.sh@107 -- # local dev=target0 00:37:50.800 12:21:24 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:37:50.800 12:21:24 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:37:50.800 12:21:24 nvmf_dif -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:37:50.800 12:21:24 nvmf_dif -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:37:50.800 12:21:24 nvmf_dif -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:37:50.800 12:21:24 nvmf_dif -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:37:50.800 12:21:24 nvmf_dif -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:37:50.800 12:21:24 nvmf_dif -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:37:50.800 12:21:24 nvmf_dif -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:37:50.800 12:21:24 nvmf_dif -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:37:50.800 12:21:24 nvmf_dif -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:37:50.800 12:21:24 nvmf_dif -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:37:50.800 12:21:24 
nvmf_dif -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:37:50.800 12:21:24 nvmf_dif -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:37:50.800 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:50.800 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:37:50.800 00:37:50.800 --- 10.0.0.2 ping statistics --- 00:37:50.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:50.800 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:37:50.800 12:21:24 nvmf_dif -- nvmf/setup.sh@98 -- # (( pair++ )) 00:37:50.800 12:21:24 nvmf_dif -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:37:50.800 12:21:24 nvmf_dif -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:50.800 12:21:24 nvmf_dif -- nvmf/common.sh@270 -- # return 0 00:37:50.800 12:21:24 nvmf_dif -- nvmf/common.sh@298 -- # '[' iso == iso ']' 00:37:50.800 12:21:24 nvmf_dif -- nvmf/common.sh@299 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:52.697 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:37:52.697 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:37:52.697 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:37:52.697 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:37:52.697 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:37:52.697 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:37:52.697 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:37:52.697 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:37:52.697 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:37:52.697 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:37:52.697 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:37:52.697 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:37:52.697 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:37:52.697 0000:80:04.3 (8086 2021): Already using the vfio-pci 
driver 00:37:52.697 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:37:52.697 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:37:52.697 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:37:52.956 12:21:26 nvmf_dif -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@107 -- # local dev=initiator0 00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 
00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@107 -- # local dev=initiator1 00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@109 -- # return 1 00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@168 -- # dev= 00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@169 -- # return 0 00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@168 -- # get_net_dev target0 00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@107 -- # local dev=target0 00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:37:52.956 12:21:26 nvmf_dif -- nvmf/setup.sh@172 -- 
# ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:37:52.956 12:21:27 nvmf_dif -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:37:52.956 12:21:27 nvmf_dif -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:37:52.956 12:21:27 nvmf_dif -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:37:52.956 12:21:27 nvmf_dif -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:52.956 12:21:27 nvmf_dif -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:37:52.956 12:21:27 nvmf_dif -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:37:52.956 12:21:27 nvmf_dif -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:37:52.956 12:21:27 nvmf_dif -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:37:52.956 12:21:27 nvmf_dif -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:52.956 12:21:27 nvmf_dif -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:52.956 12:21:27 nvmf_dif -- nvmf/setup.sh@168 -- # get_net_dev target1 00:37:52.956 12:21:27 nvmf_dif -- nvmf/setup.sh@107 -- # local dev=target1 00:37:52.956 12:21:27 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:37:52.956 12:21:27 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:37:52.956 12:21:27 nvmf_dif -- nvmf/setup.sh@109 -- # return 1 00:37:52.956 12:21:27 nvmf_dif -- nvmf/setup.sh@168 -- # dev= 00:37:52.956 12:21:27 nvmf_dif -- nvmf/setup.sh@169 -- # return 0 00:37:52.956 12:21:27 nvmf_dif -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:37:52.956 12:21:27 nvmf_dif -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:52.956 12:21:27 nvmf_dif -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:37:52.956 12:21:27 nvmf_dif -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:37:52.956 12:21:27 nvmf_dif -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:52.956 12:21:27 nvmf_dif -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:37:52.956 12:21:27 nvmf_dif -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:37:52.956 
12:21:27 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:37:52.956 12:21:27 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:37:52.956 12:21:27 nvmf_dif -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:37:52.956 12:21:27 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:52.956 12:21:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:52.956 12:21:27 nvmf_dif -- nvmf/common.sh@328 -- # nvmfpid=338647 00:37:52.956 12:21:27 nvmf_dif -- nvmf/common.sh@329 -- # waitforlisten 338647 00:37:52.956 12:21:27 nvmf_dif -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:37:52.956 12:21:27 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 338647 ']' 00:37:52.956 12:21:27 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:52.956 12:21:27 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:52.956 12:21:27 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:52.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:52.956 12:21:27 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:52.956 12:21:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:52.956 [2024-12-05 12:21:27.113704] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:37:52.956 [2024-12-05 12:21:27.113753] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:53.215 [2024-12-05 12:21:27.191433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:53.215 [2024-12-05 12:21:27.232094] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:53.215 [2024-12-05 12:21:27.232129] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:53.215 [2024-12-05 12:21:27.232137] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:53.216 [2024-12-05 12:21:27.232143] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:53.216 [2024-12-05 12:21:27.232148] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:53.216 [2024-12-05 12:21:27.232696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:53.216 12:21:27 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:53.216 12:21:27 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:37:53.216 12:21:27 nvmf_dif -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:37:53.216 12:21:27 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:53.216 12:21:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:53.216 12:21:27 nvmf_dif -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:53.216 12:21:27 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:37:53.216 12:21:27 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:37:53.216 12:21:27 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:53.216 12:21:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:53.216 [2024-12-05 12:21:27.370137] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:53.216 12:21:27 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:53.216 12:21:27 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:37:53.216 12:21:27 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:53.216 12:21:27 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:53.216 12:21:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:53.216 ************************************ 00:37:53.216 START TEST fio_dif_1_default 00:37:53.216 ************************************ 00:37:53.216 12:21:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:37:53.216 12:21:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:37:53.216 12:21:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:37:53.216 12:21:27 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:37:53.216 12:21:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:37:53.216 12:21:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:37:53.216 12:21:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:53.216 12:21:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:53.216 12:21:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:53.497 bdev_null0 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:53.497 [2024-12-05 12:21:27.438465] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@372 -- # config=() 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@372 -- # local subsystem config 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:37:53.497 { 00:37:53.497 "params": { 00:37:53.497 "name": "Nvme$subsystem", 00:37:53.497 "trtype": "$TEST_TRANSPORT", 00:37:53.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:53.497 "adrfam": "ipv4", 00:37:53.497 "trsvcid": "$NVMF_PORT", 00:37:53.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:53.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:53.497 "hdgst": ${hdgst:-false}, 00:37:53.497 "ddgst": ${ddgst:-false} 00:37:53.497 }, 00:37:53.497 "method": "bdev_nvme_attach_controller" 00:37:53.497 } 00:37:53.497 EOF 00:37:53.497 )") 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@394 -- # cat 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@396 -- # jq . 
00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@397 -- # IFS=, 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:37:53.497 "params": { 00:37:53.497 "name": "Nvme0", 00:37:53.497 "trtype": "tcp", 00:37:53.497 "traddr": "10.0.0.2", 00:37:53.497 "adrfam": "ipv4", 00:37:53.497 "trsvcid": "4420", 00:37:53.497 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:53.497 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:53.497 "hdgst": false, 00:37:53.497 "ddgst": false 00:37:53.497 }, 00:37:53.497 "method": "bdev_nvme_attach_controller" 00:37:53.497 }' 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:53.497 12:21:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:53.757 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:53.757 fio-3.35 
00:37:53.757 Starting 1 thread 00:38:05.976 00:38:05.976 filename0: (groupid=0, jobs=1): err= 0: pid=339018: Thu Dec 5 12:21:38 2024 00:38:05.976 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10011msec) 00:38:05.976 slat (nsec): min=5886, max=25576, avg=6221.41, stdev=1100.20 00:38:05.976 clat (usec): min=40785, max=45354, avg=41008.78, stdev=297.70 00:38:05.976 lat (usec): min=40791, max=45379, avg=41015.00, stdev=298.17 00:38:05.976 clat percentiles (usec): 00:38:05.976 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:38:05.976 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:38:05.976 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:38:05.976 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:38:05.976 | 99.99th=[45351] 00:38:05.976 bw ( KiB/s): min= 384, max= 416, per=99.49%, avg=388.80, stdev=11.72, samples=20 00:38:05.976 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:38:05.976 lat (msec) : 50=100.00% 00:38:05.976 cpu : usr=92.96%, sys=6.80%, ctx=17, majf=0, minf=0 00:38:05.976 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:05.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:05.976 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:05.976 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:05.976 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:05.976 00:38:05.976 Run status group 0 (all jobs): 00:38:05.976 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10011-10011msec 00:38:05.976 12:21:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:38:05.976 12:21:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:38:05.976 12:21:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:38:05.976 12:21:38 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:38:05.976 12:21:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:38:05.976 12:21:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:05.976 12:21:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.976 12:21:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:05.976 12:21:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.976 12:21:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:05.976 12:21:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.976 12:21:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:05.976 12:21:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.976 00:38:05.976 real 0m11.253s 00:38:05.976 user 0m16.173s 00:38:05.976 sys 0m0.987s 00:38:05.976 12:21:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:05.976 12:21:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:05.976 ************************************ 00:38:05.976 END TEST fio_dif_1_default 00:38:05.976 ************************************ 00:38:05.976 12:21:38 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:38:05.976 12:21:38 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:05.976 12:21:38 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:05.976 12:21:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:05.976 ************************************ 00:38:05.976 START TEST fio_dif_1_multi_subsystems 00:38:05.976 ************************************ 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:05.977 bdev_null0 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:05.977 [2024-12-05 12:21:38.768981] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:05.977 bdev_null1 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:05.977 12:21:38 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@372 -- # config=() 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@372 -- # local subsystem config 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:38:05.977 { 00:38:05.977 "params": { 00:38:05.977 "name": "Nvme$subsystem", 00:38:05.977 "trtype": "$TEST_TRANSPORT", 00:38:05.977 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:05.977 "adrfam": "ipv4", 00:38:05.977 "trsvcid": "$NVMF_PORT", 00:38:05.977 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:05.977 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:05.977 "hdgst": ${hdgst:-false}, 00:38:05.977 "ddgst": ${ddgst:-false} 00:38:05.977 }, 00:38:05.977 "method": "bdev_nvme_attach_controller" 00:38:05.977 } 00:38:05.977 EOF 00:38:05.977 )") 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:05.977 12:21:38 
nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # cat 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:38:05.977 { 00:38:05.977 "params": { 00:38:05.977 "name": "Nvme$subsystem", 00:38:05.977 "trtype": "$TEST_TRANSPORT", 00:38:05.977 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:05.977 "adrfam": "ipv4", 00:38:05.977 "trsvcid": "$NVMF_PORT", 00:38:05.977 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:05.977 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:05.977 "hdgst": ${hdgst:-false}, 00:38:05.977 "ddgst": ${ddgst:-false} 00:38:05.977 }, 00:38:05.977 "method": "bdev_nvme_attach_controller" 00:38:05.977 } 00:38:05.977 EOF 00:38:05.977 )") 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # cat 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@396 -- # jq . 
00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@397 -- # IFS=, 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:38:05.977 "params": { 00:38:05.977 "name": "Nvme0", 00:38:05.977 "trtype": "tcp", 00:38:05.977 "traddr": "10.0.0.2", 00:38:05.977 "adrfam": "ipv4", 00:38:05.977 "trsvcid": "4420", 00:38:05.977 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:05.977 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:05.977 "hdgst": false, 00:38:05.977 "ddgst": false 00:38:05.977 }, 00:38:05.977 "method": "bdev_nvme_attach_controller" 00:38:05.977 },{ 00:38:05.977 "params": { 00:38:05.977 "name": "Nvme1", 00:38:05.977 "trtype": "tcp", 00:38:05.977 "traddr": "10.0.0.2", 00:38:05.977 "adrfam": "ipv4", 00:38:05.977 "trsvcid": "4420", 00:38:05.977 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:05.977 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:05.977 "hdgst": false, 00:38:05.977 "ddgst": false 00:38:05.977 }, 00:38:05.977 "method": "bdev_nvme_attach_controller" 00:38:05.977 }' 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:05.977 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:05.978 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:05.978 12:21:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:05.978 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:05.978 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:05.978 fio-3.35 00:38:05.978 Starting 2 threads 00:38:15.956 00:38:15.956 filename0: (groupid=0, jobs=1): err= 0: pid=340983: Thu Dec 5 12:21:49 2024 00:38:15.956 read: IOPS=97, BW=392KiB/s (401kB/s)(3920KiB/10010msec) 00:38:15.956 slat (nsec): min=5904, max=43411, avg=7620.55, stdev=2696.28 00:38:15.956 clat (usec): min=420, max=42071, avg=40834.10, stdev=2592.95 00:38:15.956 lat (usec): min=426, max=42082, avg=40841.72, stdev=2592.96 00:38:15.956 clat percentiles (usec): 00:38:15.956 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:38:15.956 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:38:15.956 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:38:15.956 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:38:15.956 | 99.99th=[42206] 00:38:15.956 bw ( KiB/s): min= 384, max= 416, per=49.90%, avg=390.40, stdev=13.13, samples=20 00:38:15.956 iops : min= 96, max= 104, avg=97.60, stdev= 3.28, samples=20 00:38:15.956 lat (usec) : 500=0.41% 00:38:15.956 lat (msec) : 50=99.59% 00:38:15.956 cpu : usr=96.87%, sys=2.88%, ctx=6, majf=0, minf=111 00:38:15.956 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:15.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:15.956 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:15.956 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:15.957 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:15.957 filename1: (groupid=0, jobs=1): err= 0: pid=340984: Thu Dec 5 12:21:49 2024 00:38:15.957 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10007msec) 00:38:15.957 slat (nsec): min=5936, max=43132, avg=7704.96, stdev=3019.23 00:38:15.957 clat (usec): min=40753, max=42001, avg=40988.66, stdev=119.05 00:38:15.957 lat (usec): min=40760, max=42013, avg=40996.37, stdev=119.60 00:38:15.957 clat percentiles (usec): 00:38:15.957 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:38:15.957 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:38:15.957 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:38:15.957 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:38:15.957 | 99.99th=[42206] 00:38:15.957 bw ( KiB/s): min= 384, max= 416, per=49.64%, avg=388.80, stdev=11.72, samples=20 00:38:15.957 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:38:15.957 lat (msec) : 50=100.00% 00:38:15.957 cpu : usr=96.92%, sys=2.83%, ctx=22, majf=0, minf=189 00:38:15.957 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:15.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:15.957 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:15.957 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:15.957 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:15.957 00:38:15.957 Run status group 0 (all jobs): 00:38:15.957 READ: bw=782KiB/s (800kB/s), 390KiB/s-392KiB/s (399kB/s-401kB/s), io=7824KiB (8012kB), run=10007-10010msec 00:38:15.957 12:21:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:38:15.957 12:21:50 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:38:15.957 12:21:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:38:15.957 12:21:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:15.957 12:21:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:38:15.957 12:21:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:15.957 12:21:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:15.957 12:21:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:15.957 12:21:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:15.957 12:21:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:15.957 12:21:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:15.957 12:21:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:15.957 12:21:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:15.957 12:21:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:38:15.957 12:21:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:15.957 12:21:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:38:15.957 12:21:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:15.957 12:21:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:15.957 12:21:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:15.957 12:21:50 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:15.957 12:21:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:15.957 12:21:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:15.957 12:21:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:15.957 12:21:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:15.957 00:38:15.957 real 0m11.393s 00:38:15.957 user 0m26.394s 00:38:15.957 sys 0m0.994s 00:38:15.957 12:21:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:15.957 12:21:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:15.957 ************************************ 00:38:15.957 END TEST fio_dif_1_multi_subsystems 00:38:15.957 ************************************ 00:38:16.217 12:21:50 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:38:16.217 12:21:50 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:16.217 12:21:50 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:16.217 12:21:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:16.217 ************************************ 00:38:16.217 START TEST fio_dif_rand_params 00:38:16.217 ************************************ 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@103 -- # numjobs=3 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:16.217 bdev_null0 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:16.217 12:21:50 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:16.217 [2024-12-05 12:21:50.240272] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # config=() 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # local subsystem config 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:38:16.217 { 00:38:16.217 "params": { 00:38:16.217 "name": "Nvme$subsystem", 00:38:16.217 "trtype": "$TEST_TRANSPORT", 00:38:16.217 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:38:16.217 "adrfam": "ipv4", 00:38:16.217 "trsvcid": "$NVMF_PORT", 00:38:16.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:16.217 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:16.217 "hdgst": ${hdgst:-false}, 00:38:16.217 "ddgst": ${ddgst:-false} 00:38:16.217 }, 00:38:16.217 "method": "bdev_nvme_attach_controller" 00:38:16.217 } 00:38:16.217 EOF 00:38:16.217 )") 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:38:16.217 12:21:50 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@396 -- # jq . 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@397 -- # IFS=, 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:38:16.217 "params": { 00:38:16.217 "name": "Nvme0", 00:38:16.217 "trtype": "tcp", 00:38:16.217 "traddr": "10.0.0.2", 00:38:16.217 "adrfam": "ipv4", 00:38:16.217 "trsvcid": "4420", 00:38:16.217 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:16.217 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:16.217 "hdgst": false, 00:38:16.217 "ddgst": false 00:38:16.217 }, 00:38:16.217 "method": "bdev_nvme_attach_controller" 00:38:16.217 }' 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:16.217 12:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:16.218 12:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:16.476 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:38:16.476 ... 00:38:16.476 fio-3.35 00:38:16.476 Starting 3 threads 00:38:23.039 00:38:23.039 filename0: (groupid=0, jobs=1): err= 0: pid=342948: Thu Dec 5 12:21:56 2024 00:38:23.039 read: IOPS=317, BW=39.7MiB/s (41.6MB/s)(199MiB/5005msec) 00:38:23.039 slat (nsec): min=6205, max=39948, avg=11148.56, stdev=2205.90 00:38:23.039 clat (usec): min=4604, max=89184, avg=9438.98, stdev=6087.42 00:38:23.039 lat (usec): min=4614, max=89194, avg=9450.13, stdev=6087.32 00:38:23.039 clat percentiles (usec): 00:38:23.039 | 1.00th=[ 5604], 5.00th=[ 6325], 10.00th=[ 6915], 20.00th=[ 7635], 00:38:23.039 | 30.00th=[ 8094], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 8979], 00:38:23.039 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[10290], 95.00th=[10683], 00:38:23.039 | 99.00th=[49021], 99.50th=[50070], 99.90th=[51119], 99.95th=[89654], 00:38:23.039 | 99.99th=[89654] 00:38:23.039 bw ( KiB/s): min=32512, max=46080, per=34.28%, avg=41130.67, stdev=4806.40, samples=9 00:38:23.039 iops : min= 254, max= 360, avg=321.33, stdev=37.55, samples=9 00:38:23.039 lat (msec) : 10=85.58%, 20=12.41%, 50=1.57%, 100=0.44% 00:38:23.039 cpu : usr=94.08%, sys=5.60%, ctx=10, majf=0, minf=75 00:38:23.039 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:23.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.039 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.039 issued rwts: total=1588,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:23.039 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:23.039 filename0: (groupid=0, jobs=1): err= 0: pid=342949: Thu Dec 5 12:21:56 2024 00:38:23.039 read: IOPS=315, BW=39.4MiB/s (41.3MB/s)(197MiB/5003msec) 00:38:23.039 slat (nsec): min=6312, max=31026, avg=10907.19, stdev=1998.69 00:38:23.039 clat 
(usec): min=3147, max=51025, avg=9510.13, stdev=4658.54 00:38:23.039 lat (usec): min=3154, max=51049, avg=9521.03, stdev=4658.79 00:38:23.039 clat percentiles (usec): 00:38:23.039 | 1.00th=[ 3720], 5.00th=[ 5997], 10.00th=[ 6456], 20.00th=[ 7635], 00:38:23.040 | 30.00th=[ 8356], 40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[ 9634], 00:38:23.040 | 70.00th=[10159], 80.00th=[10683], 90.00th=[11207], 95.00th=[11994], 00:38:23.040 | 99.00th=[47973], 99.50th=[49546], 99.90th=[51119], 99.95th=[51119], 00:38:23.040 | 99.99th=[51119] 00:38:23.040 bw ( KiB/s): min=36096, max=46080, per=33.81%, avg=40561.78, stdev=3355.24, samples=9 00:38:23.040 iops : min= 282, max= 360, avg=316.89, stdev=26.21, samples=9 00:38:23.040 lat (msec) : 4=1.65%, 10=66.37%, 20=30.84%, 50=0.82%, 100=0.32% 00:38:23.040 cpu : usr=94.16%, sys=5.56%, ctx=9, majf=0, minf=60 00:38:23.040 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:23.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.040 issued rwts: total=1576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:23.040 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:23.040 filename0: (groupid=0, jobs=1): err= 0: pid=342950: Thu Dec 5 12:21:56 2024 00:38:23.040 read: IOPS=305, BW=38.2MiB/s (40.0MB/s)(191MiB/5002msec) 00:38:23.040 slat (nsec): min=6263, max=33622, avg=11137.31, stdev=2008.22 00:38:23.040 clat (usec): min=3265, max=92057, avg=9813.01, stdev=5382.50 00:38:23.040 lat (usec): min=3272, max=92069, avg=9824.15, stdev=5382.51 00:38:23.040 clat percentiles (usec): 00:38:23.040 | 1.00th=[ 4621], 5.00th=[ 6259], 10.00th=[ 6587], 20.00th=[ 7767], 00:38:23.040 | 30.00th=[ 8586], 40.00th=[ 9241], 50.00th=[ 9634], 60.00th=[10028], 00:38:23.040 | 70.00th=[10552], 80.00th=[10945], 90.00th=[11469], 95.00th=[11994], 00:38:23.040 | 99.00th=[13435], 99.50th=[50070], 99.90th=[91751], 99.95th=[91751], 
00:38:23.040 | 99.99th=[91751] 00:38:23.040 bw ( KiB/s): min=30720, max=44544, per=31.91%, avg=38286.22, stdev=4064.10, samples=9 00:38:23.040 iops : min= 240, max= 348, avg=299.11, stdev=31.75, samples=9 00:38:23.040 lat (msec) : 4=0.79%, 10=58.81%, 20=39.42%, 50=0.46%, 100=0.52% 00:38:23.040 cpu : usr=94.08%, sys=5.64%, ctx=10, majf=0, minf=28 00:38:23.040 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:23.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.040 issued rwts: total=1527,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:23.040 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:23.040 00:38:23.040 Run status group 0 (all jobs): 00:38:23.040 READ: bw=117MiB/s (123MB/s), 38.2MiB/s-39.7MiB/s (40.0MB/s-41.6MB/s), io=586MiB (615MB), run=5002-5005msec 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:23.040 bdev_null0 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.040 
12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:23.040 [2024-12-05 12:21:56.577567] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:23.040 bdev_null1 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.040 
12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:38:23.040 bdev_null2 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # config=() 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:23.040 12:21:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # local subsystem config 00:38:23.041 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:23.041 12:21:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:38:23.041 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:23.041 12:21:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:38:23.041 { 00:38:23.041 "params": { 00:38:23.041 "name": "Nvme$subsystem", 00:38:23.041 "trtype": "$TEST_TRANSPORT", 00:38:23.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:23.041 "adrfam": "ipv4", 00:38:23.041 "trsvcid": "$NVMF_PORT", 00:38:23.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:23.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:23.041 "hdgst": ${hdgst:-false}, 00:38:23.041 "ddgst": ${ddgst:-false} 00:38:23.041 }, 00:38:23.041 "method": "bdev_nvme_attach_controller" 00:38:23.041 } 00:38:23.041 EOF 00:38:23.041 )") 00:38:23.041 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:23.041 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:23.041 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:23.041 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:23.041 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:23.041 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:23.041 12:21:56 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # shift 00:38:23.041 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:23.041 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:23.041 12:21:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:38:23.041 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:23.041 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:23.041 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:38:23.041 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:23.041 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:23.041 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:23.041 12:21:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:38:23.041 12:21:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:38:23.041 { 00:38:23.041 "params": { 00:38:23.041 "name": "Nvme$subsystem", 00:38:23.041 "trtype": "$TEST_TRANSPORT", 00:38:23.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:23.041 "adrfam": "ipv4", 00:38:23.041 "trsvcid": "$NVMF_PORT", 00:38:23.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:23.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:23.041 "hdgst": ${hdgst:-false}, 00:38:23.041 "ddgst": ${ddgst:-false} 00:38:23.041 }, 00:38:23.041 "method": "bdev_nvme_attach_controller" 00:38:23.041 } 00:38:23.041 EOF 00:38:23.041 )") 00:38:23.041 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:23.041 12:21:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:38:23.041 12:21:56 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:38:23.041 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:23.041 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:23.041 12:21:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:23.041 12:21:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:38:23.041 12:21:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:38:23.041 { 00:38:23.041 "params": { 00:38:23.041 "name": "Nvme$subsystem", 00:38:23.041 "trtype": "$TEST_TRANSPORT", 00:38:23.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:23.041 "adrfam": "ipv4", 00:38:23.041 "trsvcid": "$NVMF_PORT", 00:38:23.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:23.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:23.041 "hdgst": ${hdgst:-false}, 00:38:23.041 "ddgst": ${ddgst:-false} 00:38:23.041 }, 00:38:23.041 "method": "bdev_nvme_attach_controller" 00:38:23.041 } 00:38:23.041 EOF 00:38:23.041 )") 00:38:23.041 12:21:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:38:23.041 12:21:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@396 -- # jq . 
00:38:23.041 12:21:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@397 -- # IFS=, 00:38:23.041 12:21:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:38:23.041 "params": { 00:38:23.041 "name": "Nvme0", 00:38:23.041 "trtype": "tcp", 00:38:23.041 "traddr": "10.0.0.2", 00:38:23.041 "adrfam": "ipv4", 00:38:23.041 "trsvcid": "4420", 00:38:23.041 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:23.041 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:23.041 "hdgst": false, 00:38:23.041 "ddgst": false 00:38:23.041 }, 00:38:23.041 "method": "bdev_nvme_attach_controller" 00:38:23.041 },{ 00:38:23.041 "params": { 00:38:23.041 "name": "Nvme1", 00:38:23.041 "trtype": "tcp", 00:38:23.041 "traddr": "10.0.0.2", 00:38:23.041 "adrfam": "ipv4", 00:38:23.041 "trsvcid": "4420", 00:38:23.041 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:23.041 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:23.041 "hdgst": false, 00:38:23.041 "ddgst": false 00:38:23.041 }, 00:38:23.041 "method": "bdev_nvme_attach_controller" 00:38:23.041 },{ 00:38:23.041 "params": { 00:38:23.041 "name": "Nvme2", 00:38:23.041 "trtype": "tcp", 00:38:23.041 "traddr": "10.0.0.2", 00:38:23.041 "adrfam": "ipv4", 00:38:23.041 "trsvcid": "4420", 00:38:23.041 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:38:23.041 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:38:23.041 "hdgst": false, 00:38:23.041 "ddgst": false 00:38:23.041 }, 00:38:23.041 "method": "bdev_nvme_attach_controller" 00:38:23.041 }' 00:38:23.041 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:23.041 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:23.041 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:23.041 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:23.041 12:21:56 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:38:23.041 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:23.041 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:23.041 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:23.041 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:23.041 12:21:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:23.041 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:23.041 ... 00:38:23.041 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:23.041 ... 00:38:23.041 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:23.041 ... 
00:38:23.041 fio-3.35 00:38:23.041 Starting 24 threads 00:38:35.251 00:38:35.251 filename0: (groupid=0, jobs=1): err= 0: pid=344005: Thu Dec 5 12:22:07 2024 00:38:35.251 read: IOPS=521, BW=2086KiB/s (2136kB/s)(20.4MiB/10003msec) 00:38:35.251 slat (nsec): min=8532, max=65463, avg=28465.26, stdev=11516.59 00:38:35.251 clat (usec): min=26938, max=37300, avg=30439.49, stdev=735.19 00:38:35.251 lat (usec): min=26970, max=37334, avg=30467.95, stdev=735.19 00:38:35.251 clat percentiles (usec): 00:38:35.251 | 1.00th=[28181], 5.00th=[30016], 10.00th=[30016], 20.00th=[30278], 00:38:35.251 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30540], 00:38:35.251 | 70.00th=[30540], 80.00th=[30540], 90.00th=[31065], 95.00th=[31327], 00:38:35.251 | 99.00th=[32637], 99.50th=[33817], 99.90th=[36963], 99.95th=[37487], 00:38:35.251 | 99.99th=[37487] 00:38:35.251 bw ( KiB/s): min= 2048, max= 2176, per=4.13%, avg=2081.68, stdev=57.91, samples=19 00:38:35.251 iops : min= 512, max= 544, avg=520.42, stdev=14.48, samples=19 00:38:35.251 lat (msec) : 50=100.00% 00:38:35.251 cpu : usr=98.78%, sys=0.84%, ctx=24, majf=0, minf=9 00:38:35.251 IO depths : 1=4.8%, 2=10.7%, 4=23.6%, 8=53.2%, 16=7.7%, 32=0.0%, >=64=0.0% 00:38:35.251 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.251 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.251 issued rwts: total=5216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:35.251 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:35.251 filename0: (groupid=0, jobs=1): err= 0: pid=344006: Thu Dec 5 12:22:07 2024 00:38:35.251 read: IOPS=522, BW=2090KiB/s (2140kB/s)(20.4MiB/10015msec) 00:38:35.251 slat (nsec): min=5068, max=78539, avg=36008.60, stdev=10814.99 00:38:35.251 clat (usec): min=17386, max=39590, avg=30324.40, stdev=840.47 00:38:35.251 lat (usec): min=17434, max=39638, avg=30360.41, stdev=839.82 00:38:35.251 clat percentiles (usec): 00:38:35.251 | 1.00th=[29492], 
5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:38:35.251 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:38:35.251 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:38:35.251 | 99.00th=[31327], 99.50th=[31851], 99.90th=[33162], 99.95th=[33162], 00:38:35.251 | 99.99th=[39584] 00:38:35.251 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2088.42, stdev=61.13, samples=19 00:38:35.251 iops : min= 512, max= 544, avg=522.11, stdev=15.28, samples=19 00:38:35.251 lat (msec) : 20=0.31%, 50=99.69% 00:38:35.251 cpu : usr=98.38%, sys=1.23%, ctx=12, majf=0, minf=9 00:38:35.251 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:35.251 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.251 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.251 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:35.251 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:35.251 filename0: (groupid=0, jobs=1): err= 0: pid=344007: Thu Dec 5 12:22:07 2024 00:38:35.251 read: IOPS=550, BW=2200KiB/s (2253kB/s)(21.5MiB/10022msec) 00:38:35.251 slat (usec): min=7, max=355, avg=14.45, stdev=13.60 00:38:35.251 clat (usec): min=5577, max=53205, avg=28975.56, stdev=4835.34 00:38:35.251 lat (usec): min=5586, max=53215, avg=28990.01, stdev=4834.51 00:38:35.251 clat percentiles (usec): 00:38:35.251 | 1.00th=[ 7701], 5.00th=[17957], 10.00th=[25560], 20.00th=[30016], 00:38:35.251 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540], 00:38:35.251 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:38:35.251 | 99.00th=[32637], 99.50th=[42730], 99.90th=[53216], 99.95th=[53216], 00:38:35.251 | 99.99th=[53216] 00:38:35.251 bw ( KiB/s): min= 2048, max= 2544, per=4.37%, avg=2198.55, stdev=152.00, samples=20 00:38:35.251 iops : min= 512, max= 636, avg=549.60, stdev=37.98, samples=20 00:38:35.251 lat (msec) : 10=2.30%, 
20=5.71%, 50=91.76%, 100=0.22% 00:38:35.251 cpu : usr=98.65%, sys=0.96%, ctx=13, majf=0, minf=9 00:38:35.251 IO depths : 1=2.7%, 2=7.6%, 4=20.8%, 8=59.0%, 16=9.9%, 32=0.0%, >=64=0.0% 00:38:35.251 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.251 complete : 0=0.0%, 4=93.1%, 8=1.4%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.251 issued rwts: total=5513,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:35.251 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:35.251 filename0: (groupid=0, jobs=1): err= 0: pid=344008: Thu Dec 5 12:22:07 2024 00:38:35.251 read: IOPS=536, BW=2145KiB/s (2196kB/s)(21.0MiB/10026msec) 00:38:35.251 slat (nsec): min=7407, max=47114, avg=9450.55, stdev=3035.30 00:38:35.251 clat (usec): min=2728, max=32163, avg=29748.34, stdev=4419.81 00:38:35.251 lat (usec): min=2743, max=32177, avg=29757.79, stdev=4418.92 00:38:35.251 clat percentiles (usec): 00:38:35.251 | 1.00th=[ 2966], 5.00th=[30278], 10.00th=[30278], 20.00th=[30278], 00:38:35.251 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30540], 60.00th=[30540], 00:38:35.251 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31065], 95.00th=[31065], 00:38:35.251 | 99.00th=[31589], 99.50th=[31851], 99.90th=[32113], 99.95th=[32113], 00:38:35.251 | 99.99th=[32113] 00:38:35.251 bw ( KiB/s): min= 2043, max= 3072, per=4.26%, avg=2143.50, stdev=227.07, samples=20 00:38:35.251 iops : min= 510, max= 768, avg=535.80, stdev=56.78, samples=20 00:38:35.251 lat (msec) : 4=1.79%, 10=0.56%, 20=1.23%, 50=96.43% 00:38:35.251 cpu : usr=98.48%, sys=1.12%, ctx=30, majf=0, minf=0 00:38:35.251 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:35.251 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.251 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.251 issued rwts: total=5376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:35.251 latency : target=0, window=0, percentile=100.00%, depth=16 
00:38:35.251 filename0: (groupid=0, jobs=1): err= 0: pid=344009: Thu Dec 5 12:22:07 2024 00:38:35.251 read: IOPS=527, BW=2111KiB/s (2162kB/s)(20.7MiB/10022msec) 00:38:35.251 slat (nsec): min=7505, max=81063, avg=23492.89, stdev=15935.99 00:38:35.251 clat (usec): min=7693, max=33596, avg=30139.09, stdev=2347.39 00:38:35.251 lat (usec): min=7702, max=33634, avg=30162.59, stdev=2347.80 00:38:35.251 clat percentiles (usec): 00:38:35.251 | 1.00th=[15533], 5.00th=[29754], 10.00th=[30016], 20.00th=[30278], 00:38:35.251 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540], 00:38:35.251 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31065], 95.00th=[31065], 00:38:35.251 | 99.00th=[31589], 99.50th=[32113], 99.90th=[32375], 99.95th=[32375], 00:38:35.251 | 99.99th=[33817] 00:38:35.251 bw ( KiB/s): min= 2048, max= 2299, per=4.19%, avg=2108.95, stdev=81.01, samples=20 00:38:35.251 iops : min= 512, max= 574, avg=527.20, stdev=20.16, samples=20 00:38:35.251 lat (msec) : 10=0.40%, 20=1.40%, 50=98.20% 00:38:35.251 cpu : usr=98.47%, sys=1.13%, ctx=16, majf=0, minf=9 00:38:35.251 IO depths : 1=6.0%, 2=12.1%, 4=24.4%, 8=51.0%, 16=6.5%, 32=0.0%, >=64=0.0% 00:38:35.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.252 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.252 issued rwts: total=5289,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:35.252 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:35.252 filename0: (groupid=0, jobs=1): err= 0: pid=344010: Thu Dec 5 12:22:07 2024 00:38:35.252 read: IOPS=521, BW=2086KiB/s (2136kB/s)(20.4MiB/10003msec) 00:38:35.252 slat (nsec): min=4227, max=69044, avg=37467.95, stdev=10579.40 00:38:35.252 clat (usec): min=17547, max=51802, avg=30343.86, stdev=1419.04 00:38:35.252 lat (usec): min=17579, max=51817, avg=30381.33, stdev=1417.98 00:38:35.252 clat percentiles (usec): 00:38:35.252 | 1.00th=[29492], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 
00:38:35.252 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:38:35.252 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:38:35.252 | 99.00th=[31327], 99.50th=[31589], 99.90th=[51643], 99.95th=[51643], 00:38:35.252 | 99.99th=[51643] 00:38:35.252 bw ( KiB/s): min= 1920, max= 2176, per=4.13%, avg=2081.37, stdev=71.62, samples=19 00:38:35.252 iops : min= 480, max= 544, avg=520.26, stdev=17.88, samples=19 00:38:35.252 lat (msec) : 20=0.31%, 50=99.39%, 100=0.31% 00:38:35.252 cpu : usr=98.49%, sys=1.13%, ctx=21, majf=0, minf=9 00:38:35.252 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:35.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.252 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.252 issued rwts: total=5216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:35.252 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:35.252 filename0: (groupid=0, jobs=1): err= 0: pid=344011: Thu Dec 5 12:22:07 2024 00:38:35.252 read: IOPS=521, BW=2086KiB/s (2136kB/s)(20.4MiB/10003msec) 00:38:35.252 slat (nsec): min=4139, max=80854, avg=33606.69, stdev=19460.97 00:38:35.252 clat (usec): min=16982, max=50104, avg=30326.86, stdev=1355.52 00:38:35.252 lat (usec): min=16990, max=50116, avg=30360.47, stdev=1356.16 00:38:35.252 clat percentiles (usec): 00:38:35.252 | 1.00th=[29754], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:38:35.252 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:38:35.252 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:38:35.252 | 99.00th=[31327], 99.50th=[31589], 99.90th=[50070], 99.95th=[50070], 00:38:35.252 | 99.99th=[50070] 00:38:35.252 bw ( KiB/s): min= 1920, max= 2176, per=4.13%, avg=2081.16, stdev=72.21, samples=19 00:38:35.252 iops : min= 480, max= 544, avg=520.21, stdev=18.10, samples=19 00:38:35.252 lat (msec) : 20=0.31%, 50=99.39%, 100=0.31% 
00:38:35.252 cpu : usr=98.73%, sys=0.88%, ctx=11, majf=0, minf=9 00:38:35.252 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:35.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.252 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.252 issued rwts: total=5216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:35.252 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:35.252 filename0: (groupid=0, jobs=1): err= 0: pid=344012: Thu Dec 5 12:22:07 2024 00:38:35.252 read: IOPS=525, BW=2101KiB/s (2151kB/s)(20.6MiB/10023msec) 00:38:35.252 slat (nsec): min=5393, max=68534, avg=19354.92, stdev=10417.86 00:38:35.252 clat (usec): min=10657, max=32027, avg=30312.77, stdev=1799.56 00:38:35.252 lat (usec): min=10666, max=32041, avg=30332.12, stdev=1799.10 00:38:35.252 clat percentiles (usec): 00:38:35.252 | 1.00th=[19530], 5.00th=[30016], 10.00th=[30278], 20.00th=[30278], 00:38:35.252 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540], 00:38:35.252 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31065], 95.00th=[31065], 00:38:35.252 | 99.00th=[31327], 99.50th=[31589], 99.90th=[31851], 99.95th=[31851], 00:38:35.252 | 99.99th=[32113] 00:38:35.252 bw ( KiB/s): min= 2048, max= 2299, per=4.17%, avg=2098.95, stdev=75.88, samples=20 00:38:35.252 iops : min= 512, max= 574, avg=524.70, stdev=18.87, samples=20 00:38:35.252 lat (msec) : 20=1.22%, 50=98.78% 00:38:35.252 cpu : usr=98.67%, sys=0.94%, ctx=8, majf=0, minf=9 00:38:35.252 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:35.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.252 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.252 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:35.252 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:35.252 filename1: (groupid=0, jobs=1): err= 0: 
pid=344013: Thu Dec 5 12:22:07 2024 00:38:35.252 read: IOPS=522, BW=2090KiB/s (2140kB/s)(20.4MiB/10015msec) 00:38:35.252 slat (nsec): min=10076, max=70071, avg=34641.96, stdev=12092.99 00:38:35.252 clat (usec): min=17352, max=33053, avg=30353.62, stdev=803.36 00:38:35.252 lat (usec): min=17386, max=33080, avg=30388.26, stdev=801.76 00:38:35.252 clat percentiles (usec): 00:38:35.252 | 1.00th=[29754], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:38:35.252 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30540], 00:38:35.252 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:38:35.252 | 99.00th=[31589], 99.50th=[31851], 99.90th=[32900], 99.95th=[32900], 00:38:35.252 | 99.99th=[33162] 00:38:35.252 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2088.42, stdev=61.13, samples=19 00:38:35.252 iops : min= 512, max= 544, avg=522.11, stdev=15.28, samples=19 00:38:35.252 lat (msec) : 20=0.31%, 50=99.69% 00:38:35.252 cpu : usr=98.49%, sys=1.13%, ctx=14, majf=0, minf=9 00:38:35.252 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:35.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.252 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.252 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:35.252 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:35.252 filename1: (groupid=0, jobs=1): err= 0: pid=344014: Thu Dec 5 12:22:07 2024 00:38:35.252 read: IOPS=523, BW=2094KiB/s (2145kB/s)(20.5MiB/10004msec) 00:38:35.252 slat (nsec): min=6648, max=76927, avg=27132.60, stdev=12880.25 00:38:35.252 clat (usec): min=8769, max=39749, avg=30345.15, stdev=1316.65 00:38:35.252 lat (usec): min=8779, max=39768, avg=30372.29, stdev=1317.17 00:38:35.252 clat percentiles (usec): 00:38:35.252 | 1.00th=[25297], 5.00th=[30016], 10.00th=[30016], 20.00th=[30278], 00:38:35.252 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 
60.00th=[30540], 00:38:35.252 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:38:35.252 | 99.00th=[31327], 99.50th=[33162], 99.90th=[38011], 99.95th=[38011], 00:38:35.252 | 99.99th=[39584] 00:38:35.252 bw ( KiB/s): min= 2048, max= 2224, per=4.15%, avg=2090.95, stdev=65.77, samples=19 00:38:35.252 iops : min= 512, max= 556, avg=522.74, stdev=16.44, samples=19 00:38:35.252 lat (msec) : 10=0.11%, 20=0.48%, 50=99.41% 00:38:35.252 cpu : usr=98.63%, sys=1.00%, ctx=8, majf=0, minf=9 00:38:35.252 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:38:35.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.252 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.252 issued rwts: total=5238,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:35.252 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:35.252 filename1: (groupid=0, jobs=1): err= 0: pid=344015: Thu Dec 5 12:22:07 2024 00:38:35.252 read: IOPS=538, BW=2155KiB/s (2207kB/s)(21.1MiB/10010msec) 00:38:35.252 slat (nsec): min=7426, max=77563, avg=15378.53, stdev=9462.91 00:38:35.252 clat (usec): min=2685, max=50586, avg=29565.76, stdev=4502.08 00:38:35.252 lat (usec): min=2699, max=50599, avg=29581.14, stdev=4501.89 00:38:35.252 clat percentiles (usec): 00:38:35.252 | 1.00th=[ 3097], 5.00th=[25560], 10.00th=[30016], 20.00th=[30278], 00:38:35.252 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540], 00:38:35.252 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31065], 95.00th=[31065], 00:38:35.252 | 99.00th=[31327], 99.50th=[32113], 99.90th=[50594], 99.95th=[50594], 00:38:35.252 | 99.99th=[50594] 00:38:35.252 bw ( KiB/s): min= 2043, max= 2944, per=4.28%, avg=2156.37, stdev=211.37, samples=19 00:38:35.252 iops : min= 510, max= 736, avg=539.05, stdev=52.87, samples=19 00:38:35.252 lat (msec) : 4=1.74%, 10=0.32%, 20=2.69%, 50=95.11%, 100=0.15% 00:38:35.252 cpu : usr=98.52%, sys=1.10%, ctx=13, 
majf=0, minf=9 00:38:35.252 IO depths : 1=5.8%, 2=11.8%, 4=24.1%, 8=51.6%, 16=6.7%, 32=0.0%, >=64=0.0% 00:38:35.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.252 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.252 issued rwts: total=5394,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:35.252 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:35.252 filename1: (groupid=0, jobs=1): err= 0: pid=344016: Thu Dec 5 12:22:07 2024 00:38:35.252 read: IOPS=521, BW=2086KiB/s (2136kB/s)(20.4MiB/10004msec) 00:38:35.252 slat (nsec): min=4256, max=70512, avg=37339.18, stdev=11213.98 00:38:35.252 clat (usec): min=17605, max=52685, avg=30339.28, stdev=1459.85 00:38:35.252 lat (usec): min=17653, max=52697, avg=30376.62, stdev=1459.00 00:38:35.252 clat percentiles (usec): 00:38:35.252 | 1.00th=[29492], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:38:35.252 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:38:35.252 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:38:35.252 | 99.00th=[31327], 99.50th=[31589], 99.90th=[52691], 99.95th=[52691], 00:38:35.252 | 99.99th=[52691] 00:38:35.252 bw ( KiB/s): min= 1923, max= 2176, per=4.13%, avg=2081.32, stdev=71.84, samples=19 00:38:35.252 iops : min= 480, max= 544, avg=520.21, stdev=18.10, samples=19 00:38:35.252 lat (msec) : 20=0.31%, 50=99.39%, 100=0.31% 00:38:35.252 cpu : usr=98.53%, sys=1.09%, ctx=9, majf=0, minf=9 00:38:35.252 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:35.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.252 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.252 issued rwts: total=5216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:35.252 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:35.252 filename1: (groupid=0, jobs=1): err= 0: pid=344017: Thu Dec 5 12:22:07 2024 
00:38:35.252 read: IOPS=521, BW=2086KiB/s (2136kB/s)(20.4MiB/10004msec) 00:38:35.252 slat (nsec): min=4175, max=75848, avg=36736.78, stdev=9912.93 00:38:35.252 clat (usec): min=17492, max=52109, avg=30352.06, stdev=1433.34 00:38:35.252 lat (usec): min=17523, max=52124, avg=30388.79, stdev=1432.39 00:38:35.252 clat percentiles (usec): 00:38:35.253 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:38:35.253 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:38:35.253 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:38:35.253 | 99.00th=[31327], 99.50th=[31589], 99.90th=[52167], 99.95th=[52167], 00:38:35.253 | 99.99th=[52167] 00:38:35.253 bw ( KiB/s): min= 1923, max= 2176, per=4.13%, avg=2081.32, stdev=71.84, samples=19 00:38:35.253 iops : min= 480, max= 544, avg=520.21, stdev=18.10, samples=19 00:38:35.253 lat (msec) : 20=0.31%, 50=99.39%, 100=0.31% 00:38:35.253 cpu : usr=98.55%, sys=1.06%, ctx=14, majf=0, minf=9 00:38:35.253 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:35.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.253 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.253 issued rwts: total=5216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:35.253 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:35.253 filename1: (groupid=0, jobs=1): err= 0: pid=344018: Thu Dec 5 12:22:07 2024 00:38:35.253 read: IOPS=521, BW=2086KiB/s (2136kB/s)(20.4MiB/10004msec) 00:38:35.253 slat (nsec): min=4296, max=79619, avg=34976.75, stdev=11812.95 00:38:35.253 clat (usec): min=17565, max=52232, avg=30345.23, stdev=1437.36 00:38:35.253 lat (usec): min=17579, max=52244, avg=30380.21, stdev=1437.02 00:38:35.253 clat percentiles (usec): 00:38:35.253 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:38:35.253 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:38:35.253 | 
70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:38:35.253 | 99.00th=[31327], 99.50th=[31589], 99.90th=[52167], 99.95th=[52167], 00:38:35.253 | 99.99th=[52167] 00:38:35.253 bw ( KiB/s): min= 1923, max= 2176, per=4.13%, avg=2081.32, stdev=71.84, samples=19 00:38:35.253 iops : min= 480, max= 544, avg=520.21, stdev=18.10, samples=19 00:38:35.253 lat (msec) : 20=0.31%, 50=99.39%, 100=0.31% 00:38:35.253 cpu : usr=98.34%, sys=1.27%, ctx=11, majf=0, minf=9 00:38:35.253 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:35.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.253 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.253 issued rwts: total=5216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:35.253 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:35.253 filename1: (groupid=0, jobs=1): err= 0: pid=344019: Thu Dec 5 12:22:07 2024 00:38:35.253 read: IOPS=525, BW=2101KiB/s (2151kB/s)(20.6MiB/10024msec) 00:38:35.253 slat (nsec): min=7913, max=70355, avg=28753.54, stdev=12532.40 00:38:35.253 clat (usec): min=9221, max=32063, avg=30245.09, stdev=1785.63 00:38:35.253 lat (usec): min=9238, max=32090, avg=30273.84, stdev=1786.18 00:38:35.253 clat percentiles (usec): 00:38:35.253 | 1.00th=[19530], 5.00th=[30016], 10.00th=[30016], 20.00th=[30278], 00:38:35.253 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30540], 00:38:35.253 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:38:35.253 | 99.00th=[31327], 99.50th=[31589], 99.90th=[31851], 99.95th=[32113], 00:38:35.253 | 99.99th=[32113] 00:38:35.253 bw ( KiB/s): min= 2048, max= 2299, per=4.17%, avg=2098.95, stdev=75.88, samples=20 00:38:35.253 iops : min= 512, max= 574, avg=524.70, stdev=18.87, samples=20 00:38:35.253 lat (msec) : 10=0.13%, 20=1.04%, 50=98.82% 00:38:35.253 cpu : usr=98.47%, sys=1.14%, ctx=12, majf=0, minf=9 00:38:35.253 IO depths : 1=6.2%, 2=12.4%, 
4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:35.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.253 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.253 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:35.253 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:35.253 filename1: (groupid=0, jobs=1): err= 0: pid=344020: Thu Dec 5 12:22:07 2024 00:38:35.253 read: IOPS=521, BW=2086KiB/s (2136kB/s)(20.4MiB/10003msec) 00:38:35.253 slat (nsec): min=4184, max=70792, avg=36639.10, stdev=11504.12 00:38:35.253 clat (usec): min=17545, max=51669, avg=30336.26, stdev=1412.07 00:38:35.253 lat (usec): min=17574, max=51681, avg=30372.90, stdev=1411.50 00:38:35.253 clat percentiles (usec): 00:38:35.253 | 1.00th=[29492], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:38:35.253 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:38:35.253 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:38:35.253 | 99.00th=[31327], 99.50th=[31589], 99.90th=[51643], 99.95th=[51643], 00:38:35.253 | 99.99th=[51643] 00:38:35.253 bw ( KiB/s): min= 1920, max= 2176, per=4.13%, avg=2081.37, stdev=71.62, samples=19 00:38:35.253 iops : min= 480, max= 544, avg=520.26, stdev=17.88, samples=19 00:38:35.253 lat (msec) : 20=0.31%, 50=99.39%, 100=0.31% 00:38:35.253 cpu : usr=98.55%, sys=1.07%, ctx=18, majf=0, minf=9 00:38:35.253 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:35.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.253 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.253 issued rwts: total=5216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:35.253 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:35.253 filename2: (groupid=0, jobs=1): err= 0: pid=344021: Thu Dec 5 12:22:07 2024 00:38:35.253 read: IOPS=522, BW=2090KiB/s 
(2140kB/s)(20.4MiB/10006msec) 00:38:35.253 slat (nsec): min=4215, max=77138, avg=34622.09, stdev=13174.72 00:38:35.253 clat (usec): min=15685, max=58479, avg=30318.76, stdev=2148.33 00:38:35.253 lat (usec): min=15694, max=58493, avg=30353.39, stdev=2148.32 00:38:35.253 clat percentiles (usec): 00:38:35.253 | 1.00th=[21365], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:38:35.253 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:38:35.253 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:38:35.253 | 99.00th=[39060], 99.50th=[39584], 99.90th=[50070], 99.95th=[50070], 00:38:35.253 | 99.99th=[58459] 00:38:35.253 bw ( KiB/s): min= 1968, max= 2176, per=4.14%, avg=2086.21, stdev=65.23, samples=19 00:38:35.253 iops : min= 492, max= 544, avg=521.47, stdev=16.35, samples=19 00:38:35.253 lat (msec) : 20=0.46%, 50=99.48%, 100=0.06% 00:38:35.253 cpu : usr=98.52%, sys=1.09%, ctx=14, majf=0, minf=10 00:38:35.253 IO depths : 1=3.2%, 2=9.1%, 4=23.9%, 8=54.2%, 16=9.5%, 32=0.0%, >=64=0.0% 00:38:35.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.253 complete : 0=0.0%, 4=94.0%, 8=0.5%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.253 issued rwts: total=5228,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:35.253 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:35.253 filename2: (groupid=0, jobs=1): err= 0: pid=344022: Thu Dec 5 12:22:07 2024 00:38:35.253 read: IOPS=521, BW=2086KiB/s (2136kB/s)(20.4MiB/10003msec) 00:38:35.253 slat (nsec): min=6997, max=80833, avg=34884.92, stdev=19133.67 00:38:35.253 clat (usec): min=27094, max=39847, avg=30325.78, stdev=559.15 00:38:35.253 lat (usec): min=27107, max=39863, avg=30360.67, stdev=560.73 00:38:35.253 clat percentiles (usec): 00:38:35.253 | 1.00th=[29754], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:38:35.253 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:38:35.253 | 70.00th=[30540], 80.00th=[30540], 
90.00th=[30802], 95.00th=[31065], 00:38:35.253 | 99.00th=[31327], 99.50th=[31589], 99.90th=[38011], 99.95th=[38011], 00:38:35.253 | 99.99th=[40109] 00:38:35.253 bw ( KiB/s): min= 2048, max= 2176, per=4.13%, avg=2081.68, stdev=57.91, samples=19 00:38:35.253 iops : min= 512, max= 544, avg=520.42, stdev=14.48, samples=19 00:38:35.253 lat (msec) : 50=100.00% 00:38:35.253 cpu : usr=98.55%, sys=1.06%, ctx=21, majf=0, minf=9 00:38:35.253 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:35.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.253 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.253 issued rwts: total=5216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:35.253 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:35.253 filename2: (groupid=0, jobs=1): err= 0: pid=344023: Thu Dec 5 12:22:07 2024 00:38:35.253 read: IOPS=522, BW=2090KiB/s (2140kB/s)(20.4MiB/10013msec) 00:38:35.253 slat (nsec): min=6568, max=72420, avg=37768.53, stdev=11113.73 00:38:35.253 clat (usec): min=17418, max=32049, avg=30287.28, stdev=785.29 00:38:35.253 lat (usec): min=17449, max=32073, avg=30325.05, stdev=785.53 00:38:35.253 clat percentiles (usec): 00:38:35.253 | 1.00th=[29492], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:38:35.253 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:38:35.253 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:38:35.253 | 99.00th=[31327], 99.50th=[31589], 99.90th=[31851], 99.95th=[31851], 00:38:35.253 | 99.99th=[32113] 00:38:35.253 bw ( KiB/s): min= 2043, max= 2176, per=4.15%, avg=2088.16, stdev=61.32, samples=19 00:38:35.253 iops : min= 510, max= 544, avg=522.00, stdev=15.36, samples=19 00:38:35.253 lat (msec) : 20=0.31%, 50=99.69% 00:38:35.253 cpu : usr=98.67%, sys=0.94%, ctx=14, majf=0, minf=9 00:38:35.253 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:35.253 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.253 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.253 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:35.253 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:35.253 filename2: (groupid=0, jobs=1): err= 0: pid=344024: Thu Dec 5 12:22:07 2024 00:38:35.253 read: IOPS=522, BW=2090KiB/s (2140kB/s)(20.4MiB/10014msec) 00:38:35.253 slat (nsec): min=4901, max=79571, avg=34788.37, stdev=19035.73 00:38:35.253 clat (usec): min=16964, max=34271, avg=30265.46, stdev=817.16 00:38:35.253 lat (usec): min=16978, max=34287, avg=30300.25, stdev=819.89 00:38:35.253 clat percentiles (usec): 00:38:35.253 | 1.00th=[29754], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:38:35.253 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:38:35.253 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:38:35.253 | 99.00th=[31327], 99.50th=[31589], 99.90th=[32375], 99.95th=[32375], 00:38:35.253 | 99.99th=[34341] 00:38:35.253 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2088.42, stdev=61.13, samples=19 00:38:35.253 iops : min= 512, max= 544, avg=522.11, stdev=15.28, samples=19 00:38:35.253 lat (msec) : 20=0.31%, 50=99.69% 00:38:35.253 cpu : usr=98.69%, sys=0.92%, ctx=14, majf=0, minf=9 00:38:35.253 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:35.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.253 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.254 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:35.254 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:35.254 filename2: (groupid=0, jobs=1): err= 0: pid=344026: Thu Dec 5 12:22:07 2024 00:38:35.254 read: IOPS=525, BW=2101KiB/s (2151kB/s)(20.6MiB/10022msec) 00:38:35.254 slat (nsec): min=7883, max=80022, avg=33059.30, 
stdev=18200.63 00:38:35.254 clat (usec): min=10742, max=33160, avg=30190.38, stdev=1799.81 00:38:35.254 lat (usec): min=10756, max=33221, avg=30223.44, stdev=1800.30 00:38:35.254 clat percentiles (usec): 00:38:35.254 | 1.00th=[19268], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:38:35.254 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30540], 00:38:35.254 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:38:35.254 | 99.00th=[31589], 99.50th=[31589], 99.90th=[32113], 99.95th=[32113], 00:38:35.254 | 99.99th=[33162] 00:38:35.254 bw ( KiB/s): min= 2048, max= 2299, per=4.17%, avg=2098.95, stdev=75.88, samples=20 00:38:35.254 iops : min= 512, max= 574, avg=524.70, stdev=18.87, samples=20 00:38:35.254 lat (msec) : 20=1.22%, 50=98.78% 00:38:35.254 cpu : usr=98.67%, sys=0.93%, ctx=12, majf=0, minf=9 00:38:35.254 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:35.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.254 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.254 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:35.254 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:35.254 filename2: (groupid=0, jobs=1): err= 0: pid=344027: Thu Dec 5 12:22:07 2024 00:38:35.254 read: IOPS=521, BW=2085KiB/s (2135kB/s)(20.4MiB/10003msec) 00:38:35.254 slat (usec): min=4, max=101, avg=33.89, stdev=13.65 00:38:35.254 clat (usec): min=17542, max=59957, avg=30375.31, stdev=2254.93 00:38:35.254 lat (usec): min=17559, max=59970, avg=30409.21, stdev=2253.72 00:38:35.254 clat percentiles (usec): 00:38:35.254 | 1.00th=[26346], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:38:35.254 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:38:35.254 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:38:35.254 | 99.00th=[33817], 99.50th=[42206], 99.90th=[60031], 99.95th=[60031], 
00:38:35.254 | 99.99th=[60031] 00:38:35.254 bw ( KiB/s): min= 1840, max= 2176, per=4.13%, avg=2080.53, stdev=83.12, samples=19 00:38:35.254 iops : min= 460, max= 544, avg=520.05, stdev=20.75, samples=19 00:38:35.254 lat (msec) : 20=0.48%, 50=99.06%, 100=0.46% 00:38:35.254 cpu : usr=98.40%, sys=1.22%, ctx=9, majf=0, minf=9 00:38:35.254 IO depths : 1=5.8%, 2=11.6%, 4=23.6%, 8=52.1%, 16=7.0%, 32=0.0%, >=64=0.0% 00:38:35.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.254 complete : 0=0.0%, 4=93.8%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.254 issued rwts: total=5214,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:35.254 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:35.254 filename2: (groupid=0, jobs=1): err= 0: pid=344028: Thu Dec 5 12:22:07 2024 00:38:35.254 read: IOPS=521, BW=2086KiB/s (2136kB/s)(20.4MiB/10004msec) 00:38:35.254 slat (nsec): min=3644, max=76591, avg=37342.97, stdev=10420.54 00:38:35.254 clat (usec): min=17607, max=52320, avg=30353.89, stdev=1442.09 00:38:35.254 lat (usec): min=17646, max=52331, avg=30391.23, stdev=1440.91 00:38:35.254 clat percentiles (usec): 00:38:35.254 | 1.00th=[29492], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:38:35.254 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:38:35.254 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:38:35.254 | 99.00th=[31327], 99.50th=[31589], 99.90th=[52167], 99.95th=[52167], 00:38:35.254 | 99.99th=[52167] 00:38:35.254 bw ( KiB/s): min= 1923, max= 2176, per=4.13%, avg=2081.32, stdev=71.84, samples=19 00:38:35.254 iops : min= 480, max= 544, avg=520.21, stdev=18.10, samples=19 00:38:35.254 lat (msec) : 20=0.31%, 50=99.39%, 100=0.31% 00:38:35.254 cpu : usr=98.35%, sys=1.26%, ctx=12, majf=0, minf=9 00:38:35.254 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:35.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.254 
complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.254 issued rwts: total=5216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:35.254 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:35.254 filename2: (groupid=0, jobs=1): err= 0: pid=344029: Thu Dec 5 12:22:07 2024 00:38:35.254 read: IOPS=525, BW=2101KiB/s (2151kB/s)(20.6MiB/10023msec) 00:38:35.254 slat (nsec): min=7489, max=68995, avg=23058.35, stdev=10604.27 00:38:35.254 clat (usec): min=10747, max=32005, avg=30282.91, stdev=1770.50 00:38:35.254 lat (usec): min=10762, max=32034, avg=30305.97, stdev=1770.58 00:38:35.254 clat percentiles (usec): 00:38:35.254 | 1.00th=[19530], 5.00th=[30016], 10.00th=[30016], 20.00th=[30278], 00:38:35.254 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540], 00:38:35.254 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:38:35.254 | 99.00th=[31327], 99.50th=[31589], 99.90th=[31851], 99.95th=[31851], 00:38:35.254 | 99.99th=[32113] 00:38:35.254 bw ( KiB/s): min= 2048, max= 2299, per=4.17%, avg=2098.95, stdev=75.88, samples=20 00:38:35.254 iops : min= 512, max= 574, avg=524.70, stdev=18.87, samples=20 00:38:35.254 lat (msec) : 20=1.14%, 50=98.86% 00:38:35.254 cpu : usr=98.67%, sys=0.90%, ctx=13, majf=0, minf=9 00:38:35.254 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:35.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.254 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.254 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:35.254 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:35.254 00:38:35.254 Run status group 0 (all jobs): 00:38:35.254 READ: bw=49.2MiB/s (51.5MB/s), 2085KiB/s-2200KiB/s (2135kB/s-2253kB/s), io=493MiB (517MB), run=10003-10026msec 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:38:35.254 12:22:08 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd 
bdev_null_delete bdev_null1 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:38:35.254 
12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.254 bdev_null0 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.254 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.255 [2024-12-05 12:22:08.330273] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.255 bdev_null1 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # config=() 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # local subsystem config 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:38:35.255 { 00:38:35.255 "params": { 00:38:35.255 "name": "Nvme$subsystem", 00:38:35.255 "trtype": "$TEST_TRANSPORT", 00:38:35.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:35.255 "adrfam": "ipv4", 00:38:35.255 "trsvcid": 
"$NVMF_PORT", 00:38:35.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:35.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:35.255 "hdgst": ${hdgst:-false}, 00:38:35.255 "ddgst": ${ddgst:-false} 00:38:35.255 }, 00:38:35.255 "method": "bdev_nvme_attach_controller" 00:38:35.255 } 00:38:35.255 EOF 00:38:35.255 )") 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:38:35.255 12:22:08 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:38:35.255 { 00:38:35.255 "params": { 00:38:35.255 "name": "Nvme$subsystem", 00:38:35.255 "trtype": "$TEST_TRANSPORT", 00:38:35.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:35.255 "adrfam": "ipv4", 00:38:35.255 "trsvcid": "$NVMF_PORT", 00:38:35.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:35.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:35.255 "hdgst": ${hdgst:-false}, 00:38:35.255 "ddgst": ${ddgst:-false} 00:38:35.255 }, 00:38:35.255 "method": "bdev_nvme_attach_controller" 00:38:35.255 } 00:38:35.255 EOF 00:38:35.255 )") 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@396 -- # jq . 
00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@397 -- # IFS=, 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:38:35.255 "params": { 00:38:35.255 "name": "Nvme0", 00:38:35.255 "trtype": "tcp", 00:38:35.255 "traddr": "10.0.0.2", 00:38:35.255 "adrfam": "ipv4", 00:38:35.255 "trsvcid": "4420", 00:38:35.255 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:35.255 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:35.255 "hdgst": false, 00:38:35.255 "ddgst": false 00:38:35.255 }, 00:38:35.255 "method": "bdev_nvme_attach_controller" 00:38:35.255 },{ 00:38:35.255 "params": { 00:38:35.255 "name": "Nvme1", 00:38:35.255 "trtype": "tcp", 00:38:35.255 "traddr": "10.0.0.2", 00:38:35.255 "adrfam": "ipv4", 00:38:35.255 "trsvcid": "4420", 00:38:35.255 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:35.255 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:35.255 "hdgst": false, 00:38:35.255 "ddgst": false 00:38:35.255 }, 00:38:35.255 "method": "bdev_nvme_attach_controller" 00:38:35.255 }' 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:35.255 12:22:08 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:35.255 12:22:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:35.255 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:38:35.255 ... 00:38:35.255 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:38:35.255 ... 00:38:35.255 fio-3.35 00:38:35.255 Starting 4 threads 00:38:40.525 00:38:40.525 filename0: (groupid=0, jobs=1): err= 0: pid=345969: Thu Dec 5 12:22:14 2024 00:38:40.525 read: IOPS=2758, BW=21.6MiB/s (22.6MB/s)(108MiB/5003msec) 00:38:40.525 slat (nsec): min=6094, max=39111, avg=8504.68, stdev=2848.10 00:38:40.525 clat (usec): min=926, max=5561, avg=2874.95, stdev=391.13 00:38:40.525 lat (usec): min=937, max=5571, avg=2883.46, stdev=391.00 00:38:40.525 clat percentiles (usec): 00:38:40.525 | 1.00th=[ 1893], 5.00th=[ 2245], 10.00th=[ 2409], 20.00th=[ 2573], 00:38:40.525 | 30.00th=[ 2704], 40.00th=[ 2802], 50.00th=[ 2933], 60.00th=[ 2966], 00:38:40.525 | 70.00th=[ 2999], 80.00th=[ 3064], 90.00th=[ 3261], 95.00th=[ 3490], 00:38:40.525 | 99.00th=[ 4080], 99.50th=[ 4359], 99.90th=[ 4883], 99.95th=[ 4948], 00:38:40.525 | 99.99th=[ 5407] 00:38:40.525 bw ( KiB/s): min=21232, max=22944, per=26.03%, avg=22073.60, stdev=537.12, samples=10 00:38:40.525 iops : min= 2654, max= 2868, avg=2759.20, stdev=67.14, samples=10 00:38:40.525 lat (usec) : 1000=0.01% 00:38:40.525 lat (msec) : 2=1.30%, 4=97.43%, 10=1.26% 00:38:40.525 cpu : usr=96.06%, sys=3.62%, ctx=7, majf=0, minf=0 00:38:40.525 IO depths : 1=0.2%, 2=3.7%, 4=67.9%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:40.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:40.525 complete : 0=0.0%, 
4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:40.525 issued rwts: total=13801,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:40.525 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:40.525 filename0: (groupid=0, jobs=1): err= 0: pid=345970: Thu Dec 5 12:22:14 2024 00:38:40.525 read: IOPS=2663, BW=20.8MiB/s (21.8MB/s)(104MiB/5001msec) 00:38:40.525 slat (nsec): min=6107, max=28399, avg=8613.33, stdev=2847.99 00:38:40.525 clat (usec): min=519, max=5804, avg=2978.38, stdev=449.80 00:38:40.525 lat (usec): min=525, max=5830, avg=2987.00, stdev=449.76 00:38:40.525 clat percentiles (usec): 00:38:40.525 | 1.00th=[ 1991], 5.00th=[ 2311], 10.00th=[ 2507], 20.00th=[ 2704], 00:38:40.525 | 30.00th=[ 2802], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 2999], 00:38:40.525 | 70.00th=[ 3064], 80.00th=[ 3228], 90.00th=[ 3458], 95.00th=[ 3785], 00:38:40.525 | 99.00th=[ 4555], 99.50th=[ 4817], 99.90th=[ 5276], 99.95th=[ 5538], 00:38:40.525 | 99.99th=[ 5604] 00:38:40.525 bw ( KiB/s): min=21002, max=21760, per=25.17%, avg=21343.33, stdev=218.20, samples=9 00:38:40.525 iops : min= 2625, max= 2720, avg=2667.89, stdev=27.32, samples=9 00:38:40.525 lat (usec) : 750=0.02%, 1000=0.02% 00:38:40.525 lat (msec) : 2=1.04%, 4=95.46%, 10=3.45% 00:38:40.525 cpu : usr=95.82%, sys=3.84%, ctx=7, majf=0, minf=0 00:38:40.525 IO depths : 1=0.1%, 2=4.1%, 4=66.9%, 8=28.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:40.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:40.525 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:40.525 issued rwts: total=13320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:40.525 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:40.525 filename1: (groupid=0, jobs=1): err= 0: pid=345971: Thu Dec 5 12:22:14 2024 00:38:40.525 read: IOPS=2605, BW=20.4MiB/s (21.3MB/s)(102MiB/5002msec) 00:38:40.525 slat (nsec): min=6106, max=37444, avg=8696.93, stdev=3034.61 00:38:40.525 clat (usec): min=533, max=5787, 
avg=3044.25, stdev=438.74 00:38:40.525 lat (usec): min=545, max=5794, avg=3052.94, stdev=438.62 00:38:40.525 clat percentiles (usec): 00:38:40.525 | 1.00th=[ 2089], 5.00th=[ 2409], 10.00th=[ 2573], 20.00th=[ 2802], 00:38:40.525 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3032], 00:38:40.525 | 70.00th=[ 3097], 80.00th=[ 3261], 90.00th=[ 3523], 95.00th=[ 3884], 00:38:40.525 | 99.00th=[ 4555], 99.50th=[ 4883], 99.90th=[ 5211], 99.95th=[ 5407], 00:38:40.525 | 99.99th=[ 5800] 00:38:40.525 bw ( KiB/s): min=19616, max=22000, per=24.59%, avg=20853.80, stdev=633.51, samples=10 00:38:40.525 iops : min= 2452, max= 2750, avg=2606.70, stdev=79.18, samples=10 00:38:40.525 lat (usec) : 750=0.02%, 1000=0.01% 00:38:40.525 lat (msec) : 2=0.60%, 4=95.44%, 10=3.93% 00:38:40.525 cpu : usr=95.68%, sys=4.00%, ctx=7, majf=0, minf=0 00:38:40.525 IO depths : 1=0.2%, 2=2.5%, 4=69.7%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:40.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:40.525 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:40.525 issued rwts: total=13034,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:40.526 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:40.526 filename1: (groupid=0, jobs=1): err= 0: pid=345972: Thu Dec 5 12:22:14 2024 00:38:40.526 read: IOPS=2574, BW=20.1MiB/s (21.1MB/s)(101MiB/5001msec) 00:38:40.526 slat (nsec): min=6130, max=37288, avg=8511.75, stdev=2893.43 00:38:40.526 clat (usec): min=635, max=5525, avg=3082.77, stdev=454.36 00:38:40.526 lat (usec): min=642, max=5531, avg=3091.28, stdev=454.19 00:38:40.526 clat percentiles (usec): 00:38:40.526 | 1.00th=[ 2147], 5.00th=[ 2474], 10.00th=[ 2671], 20.00th=[ 2835], 00:38:40.526 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3032], 00:38:40.526 | 70.00th=[ 3130], 80.00th=[ 3294], 90.00th=[ 3589], 95.00th=[ 4047], 00:38:40.526 | 99.00th=[ 4752], 99.50th=[ 5014], 99.90th=[ 5211], 99.95th=[ 5211], 00:38:40.526 | 
99.99th=[ 5538] 00:38:40.526 bw ( KiB/s): min=20048, max=20976, per=24.29%, avg=20595.56, stdev=325.82, samples=9 00:38:40.526 iops : min= 2506, max= 2622, avg=2574.44, stdev=40.73, samples=9 00:38:40.526 lat (usec) : 750=0.02%, 1000=0.01% 00:38:40.526 lat (msec) : 2=0.36%, 4=94.24%, 10=5.37% 00:38:40.526 cpu : usr=96.06%, sys=3.62%, ctx=6, majf=0, minf=0 00:38:40.526 IO depths : 1=0.1%, 2=2.9%, 4=69.3%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:40.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:40.526 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:40.526 issued rwts: total=12874,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:40.526 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:40.526 00:38:40.526 Run status group 0 (all jobs): 00:38:40.526 READ: bw=82.8MiB/s (86.8MB/s), 20.1MiB/s-21.6MiB/s (21.1MB/s-22.6MB/s), io=414MiB (434MB), run=5001-5003msec 00:38:40.526 12:22:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:38:40.526 12:22:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:40.526 12:22:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:40.526 12:22:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:40.526 12:22:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:40.526 12:22:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:40.526 12:22:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:40.526 12:22:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:40.526 12:22:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:40.526 12:22:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:40.526 12:22:14 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:40.526 12:22:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:40.526 12:22:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:40.526 12:22:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:40.526 12:22:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:40.526 12:22:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:38:40.526 12:22:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:40.526 12:22:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:40.526 12:22:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:40.526 12:22:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:40.785 12:22:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:40.785 12:22:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:40.785 12:22:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:40.785 12:22:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:40.785 00:38:40.785 real 0m24.527s 00:38:40.785 user 4m52.918s 00:38:40.785 sys 0m5.243s 00:38:40.785 12:22:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:40.785 12:22:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:40.785 ************************************ 00:38:40.785 END TEST fio_dif_rand_params 00:38:40.785 ************************************ 00:38:40.785 12:22:14 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:38:40.785 12:22:14 nvmf_dif -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:40.785 12:22:14 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:40.785 12:22:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:40.785 ************************************ 00:38:40.785 START TEST fio_dif_digest 00:38:40.785 ************************************ 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 
00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:40.786 bdev_null0 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:40.786 [2024-12-05 12:22:14.838306] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest 
-- nvmf/common.sh@372 -- # config=() 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@372 -- # local subsystem config 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:38:40.786 { 00:38:40.786 "params": { 00:38:40.786 "name": "Nvme$subsystem", 00:38:40.786 "trtype": "$TEST_TRANSPORT", 00:38:40.786 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:40.786 "adrfam": "ipv4", 00:38:40.786 "trsvcid": "$NVMF_PORT", 00:38:40.786 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:40.786 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:40.786 "hdgst": ${hdgst:-false}, 00:38:40.786 "ddgst": ${ddgst:-false} 00:38:40.786 }, 00:38:40.786 "method": "bdev_nvme_attach_controller" 00:38:40.786 } 00:38:40.786 EOF 00:38:40.786 )") 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@394 -- # cat 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@396 -- # jq . 
00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@397 -- # IFS=, 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:38:40.786 "params": { 00:38:40.786 "name": "Nvme0", 00:38:40.786 "trtype": "tcp", 00:38:40.786 "traddr": "10.0.0.2", 00:38:40.786 "adrfam": "ipv4", 00:38:40.786 "trsvcid": "4420", 00:38:40.786 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:40.786 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:40.786 "hdgst": true, 00:38:40.786 "ddgst": true 00:38:40.786 }, 00:38:40.786 "method": "bdev_nvme_attach_controller" 00:38:40.786 }' 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:40.786 12:22:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:41.056 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:38:41.056 ... 
00:38:41.056 fio-3.35 00:38:41.056 Starting 3 threads 00:38:53.266 00:38:53.266 filename0: (groupid=0, jobs=1): err= 0: pid=347241: Thu Dec 5 12:22:25 2024 00:38:53.266 read: IOPS=289, BW=36.2MiB/s (37.9MB/s)(363MiB/10044msec) 00:38:53.266 slat (nsec): min=6374, max=32220, avg=11346.70, stdev=2056.93 00:38:53.266 clat (usec): min=8175, max=51243, avg=10344.53, stdev=1233.58 00:38:53.266 lat (usec): min=8185, max=51255, avg=10355.87, stdev=1233.53 00:38:53.266 clat percentiles (usec): 00:38:53.266 | 1.00th=[ 8717], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9765], 00:38:53.266 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10421], 60.00th=[10552], 00:38:53.266 | 70.00th=[10683], 80.00th=[10814], 90.00th=[11207], 95.00th=[11338], 00:38:53.266 | 99.00th=[11994], 99.50th=[12125], 99.90th=[13042], 99.95th=[47449], 00:38:53.266 | 99.99th=[51119] 00:38:53.266 bw ( KiB/s): min=36608, max=37888, per=35.33%, avg=37158.40, stdev=355.06, samples=20 00:38:53.266 iops : min= 286, max= 296, avg=290.30, stdev= 2.77, samples=20 00:38:53.266 lat (msec) : 10=30.81%, 20=69.12%, 50=0.03%, 100=0.03% 00:38:53.266 cpu : usr=94.31%, sys=5.28%, ctx=23, majf=0, minf=117 00:38:53.266 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:53.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:53.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:53.266 issued rwts: total=2905,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:53.266 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:53.266 filename0: (groupid=0, jobs=1): err= 0: pid=347242: Thu Dec 5 12:22:25 2024 00:38:53.266 read: IOPS=265, BW=33.2MiB/s (34.8MB/s)(333MiB/10044msec) 00:38:53.266 slat (nsec): min=6428, max=44785, avg=11555.07, stdev=2013.74 00:38:53.266 clat (usec): min=8744, max=46339, avg=11273.41, stdev=1202.96 00:38:53.266 lat (usec): min=8756, max=46353, avg=11284.97, stdev=1202.93 00:38:53.266 clat percentiles (usec): 00:38:53.266 | 
1.00th=[ 9372], 5.00th=[10028], 10.00th=[10290], 20.00th=[10683], 00:38:53.266 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11207], 60.00th=[11469], 00:38:53.266 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12256], 95.00th=[12518], 00:38:53.266 | 99.00th=[13173], 99.50th=[13435], 99.90th=[13960], 99.95th=[44827], 00:38:53.266 | 99.99th=[46400] 00:38:53.266 bw ( KiB/s): min=33024, max=34816, per=32.42%, avg=34099.20, stdev=488.56, samples=20 00:38:53.266 iops : min= 258, max= 272, avg=266.40, stdev= 3.82, samples=20 00:38:53.266 lat (msec) : 10=4.99%, 20=94.94%, 50=0.08% 00:38:53.266 cpu : usr=94.33%, sys=5.35%, ctx=17, majf=0, minf=86 00:38:53.266 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:53.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:53.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:53.266 issued rwts: total=2666,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:53.266 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:53.266 filename0: (groupid=0, jobs=1): err= 0: pid=347243: Thu Dec 5 12:22:25 2024 00:38:53.266 read: IOPS=267, BW=33.4MiB/s (35.0MB/s)(335MiB/10043msec) 00:38:53.266 slat (nsec): min=6322, max=28637, avg=11075.05, stdev=2281.85 00:38:53.266 clat (usec): min=8137, max=47714, avg=11201.26, stdev=1262.97 00:38:53.266 lat (usec): min=8150, max=47722, avg=11212.34, stdev=1262.91 00:38:53.266 clat percentiles (usec): 00:38:53.266 | 1.00th=[ 9372], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10552], 00:38:53.266 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11076], 60.00th=[11338], 00:38:53.266 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12256], 95.00th=[12518], 00:38:53.266 | 99.00th=[13173], 99.50th=[13435], 99.90th=[14353], 99.95th=[46924], 00:38:53.266 | 99.99th=[47973] 00:38:53.266 bw ( KiB/s): min=33536, max=34816, per=32.62%, avg=34316.80, stdev=419.21, samples=20 00:38:53.266 iops : min= 262, max= 272, avg=268.10, stdev= 3.28, 
samples=20 00:38:53.266 lat (msec) : 10=5.59%, 20=94.33%, 50=0.07% 00:38:53.266 cpu : usr=95.00%, sys=4.66%, ctx=14, majf=0, minf=80 00:38:53.266 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:53.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:53.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:53.266 issued rwts: total=2683,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:53.266 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:53.266 00:38:53.266 Run status group 0 (all jobs): 00:38:53.266 READ: bw=103MiB/s (108MB/s), 33.2MiB/s-36.2MiB/s (34.8MB/s-37.9MB/s), io=1032MiB (1082MB), run=10043-10044msec 00:38:53.266 12:22:26 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:38:53.266 12:22:26 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:38:53.266 12:22:26 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:38:53.266 12:22:26 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:53.266 12:22:26 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:38:53.266 12:22:26 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:53.266 12:22:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:53.266 12:22:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:53.266 12:22:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:53.266 12:22:26 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:53.266 12:22:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:53.266 12:22:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:53.266 12:22:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:53.266 00:38:53.266 real 
0m11.358s 00:38:53.266 user 0m35.534s 00:38:53.266 sys 0m1.898s 00:38:53.266 12:22:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:53.266 12:22:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:53.267 ************************************ 00:38:53.267 END TEST fio_dif_digest 00:38:53.267 ************************************ 00:38:53.267 12:22:26 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:38:53.267 12:22:26 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:38:53.267 12:22:26 nvmf_dif -- nvmf/common.sh@335 -- # nvmfcleanup 00:38:53.267 12:22:26 nvmf_dif -- nvmf/common.sh@99 -- # sync 00:38:53.267 12:22:26 nvmf_dif -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:38:53.267 12:22:26 nvmf_dif -- nvmf/common.sh@102 -- # set +e 00:38:53.267 12:22:26 nvmf_dif -- nvmf/common.sh@103 -- # for i in {1..20} 00:38:53.267 12:22:26 nvmf_dif -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:38:53.267 rmmod nvme_tcp 00:38:53.267 rmmod nvme_fabrics 00:38:53.267 rmmod nvme_keyring 00:38:53.267 12:22:26 nvmf_dif -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:38:53.267 12:22:26 nvmf_dif -- nvmf/common.sh@106 -- # set -e 00:38:53.267 12:22:26 nvmf_dif -- nvmf/common.sh@107 -- # return 0 00:38:53.267 12:22:26 nvmf_dif -- nvmf/common.sh@336 -- # '[' -n 338647 ']' 00:38:53.267 12:22:26 nvmf_dif -- nvmf/common.sh@337 -- # killprocess 338647 00:38:53.267 12:22:26 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 338647 ']' 00:38:53.267 12:22:26 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 338647 00:38:53.267 12:22:26 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:38:53.267 12:22:26 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:53.267 12:22:26 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 338647 00:38:53.267 12:22:26 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:53.267 12:22:26 nvmf_dif 
-- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:53.267 12:22:26 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 338647' 00:38:53.267 killing process with pid 338647 00:38:53.267 12:22:26 nvmf_dif -- common/autotest_common.sh@973 -- # kill 338647 00:38:53.267 12:22:26 nvmf_dif -- common/autotest_common.sh@978 -- # wait 338647 00:38:53.267 12:22:26 nvmf_dif -- nvmf/common.sh@339 -- # '[' iso == iso ']' 00:38:53.267 12:22:26 nvmf_dif -- nvmf/common.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:55.174 Waiting for block devices as requested 00:38:55.174 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:38:55.174 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:55.433 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:55.433 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:55.433 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:55.433 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:55.693 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:55.693 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:55.693 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:55.953 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:55.953 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:55.953 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:56.212 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:56.212 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:56.212 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:56.212 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:56.472 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:56.472 12:22:30 nvmf_dif -- nvmf/common.sh@342 -- # nvmf_fini 00:38:56.472 12:22:30 nvmf_dif -- nvmf/setup.sh@264 -- # local dev 00:38:56.472 12:22:30 nvmf_dif -- nvmf/setup.sh@267 -- # remove_target_ns 00:38:56.472 12:22:30 nvmf_dif -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:38:56.472 12:22:30 nvmf_dif -- 
common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:38:56.472 12:22:30 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_target_ns 00:38:59.008 12:22:32 nvmf_dif -- nvmf/setup.sh@268 -- # delete_main_bridge 00:38:59.008 12:22:32 nvmf_dif -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:38:59.008 12:22:32 nvmf_dif -- nvmf/setup.sh@130 -- # return 0 00:38:59.008 12:22:32 nvmf_dif -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:38:59.008 12:22:32 nvmf_dif -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:38:59.008 12:22:32 nvmf_dif -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:38:59.008 12:22:32 nvmf_dif -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:38:59.008 12:22:32 nvmf_dif -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:38:59.008 12:22:32 nvmf_dif -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:38:59.008 12:22:32 nvmf_dif -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:38:59.008 12:22:32 nvmf_dif -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:38:59.008 12:22:32 nvmf_dif -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:38:59.008 12:22:32 nvmf_dif -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:38:59.008 12:22:32 nvmf_dif -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:38:59.008 12:22:32 nvmf_dif -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:38:59.008 12:22:32 nvmf_dif -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:38:59.008 12:22:32 nvmf_dif -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:38:59.008 12:22:32 nvmf_dif -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:38:59.008 12:22:32 nvmf_dif -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:38:59.008 12:22:32 nvmf_dif -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:38:59.008 12:22:32 nvmf_dif -- nvmf/setup.sh@41 -- # _dev=0 00:38:59.008 12:22:32 nvmf_dif -- nvmf/setup.sh@41 -- # dev_map=() 00:38:59.008 12:22:32 nvmf_dif -- nvmf/setup.sh@284 -- # iptr 00:38:59.008 12:22:32 
nvmf_dif -- nvmf/common.sh@542 -- # iptables-save 00:38:59.008 12:22:32 nvmf_dif -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:38:59.008 12:22:32 nvmf_dif -- nvmf/common.sh@542 -- # iptables-restore 00:38:59.008 00:38:59.008 real 1m14.650s 00:38:59.008 user 7m10.990s 00:38:59.008 sys 0m21.121s 00:38:59.008 12:22:32 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:59.008 12:22:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:59.008 ************************************ 00:38:59.008 END TEST nvmf_dif 00:38:59.008 ************************************ 00:38:59.008 12:22:32 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:59.008 12:22:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:59.008 12:22:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:59.008 12:22:32 -- common/autotest_common.sh@10 -- # set +x 00:38:59.008 ************************************ 00:38:59.008 START TEST nvmf_abort_qd_sizes 00:38:59.008 ************************************ 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:59.009 * Looking for test storage... 
00:38:59.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:59.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:59.009 --rc genhtml_branch_coverage=1 00:38:59.009 --rc genhtml_function_coverage=1 00:38:59.009 --rc genhtml_legend=1 00:38:59.009 --rc geninfo_all_blocks=1 00:38:59.009 --rc geninfo_unexecuted_blocks=1 00:38:59.009 00:38:59.009 ' 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:59.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:59.009 --rc genhtml_branch_coverage=1 00:38:59.009 --rc genhtml_function_coverage=1 00:38:59.009 --rc genhtml_legend=1 00:38:59.009 --rc 
geninfo_all_blocks=1 00:38:59.009 --rc geninfo_unexecuted_blocks=1 00:38:59.009 00:38:59.009 ' 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:59.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:59.009 --rc genhtml_branch_coverage=1 00:38:59.009 --rc genhtml_function_coverage=1 00:38:59.009 --rc genhtml_legend=1 00:38:59.009 --rc geninfo_all_blocks=1 00:38:59.009 --rc geninfo_unexecuted_blocks=1 00:38:59.009 00:38:59.009 ' 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:59.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:59.009 --rc genhtml_branch_coverage=1 00:38:59.009 --rc genhtml_function_coverage=1 00:38:59.009 --rc genhtml_legend=1 00:38:59.009 --rc geninfo_all_blocks=1 00:38:59.009 --rc geninfo_unexecuted_blocks=1 00:38:59.009 00:38:59.009 ' 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@50 -- # : 0 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # export 
NVMF_APP_SHM_ID 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:38:59.009 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@54 -- # have_pci_nics=0 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # prepare_net_devs 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # local -g is_hw=no 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # remove_target_ns 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_target_ns 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # xtrace_disable 
00:38:59.009 12:22:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@131 -- # pci_devs=() 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@131 -- # local -a pci_devs 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@132 -- # pci_net_devs=() 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@133 -- # pci_drivers=() 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@133 -- # local -A pci_drivers 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@135 -- # net_devs=() 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@135 -- # local -ga net_devs 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@136 -- # e810=() 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@136 -- # local -ga e810 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@137 -- # x722=() 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@137 -- # local -ga x722 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@138 -- # mlx=() 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@138 -- # local -ga mlx 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:39:04.408 Found 0000:86:00.0 (0x8086 - 0x159b) 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:39:04.408 12:22:38 
nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:39:04.408 Found 0000:86:00.1 (0x8086 - 0x159b) 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # [[ up == up ]] 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:39:04.408 Found net devices under 0000:86:00.0: cvl_0_0 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@226 -- # for pci 
in "${pci_devs[@]}" 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # [[ up == up ]] 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:39:04.408 Found net devices under 0000:86:00.1: cvl_0_1 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # is_hw=yes 00:39:04.408 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:39:04.409 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:39:04.409 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:39:04.409 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:39:04.409 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@257 -- # create_target_ns 00:39:04.409 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:39:04.409 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:39:04.409 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:39:04.409 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
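The namespace plumbing in the trace above (create `nvmf_ns_spdk`, then route target-side commands through it) can be reduced to a small standalone sketch. This is a hypothetical reduction, not the actual `nvmf/setup.sh` helpers: the key pattern is a command-prefix array that `eval` expands in front of each target-side command.

```shell
# Sketch of the namespace-wrapping pattern seen in the trace (hypothetical
# standalone reduction; the real helpers live in nvmf/setup.sh).
NVMF_TARGET_NAMESPACE=nvmf_ns_spdk
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")

# set_up-style wrapping: the array prefix expands in front of the real
# command, so "ip link set lo up" becomes a namespaced invocation.
cmd="${NVMF_TARGET_NS_CMD[*]} ip link set lo up"
echo "$cmd"
```

In the trace this is exactly what `eval 'ip netns exec nvmf_ns_spdk ip link set lo up'` shows; running the wrapped command for real requires root and an existing namespace, which this sketch deliberately avoids.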
00:39:04.409 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:39:04.409 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:39:04.409 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:04.409 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:04.409 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:39:04.409 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:39:04.409 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:39:04.409 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:39:04.409 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@27 -- # local -gA dev_map 00:39:04.409 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@28 -- # local -g _dev 00:39:04.409 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:39:04.409 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:39:04.409 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:39:04.409 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:39:04.409 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@44 -- # ips=() 00:39:04.409 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:39:04.409 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:39:04.409 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:39:04.409 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:39:04.409 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@51 -- 
# [[ tcp == tcp ]] 00:39:04.409 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:39:04.409 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:39:04.409 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:39:04.409 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:39:04.409 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:39:04.409 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:39:04.409 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:39:04.409 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:39:04.409 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:39:04.409 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:39:04.409 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@11 -- # local val=167772161 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:39:04.667 12:22:38 
nvmf_abort_qd_sizes -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:39:04.667 10.0.0.1 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@11 -- # local val=167772162 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:39:04.667 10.0.0.2 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:39:04.667 
12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@38 -- # ping_ips 1 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- 
nvmf/setup.sh@98 -- # (( pair < pairs )) 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@107 -- # local dev=initiator0 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk 
ping -c 1 10.0.0.1' 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:39:04.667 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:04.667 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.461 ms 00:39:04.667 00:39:04.667 --- 10.0.0.1 ping statistics --- 00:39:04.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:04.667 rtt min/avg/max/mdev = 0.461/0.461/0.461/0.000 ms 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # get_net_dev target0 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@107 -- # local dev=target0 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:39:04.667 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:39:04.668 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:39:04.668 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:39:04.668 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:39:04.668 12:22:38 nvmf_abort_qd_sizes -- 
nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:39:04.668 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:39:04.668 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:39:04.668 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:39:04.668 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:39:04.668 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:39:04.668 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:39:04.668 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:04.668 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:39:04.668 00:39:04.668 --- 10.0.0.2 ping statistics --- 00:39:04.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:04.668 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:39:04.668 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # (( pair++ )) 00:39:04.668 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:39:04.668 12:22:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:04.668 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # return 0 00:39:04.668 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # '[' iso == iso ']' 00:39:04.668 12:22:38 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:07.950 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:39:07.950 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:39:07.950 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:39:07.950 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:39:07.950 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:39:07.950 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:39:07.950 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:39:07.950 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:39:07.950 0000:80:04.7 (8086 2021): 
ioatdma -> vfio-pci 00:39:07.950 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:39:07.950 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:39:07.950 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:39:07.950 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:39:07.950 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:39:07.950 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:39:07.950 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:39:09.347 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:39:09.347 12:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:39:09.347 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:39:09.347 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:39:09.347 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:39:09.347 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:39:09.347 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:39:09.347 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:39:09.347 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:39:09.347 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:39:09.347 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@107 -- # local dev=initiator0 00:39:09.347 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:39:09.347 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:39:09.347 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:39:09.347 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:39:09.347 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:39:09.347 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:39:09.347 12:22:43 nvmf_abort_qd_sizes 
-- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:39:09.347 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:39:09.347 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:39:09.347 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:09.347 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:39:09.347 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:39:09.347 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:39:09.347 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:39:09.347 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:39:09.347 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:39:09.347 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@107 -- # local dev=initiator1 00:39:09.347 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:39:09.347 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:39:09.347 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # return 1 00:39:09.347 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # dev= 00:39:09.347 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@169 -- # return 0 00:39:09.347 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:39:09.347 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:39:09.347 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:39:09.347 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:39:09.347 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:39:09.347 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:09.348 12:22:43 
nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:09.348 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # get_net_dev target0 00:39:09.348 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@107 -- # local dev=target0 00:39:09.348 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:39:09.348 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:39:09.348 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:39:09.348 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:39:09.348 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:39:09.348 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:39:09.348 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:39:09.348 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:39:09.348 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:39:09.348 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:09.348 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:39:09.348 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:39:09.348 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:39:09.348 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:39:09.348 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:09.348 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:09.348 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # get_net_dev target1 00:39:09.348 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@107 -- # local dev=target1 
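The `val_to_ip` conversions earlier in this run (167772161 → 10.0.0.1, 167772162 → 10.0.0.2) are plain byte unpacking of the `ip_pool` counter. A standalone re-implementation sketch (not the script's exact body):

```shell
# Sketch of the val_to_ip conversion seen in this trace: a 32-bit integer
# drawn from ip_pool (0x0a000001 = 167772161) is unpacked into dotted-quad
# notation, one byte per octet.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
    $(( (val >>  8) & 0xff )) $((  val        & 0xff ))
}

val_to_ip 167772161   # the initiator address from this run: 10.0.0.1
val_to_ip 167772162   # the target address from this run: 10.0.0.2
```

This also explains the pool arithmetic in the trace: each interface pair consumes two consecutive values (`ips=("$ip" $((++ip)))`), so pair 0 yields 10.0.0.1/10.0.0.2.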
00:39:09.348 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:39:09.348 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:39:09.348 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # return 1 00:39:09.348 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # dev= 00:39:09.348 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@169 -- # return 0 00:39:09.348 12:22:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:39:09.348 12:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:09.348 12:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:39:09.348 12:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:39:09.348 12:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:09.348 12:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:39:09.348 12:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:39:09.348 12:22:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:39:09.348 12:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:39:09.348 12:22:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:09.348 12:22:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:09.348 12:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # nvmfpid=355106 00:39:09.348 12:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # waitforlisten 355106 00:39:09.348 12:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:39:09.348 12:22:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 355106 ']' 00:39:09.348 12:22:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:09.348 
12:22:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:09.348 12:22:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:09.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:09.348 12:22:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:09.348 12:22:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:09.348 [2024-12-05 12:22:43.447821] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:39:09.348 [2024-12-05 12:22:43.447868] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:09.348 [2024-12-05 12:22:43.526831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:09.608 [2024-12-05 12:22:43.570345] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:09.608 [2024-12-05 12:22:43.570387] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:09.608 [2024-12-05 12:22:43.570395] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:09.608 [2024-12-05 12:22:43.570401] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:09.608 [2024-12-05 12:22:43.570406] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
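`waitforlisten 355106` above blocks until the freshly started `nvmf_tgt` is up and listening on `/var/tmp/spdk.sock`. A simplified, hypothetical reduction of that wait-with-retries shape (the real helper additionally checks the pid is alive and probes the RPC socket; `wait_for_path` is an invented name):

```shell
# Hypothetical simplification of the waitforlisten pattern: poll for a path
# (e.g. the RPC unix socket) with a bounded retry count, sleeping between
# attempts, and report failure if it never appears.
wait_for_path() {
  local path=$1 retries=${2:-100}
  while (( retries-- > 0 )); do
    [[ -e $path ]] && return 0
    sleep 0.1
  done
  return 1
}
```

The bounded retry count mirrors `local max_retries=100` in the trace and keeps a crashed target from hanging the test forever.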
00:39:09.608 [2024-12-05 12:22:43.572024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:09.608 [2024-12-05 12:22:43.572138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:09.608 [2024-12-05 12:22:43.572152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:09.608 [2024-12-05 12:22:43.572156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:09.608 12:22:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:09.608 12:22:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:39:09.608 12:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:39:09.608 12:22:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:09.608 12:22:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:09.608 12:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:09.608 12:22:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:39:09.608 12:22:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:39:09.608 12:22:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:39:09.608 12:22:43 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:39:09.608 12:22:43 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:39:09.608 12:22:43 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:39:09.608 12:22:43 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:39:09.608 12:22:43 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:39:09.608 12:22:43 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:39:09.608 12:22:43 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:39:09.608 12:22:43 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:39:09.608 12:22:43 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:39:09.608 12:22:43 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:39:09.608 12:22:43 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:39:09.608 12:22:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:39:09.608 12:22:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:39:09.608 12:22:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:39:09.608 12:22:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:09.608 12:22:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:09.608 12:22:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:09.608 ************************************ 00:39:09.608 START TEST spdk_target_abort 00:39:09.608 ************************************ 00:39:09.608 12:22:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:39:09.608 12:22:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:39:09.608 12:22:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:39:09.608 12:22:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.608 12:22:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:12.899 spdk_targetn1 00:39:12.899 12:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.899 12:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:12.899 12:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:12.899 12:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:12.899 [2024-12-05 12:22:46.596670] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:12.899 12:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.899 12:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:39:12.899 12:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:12.899 12:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:12.899 12:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.899 12:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:39:12.899 12:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:12.899 12:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:12.899 12:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.899 12:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:39:12.899 12:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:12.899 12:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:12.899 [2024-12-05 12:22:46.648991] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:12.899 12:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.899 12:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:39:12.899 12:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:39:12.899 12:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:39:12.899 12:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:39:12.899 12:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:39:12.899 12:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:39:12.899 12:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:39:12.899 12:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:39:12.899 12:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:39:12.899 12:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:12.899 12:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:39:12.899 12:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:12.899 12:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:39:12.899 12:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:12.900 12:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:39:12.900 12:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:12.900 12:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:12.900 12:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:12.900 12:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:12.900 12:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:12.900 12:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:16.198 Initializing NVMe Controllers 00:39:16.198 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:16.198 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:16.198 Initialization complete. Launching workers. 
00:39:16.198 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 16306, failed: 0 00:39:16.198 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1413, failed to submit 14893 00:39:16.198 success 704, unsuccessful 709, failed 0 00:39:16.198 12:22:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:16.198 12:22:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:19.488 Initializing NVMe Controllers 00:39:19.488 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:19.488 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:19.488 Initialization complete. Launching workers. 00:39:19.488 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8391, failed: 0 00:39:19.488 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1236, failed to submit 7155 00:39:19.488 success 351, unsuccessful 885, failed 0 00:39:19.488 12:22:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:19.488 12:22:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:22.776 Initializing NVMe Controllers 00:39:22.776 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:22.776 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:22.776 Initialization complete. Launching workers. 
00:39:22.776 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38541, failed: 0 00:39:22.776 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2868, failed to submit 35673 00:39:22.776 success 583, unsuccessful 2285, failed 0 00:39:22.776 12:22:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:39:22.776 12:22:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:22.776 12:22:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:22.776 12:22:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:22.776 12:22:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:39:22.776 12:22:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:22.776 12:22:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:24.151 12:22:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:24.151 12:22:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 355106 00:39:24.151 12:22:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 355106 ']' 00:39:24.151 12:22:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 355106 00:39:24.151 12:22:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:39:24.151 12:22:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:24.151 12:22:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 355106 00:39:24.151 12:22:58 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:24.151 12:22:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:24.151 12:22:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 355106' 00:39:24.151 killing process with pid 355106 00:39:24.151 12:22:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 355106 00:39:24.151 12:22:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 355106 00:39:24.151 00:39:24.151 real 0m14.565s 00:39:24.151 user 0m55.698s 00:39:24.151 sys 0m2.520s 00:39:24.151 12:22:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:24.151 12:22:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:24.151 ************************************ 00:39:24.151 END TEST spdk_target_abort 00:39:24.151 ************************************ 00:39:24.408 12:22:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:39:24.408 12:22:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:24.408 12:22:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:24.408 12:22:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:24.408 ************************************ 00:39:24.408 START TEST kernel_target_abort 00:39:24.408 ************************************ 00:39:24.408 12:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:39:24.409 12:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:39:24.409 12:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@434 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn 
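The spdk_target_abort run above drives the target through a fixed JSON-RPC sequence (nvmf_create_transport, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener) and then runs the abort example at queue depths 4, 24 and 64. A minimal standalone sketch of that sequence, assuming a running SPDK nvmf target, `rpc.py` from an SPDK checkout on PATH, and a bdev named spdk_targetn1 already created (the address 10.0.0.2 and NQN are taken from the trace):

```shell
#!/usr/bin/env bash
# Sketch only -- mirrors the rpc_cmd calls in the trace above; requires a
# running SPDK target and an existing bdev "spdk_targetn1".
set -e

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

# Same queue-depth sweep as the qds=(4 24 64) loop in abort_qd_sizes.sh.
for qd in 4 24 64; do
    ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
done
```

The three abort-rate result blocks in the log above correspond to the three iterations of this loop.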
kernel_target_ip=10.0.0.1 00:39:24.409 12:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@436 -- # nvmet=/sys/kernel/config/nvmet 00:39:24.409 12:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@437 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:24.409 12:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@438 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:24.409 12:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@439 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:39:24.409 12:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@441 -- # local block nvme 00:39:24.409 12:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@443 -- # [[ ! -e /sys/module/nvmet ]] 00:39:24.409 12:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@444 -- # modprobe nvmet 00:39:24.409 12:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@447 -- # [[ -e /sys/kernel/config/nvmet ]] 00:39:24.409 12:22:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@449 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:26.942 Waiting for block devices as requested 00:39:26.942 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:39:27.201 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:39:27.201 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:39:27.460 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:39:27.460 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:39:27.460 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:39:27.460 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:39:27.720 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:39:27.720 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:39:27.720 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:39:27.979 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:39:27.979 0000:80:04.5 (8086 2021): vfio-pci -> 
ioatdma 00:39:27.979 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:39:27.979 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:39:28.238 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:39:28.238 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:39:28.238 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:39:28.496 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:39:28.496 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n1 ]] 00:39:28.496 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@454 -- # is_block_zoned nvme0n1 00:39:28.496 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:39:28.496 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:39:28.496 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:39:28.496 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # block_in_use nvme0n1 00:39:28.496 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:39:28.496 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:39:28.496 No valid GPT data, bailing 00:39:28.496 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:39:28.496 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:39:28.496 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:39:28.496 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n1 00:39:28.496 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@458 -- # [[ -b 
/dev/nvme0n1 ]] 00:39:28.496 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@460 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:28.497 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@461 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:28.497 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@462 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:39:28.497 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@467 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:39:28.497 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@469 -- # echo 1 00:39:28.497 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@470 -- # echo /dev/nvme0n1 00:39:28.497 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@471 -- # echo 1 00:39:28.497 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@473 -- # echo 10.0.0.1 00:39:28.497 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@474 -- # echo tcp 00:39:28.497 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@475 -- # echo 4420 00:39:28.497 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@476 -- # echo ipv4 00:39:28.497 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@479 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:39:28.497 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@482 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:39:28.497 00:39:28.497 Discovery Log Number of Records 2, Generation counter 2 00:39:28.497 =====Discovery Log Entry 0====== 00:39:28.497 trtype: tcp 00:39:28.497 adrfam: ipv4 00:39:28.497 subtype: current discovery subsystem 
00:39:28.497 treq: not specified, sq flow control disable supported 00:39:28.497 portid: 1 00:39:28.497 trsvcid: 4420 00:39:28.497 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:39:28.497 traddr: 10.0.0.1 00:39:28.497 eflags: none 00:39:28.497 sectype: none 00:39:28.497 =====Discovery Log Entry 1====== 00:39:28.497 trtype: tcp 00:39:28.497 adrfam: ipv4 00:39:28.497 subtype: nvme subsystem 00:39:28.497 treq: not specified, sq flow control disable supported 00:39:28.497 portid: 1 00:39:28.497 trsvcid: 4420 00:39:28.497 subnqn: nqn.2016-06.io.spdk:testnqn 00:39:28.497 traddr: 10.0.0.1 00:39:28.497 eflags: none 00:39:28.497 sectype: none 00:39:28.497 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:39:28.497 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:39:28.497 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:39:28.497 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:39:28.497 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:39:28.497 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:39:28.497 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:39:28.497 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:39:28.497 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:39:28.497 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:28.497 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # 
target=trtype:tcp 00:39:28.497 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:28.497 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:39:28.497 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:28.497 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:39:28.497 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:28.497 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:39:28.497 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:28.497 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:28.497 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:28.497 12:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:31.793 Initializing NVMe Controllers 00:39:31.793 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:31.793 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:31.793 Initialization complete. Launching workers. 
00:39:31.793 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 94529, failed: 0 00:39:31.793 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 94529, failed to submit 0 00:39:31.793 success 0, unsuccessful 94529, failed 0 00:39:31.793 12:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:31.793 12:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:35.082 Initializing NVMe Controllers 00:39:35.082 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:35.082 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:35.082 Initialization complete. Launching workers. 00:39:35.082 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 150785, failed: 0 00:39:35.082 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 38058, failed to submit 112727 00:39:35.082 success 0, unsuccessful 38058, failed 0 00:39:35.082 12:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:35.082 12:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:38.371 Initializing NVMe Controllers 00:39:38.371 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:38.371 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:38.371 Initialization complete. Launching workers. 
00:39:38.371 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 142252, failed: 0 00:39:38.371 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35626, failed to submit 106626 00:39:38.371 success 0, unsuccessful 35626, failed 0 00:39:38.371 12:23:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:39:38.371 12:23:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@486 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:39:38.371 12:23:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@488 -- # echo 0 00:39:38.371 12:23:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@490 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:38.371 12:23:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@491 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:38.371 12:23:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@492 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:39:38.371 12:23:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@493 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:38.371 12:23:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@495 -- # modules=(/sys/module/nvmet/holders/*) 00:39:38.371 12:23:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@497 -- # modprobe -r nvmet_tcp nvmet 00:39:38.371 12:23:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:40.908 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:39:40.908 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:39:40.908 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:39:40.908 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:39:40.908 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:39:40.908 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:39:40.908 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:39:40.908 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:39:40.908 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:39:40.908 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:39:40.908 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:39:40.908 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:39:40.908 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:39:40.908 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:39:40.908 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:39:40.908 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:39:42.281 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:39:42.281 00:39:42.281 real 0m18.020s 00:39:42.281 user 0m9.175s 00:39:42.281 sys 0m5.048s 00:39:42.281 12:23:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:42.281 12:23:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:42.281 ************************************ 00:39:42.281 END TEST kernel_target_abort 00:39:42.281 ************************************ 00:39:42.281 12:23:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:39:42.281 12:23:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:39:42.281 12:23:16 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # nvmfcleanup 00:39:42.281 12:23:16 nvmf_abort_qd_sizes -- nvmf/common.sh@99 -- # sync 00:39:42.281 12:23:16 nvmf_abort_qd_sizes -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:39:42.281 12:23:16 nvmf_abort_qd_sizes -- nvmf/common.sh@102 -- # set +e 00:39:42.281 12:23:16 nvmf_abort_qd_sizes -- nvmf/common.sh@103 -- # for i in {1..20} 00:39:42.281 12:23:16 nvmf_abort_qd_sizes -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:39:42.281 rmmod nvme_tcp 00:39:42.281 rmmod nvme_fabrics 00:39:42.540 rmmod nvme_keyring 00:39:42.540 12:23:16 nvmf_abort_qd_sizes -- nvmf/common.sh@105 
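The kernel_target_abort test above configures an in-kernel NVMe/TCP target entirely through nvmet configfs (the mkdir/echo/ln sequence in the trace). A hedged sketch of that setup and the matching clean_kernel_target teardown, assuming root, the nvmet and nvmet-tcp modules, and using the device path and address from the log as examples:

```shell
#!/usr/bin/env bash
# Sketch only -- reproduces the configfs steps visible in the trace above.
# Requires root; /dev/nvme0n1 and 10.0.0.1 are the example values from the log.
set -e
modprobe nvmet nvmet-tcp

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo 1             > "$subsys/attr_allow_any_host"        # accept any host NQN
echo /dev/nvme0n1  > "$subsys/namespaces/1/device_path"
echo 1             > "$subsys/namespaces/1/enable"

echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp      > "$nvmet/ports/1/addr_trtype"
echo 4420     > "$nvmet/ports/1/addr_trsvcid"
echo ipv4     > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"

# Teardown, mirroring clean_kernel_target in the trace:
# rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
# rmdir "$subsys/namespaces/1" "$nvmet/ports/1" "$subsys"
# modprobe -r nvmet_tcp nvmet
```

After the symlink is created the port starts listening, which is why the subsequent `nvme discover ... -a 10.0.0.1 -t tcp -s 4420` in the log reports two discovery entries (the discovery subsystem plus testnqn).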
-- # modprobe -v -r nvme-fabrics 00:39:42.540 12:23:16 nvmf_abort_qd_sizes -- nvmf/common.sh@106 -- # set -e 00:39:42.540 12:23:16 nvmf_abort_qd_sizes -- nvmf/common.sh@107 -- # return 0 00:39:42.540 12:23:16 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # '[' -n 355106 ']' 00:39:42.540 12:23:16 nvmf_abort_qd_sizes -- nvmf/common.sh@337 -- # killprocess 355106 00:39:42.540 12:23:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 355106 ']' 00:39:42.540 12:23:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 355106 00:39:42.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (355106) - No such process 00:39:42.540 12:23:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 355106 is not found' 00:39:42.540 Process with pid 355106 is not found 00:39:42.540 12:23:16 nvmf_abort_qd_sizes -- nvmf/common.sh@339 -- # '[' iso == iso ']' 00:39:42.540 12:23:16 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:45.076 Waiting for block devices as requested 00:39:45.076 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:39:45.335 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:39:45.335 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:39:45.593 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:39:45.593 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:39:45.593 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:39:45.593 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:39:45.852 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:39:45.852 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:39:45.852 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:39:46.111 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:39:46.111 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:39:46.111 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:39:46.111 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:39:46.370 0000:80:04.2 
(8086 2021): vfio-pci -> ioatdma 00:39:46.370 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:39:46.370 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:39:46.629 12:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # nvmf_fini 00:39:46.629 12:23:20 nvmf_abort_qd_sizes -- nvmf/setup.sh@264 -- # local dev 00:39:46.629 12:23:20 nvmf_abort_qd_sizes -- nvmf/setup.sh@267 -- # remove_target_ns 00:39:46.629 12:23:20 nvmf_abort_qd_sizes -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:39:46.629 12:23:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:39:46.629 12:23:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_target_ns 00:39:48.533 12:23:22 nvmf_abort_qd_sizes -- nvmf/setup.sh@268 -- # delete_main_bridge 00:39:48.533 12:23:22 nvmf_abort_qd_sizes -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:39:48.533 12:23:22 nvmf_abort_qd_sizes -- nvmf/setup.sh@130 -- # return 0 00:39:48.533 12:23:22 nvmf_abort_qd_sizes -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:39:48.533 12:23:22 nvmf_abort_qd_sizes -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:39:48.533 12:23:22 nvmf_abort_qd_sizes -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:39:48.533 12:23:22 nvmf_abort_qd_sizes -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:39:48.533 12:23:22 nvmf_abort_qd_sizes -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:39:48.533 12:23:22 nvmf_abort_qd_sizes -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:39:48.533 12:23:22 nvmf_abort_qd_sizes -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:39:48.533 12:23:22 nvmf_abort_qd_sizes -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:39:48.533 12:23:22 nvmf_abort_qd_sizes -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:39:48.533 12:23:22 nvmf_abort_qd_sizes -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:39:48.533 12:23:22 nvmf_abort_qd_sizes -- nvmf/setup.sh@275 -- # 
(( 4 == 3 )) 00:39:48.533 12:23:22 nvmf_abort_qd_sizes -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:39:48.533 12:23:22 nvmf_abort_qd_sizes -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:39:48.533 12:23:22 nvmf_abort_qd_sizes -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:39:48.533 12:23:22 nvmf_abort_qd_sizes -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:39:48.533 12:23:22 nvmf_abort_qd_sizes -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:39:48.533 12:23:22 nvmf_abort_qd_sizes -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:39:48.533 12:23:22 nvmf_abort_qd_sizes -- nvmf/setup.sh@41 -- # _dev=0 00:39:48.533 12:23:22 nvmf_abort_qd_sizes -- nvmf/setup.sh@41 -- # dev_map=() 00:39:48.533 12:23:22 nvmf_abort_qd_sizes -- nvmf/setup.sh@284 -- # iptr 00:39:48.533 12:23:22 nvmf_abort_qd_sizes -- nvmf/common.sh@542 -- # iptables-save 00:39:48.533 12:23:22 nvmf_abort_qd_sizes -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:39:48.533 12:23:22 nvmf_abort_qd_sizes -- nvmf/common.sh@542 -- # iptables-restore 00:39:48.533 00:39:48.533 real 0m49.977s 00:39:48.533 user 1m9.340s 00:39:48.533 sys 0m16.363s 00:39:48.533 12:23:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:48.533 12:23:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:48.533 ************************************ 00:39:48.533 END TEST nvmf_abort_qd_sizes 00:39:48.533 ************************************ 00:39:48.533 12:23:22 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:39:48.533 12:23:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:48.533 12:23:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:48.533 12:23:22 -- common/autotest_common.sh@10 -- # set +x 00:39:48.792 ************************************ 00:39:48.792 START TEST keyring_file 00:39:48.792 ************************************ 00:39:48.792 12:23:22 keyring_file -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:39:48.792 * Looking for test storage... 00:39:48.792 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:39:48.792 12:23:22 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:48.792 12:23:22 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:39:48.792 12:23:22 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:48.792 12:23:22 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:48.792 12:23:22 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:48.792 12:23:22 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:48.792 12:23:22 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:48.792 12:23:22 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:39:48.792 12:23:22 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:39:48.792 12:23:22 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:39:48.792 12:23:22 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:39:48.793 12:23:22 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:39:48.793 12:23:22 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:39:48.793 12:23:22 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:39:48.793 12:23:22 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:48.793 12:23:22 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:39:48.793 12:23:22 keyring_file -- scripts/common.sh@345 -- # : 1 00:39:48.793 12:23:22 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:48.793 12:23:22 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:48.793 12:23:22 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:39:48.793 12:23:22 keyring_file -- scripts/common.sh@353 -- # local d=1 00:39:48.793 12:23:22 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:48.793 12:23:22 keyring_file -- scripts/common.sh@355 -- # echo 1 00:39:48.793 12:23:22 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:39:48.793 12:23:22 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:39:48.793 12:23:22 keyring_file -- scripts/common.sh@353 -- # local d=2 00:39:48.793 12:23:22 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:48.793 12:23:22 keyring_file -- scripts/common.sh@355 -- # echo 2 00:39:48.793 12:23:22 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:39:48.793 12:23:22 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:48.793 12:23:22 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:48.793 12:23:22 keyring_file -- scripts/common.sh@368 -- # return 0 00:39:48.793 12:23:22 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:48.793 12:23:22 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:48.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:48.793 --rc genhtml_branch_coverage=1 00:39:48.793 --rc genhtml_function_coverage=1 00:39:48.793 --rc genhtml_legend=1 00:39:48.793 --rc geninfo_all_blocks=1 00:39:48.793 --rc geninfo_unexecuted_blocks=1 00:39:48.793 00:39:48.793 ' 00:39:48.793 12:23:22 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:48.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:48.793 --rc genhtml_branch_coverage=1 00:39:48.793 --rc genhtml_function_coverage=1 00:39:48.793 --rc genhtml_legend=1 00:39:48.793 --rc geninfo_all_blocks=1 00:39:48.793 --rc geninfo_unexecuted_blocks=1 00:39:48.793 00:39:48.793 ' 00:39:48.793 
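The cmp_versions trace above (scripts/common.sh@333-368, checking `lt 1.15 2` for the lcov version) splits each version string on `.`, `-` and `:` and compares the fields numerically, left to right. A minimal self-contained re-implementation sketch; the function name `version_lt` is ours, not SPDK's, and it assumes purely numeric fields:

```shell
#!/usr/bin/env bash
# Hypothetical field-wise "less than" version compare, in the spirit of
# the cmp_versions helper traced above. Assumes numeric fields only.
version_lt() {
    local IFS=.-:                 # same separators the trace splits on
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing fields compare as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                      # equal => not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"   # prints "1.15 < 2"
```

This matches the decision the log reaches: lcov 1.15 is older than 2, so the legacy `--rc lcov_*` option spellings are selected.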
12:23:22 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:48.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:48.793 --rc genhtml_branch_coverage=1 00:39:48.793 --rc genhtml_function_coverage=1 00:39:48.793 --rc genhtml_legend=1 00:39:48.793 --rc geninfo_all_blocks=1 00:39:48.793 --rc geninfo_unexecuted_blocks=1 00:39:48.793 00:39:48.793 ' 00:39:48.793 12:23:22 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:48.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:48.793 --rc genhtml_branch_coverage=1 00:39:48.793 --rc genhtml_function_coverage=1 00:39:48.793 --rc genhtml_legend=1 00:39:48.793 --rc geninfo_all_blocks=1 00:39:48.793 --rc geninfo_unexecuted_blocks=1 00:39:48.793 00:39:48.793 ' 00:39:48.793 12:23:22 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:39:48.793 12:23:22 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:48.793 12:23:22 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:39:48.793 12:23:22 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:48.793 12:23:22 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:48.793 12:23:22 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:48.793 12:23:22 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:48.793 12:23:22 keyring_file -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:48.793 12:23:22 keyring_file -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:39:48.793 12:23:22 keyring_file -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:48.793 12:23:22 keyring_file -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:39:48.793 12:23:22 keyring_file -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:39:48.793 12:23:22 keyring_file -- nvmf/common.sh@16 
-- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:39:48.793 12:23:22 keyring_file -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:48.793 12:23:22 keyring_file -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:39:48.793 12:23:22 keyring_file -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:39:48.793 12:23:22 keyring_file -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:48.793 12:23:22 keyring_file -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:48.793 12:23:22 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:39:48.793 12:23:22 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:48.793 12:23:22 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:48.793 12:23:22 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:48.793 12:23:22 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:48.793 12:23:22 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:48.793 12:23:22 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:48.793 12:23:22 keyring_file -- paths/export.sh@5 -- # export PATH 00:39:48.793 12:23:22 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:48.793 12:23:22 keyring_file -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:39:48.793 12:23:22 keyring_file -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:39:48.793 12:23:22 keyring_file -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:39:48.793 12:23:22 keyring_file -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:39:48.793 12:23:22 keyring_file -- nvmf/common.sh@50 -- # : 0 00:39:48.793 12:23:22 keyring_file -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:39:48.793 12:23:22 keyring_file -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:39:48.793 12:23:22 keyring_file -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:39:48.793 12:23:22 keyring_file -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:48.793 12:23:22 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:48.793 12:23:22 keyring_file -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:39:48.793 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:39:48.793 12:23:22 keyring_file -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:39:48.793 12:23:22 keyring_file -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:39:48.793 12:23:22 keyring_file -- nvmf/common.sh@54 -- # have_pci_nics=0 00:39:48.793 12:23:22 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:48.793 12:23:22 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:48.793 12:23:22 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:39:48.793 12:23:22 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:39:48.793 12:23:22 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:39:48.793 12:23:22 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:39:48.793 12:23:22 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:48.793 12:23:22 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:48.793 12:23:22 keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:48.793 12:23:22 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:48.793 12:23:22 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:48.793 12:23:22 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:48.793 12:23:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.BI4p0Bvdvg 00:39:48.793 12:23:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:48.793 12:23:22 keyring_file -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:48.793 12:23:22 keyring_file -- nvmf/common.sh@504 -- # local prefix key digest 00:39:48.793 12:23:22 keyring_file -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:39:48.793 12:23:22 keyring_file -- nvmf/common.sh@506 -- # 
key=00112233445566778899aabbccddeeff 00:39:48.793 12:23:22 keyring_file -- nvmf/common.sh@506 -- # digest=0 00:39:48.793 12:23:22 keyring_file -- nvmf/common.sh@507 -- # python - 00:39:49.053 12:23:22 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.BI4p0Bvdvg 00:39:49.053 12:23:22 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.BI4p0Bvdvg 00:39:49.053 12:23:22 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.BI4p0Bvdvg 00:39:49.053 12:23:23 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:39:49.053 12:23:23 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:49.053 12:23:23 keyring_file -- keyring/common.sh@17 -- # name=key1 00:39:49.053 12:23:23 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:49.053 12:23:23 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:49.053 12:23:23 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:49.053 12:23:23 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ksU97cfYvV 00:39:49.053 12:23:23 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:39:49.053 12:23:23 keyring_file -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:49.053 12:23:23 keyring_file -- nvmf/common.sh@504 -- # local prefix key digest 00:39:49.053 12:23:23 keyring_file -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:39:49.053 12:23:23 keyring_file -- nvmf/common.sh@506 -- # key=112233445566778899aabbccddeeff00 00:39:49.053 12:23:23 keyring_file -- nvmf/common.sh@506 -- # digest=0 00:39:49.053 12:23:23 keyring_file -- nvmf/common.sh@507 -- # python - 00:39:49.053 12:23:23 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ksU97cfYvV 00:39:49.053 12:23:23 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ksU97cfYvV 00:39:49.053 12:23:23 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.ksU97cfYvV 
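The `prep_key` steps above pipe the hex key and digest into `python -` via `format_interchange_psk` before writing the result to the mktemp path. The snippet below is a sketch of what that step most likely computes — the NVMe/TCP TLS PSK interchange format, `NVMeTLSkey-1:<hh>:<base64(key || CRC32(key))>:` — reconstructed from the `prefix`/`key`/`digest` locals visible in the trace; the exact CRC placement and encoding are assumptions about SPDK's `nvmf/common.sh` helper, not something this log confirms.

```python
import base64
import struct
import zlib

def format_interchange_psk(key_hex: str, digest: int = 0) -> str:
    """Assumed layout of the TLS PSK interchange string built by the
    `format_interchange_psk ... | python -` step in the trace:
    NVMeTLSkey-1:<digest as two hex chars>:<base64(key bytes + CRC32)>:"""
    key = bytes.fromhex(key_hex)
    # CRC32 of the raw key bytes, appended little-endian before base64 encoding.
    crc = struct.pack("<I", zlib.crc32(key))
    encoded = base64.b64encode(key + crc).decode("ascii")
    return f"NVMeTLSkey-1:{digest:02x}:{encoded}:"

# key0 from the trace: 00112233445566778899aabbccddeeff, digest 0
print(format_interchange_psk("00112233445566778899aabbccddeeff", 0))
```

The formatted string is what gets written to the `/tmp/tmp.*` path and `chmod 0600`'d before `keyring_file_add_key` reads it.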
00:39:49.053 12:23:23 keyring_file -- keyring/file.sh@30 -- # tgtpid=363861 00:39:49.053 12:23:23 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:39:49.053 12:23:23 keyring_file -- keyring/file.sh@32 -- # waitforlisten 363861 00:39:49.053 12:23:23 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 363861 ']' 00:39:49.053 12:23:23 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:49.053 12:23:23 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:49.053 12:23:23 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:49.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:49.053 12:23:23 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:49.053 12:23:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:49.053 [2024-12-05 12:23:23.105100] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:39:49.053 [2024-12-05 12:23:23.105158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid363861 ] 00:39:49.053 [2024-12-05 12:23:23.180532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:49.053 [2024-12-05 12:23:23.222181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:49.312 12:23:23 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:49.312 12:23:23 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:39:49.312 12:23:23 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:39:49.312 12:23:23 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:49.312 12:23:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:49.312 [2024-12-05 12:23:23.428440] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:49.312 null0 00:39:49.312 [2024-12-05 12:23:23.460497] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:49.312 [2024-12-05 12:23:23.460855] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:49.312 12:23:23 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:49.312 12:23:23 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:49.312 12:23:23 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:49.312 12:23:23 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:49.312 12:23:23 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:39:49.312 12:23:23 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:39:49.312 12:23:23 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:39:49.312 12:23:23 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:49.312 12:23:23 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:49.312 12:23:23 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:49.312 12:23:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:49.312 [2024-12-05 12:23:23.488558] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:39:49.312 request: 00:39:49.312 { 00:39:49.312 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:39:49.312 "secure_channel": false, 00:39:49.312 "listen_address": { 00:39:49.312 "trtype": "tcp", 00:39:49.312 "traddr": "127.0.0.1", 00:39:49.312 "trsvcid": "4420" 00:39:49.312 }, 00:39:49.312 "method": "nvmf_subsystem_add_listener", 00:39:49.312 "req_id": 1 00:39:49.312 } 00:39:49.312 Got JSON-RPC error response 00:39:49.312 response: 00:39:49.312 { 00:39:49.312 "code": -32602, 00:39:49.312 "message": "Invalid parameters" 00:39:49.312 } 00:39:49.312 12:23:23 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:39:49.312 12:23:23 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:49.312 12:23:23 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:49.312 12:23:23 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:49.312 12:23:23 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:49.312 12:23:23 keyring_file -- keyring/file.sh@47 -- # bperfpid=363866 00:39:49.312 12:23:23 keyring_file -- keyring/file.sh@49 -- # waitforlisten 363866 /var/tmp/bperf.sock 00:39:49.312 12:23:23 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:39:49.312 12:23:23 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 363866 ']' 00:39:49.312 12:23:23 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:49.312 12:23:23 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:49.312 12:23:23 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:49.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:49.312 12:23:23 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:49.312 12:23:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:49.571 [2024-12-05 12:23:23.541528] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 00:39:49.571 [2024-12-05 12:23:23.541570] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid363866 ] 00:39:49.571 [2024-12-05 12:23:23.615010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:49.571 [2024-12-05 12:23:23.654870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:49.571 12:23:23 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:49.571 12:23:23 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:39:49.571 12:23:23 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BI4p0Bvdvg 00:39:49.571 12:23:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BI4p0Bvdvg 00:39:49.830 12:23:23 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.ksU97cfYvV 00:39:49.830 12:23:23 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ksU97cfYvV 00:39:50.088 12:23:24 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:39:50.088 12:23:24 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:39:50.088 12:23:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:50.088 12:23:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:50.088 12:23:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:50.346 12:23:24 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.BI4p0Bvdvg == \/\t\m\p\/\t\m\p\.\B\I\4\p\0\B\v\d\v\g ]] 00:39:50.346 12:23:24 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:39:50.346 12:23:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:50.346 12:23:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:50.346 12:23:24 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:39:50.346 12:23:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:50.346 12:23:24 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.ksU97cfYvV == \/\t\m\p\/\t\m\p\.\k\s\U\9\7\c\f\Y\v\V ]] 00:39:50.346 12:23:24 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:39:50.346 12:23:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:50.346 12:23:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:50.346 12:23:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:50.346 12:23:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:50.346 12:23:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
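The `get_key`/`get_refcnt` checks in the trace shell out to `rpc.py ... keyring_get_keys` and filter with `jq '.[] | select(.name == "keyN")'` and `jq -r .refcnt`. The same selection is expressed in Python below; the JSON shape is an assumption modeled only on the fields the jq filters reference (`.name`, `.path`, `.refcnt`), using the key paths seen earlier in this log.

```python
import json

# Sample keyring_get_keys output (assumed shape; fields taken from the jq
# filters in the trace: .name, .path, .refcnt).
sample = json.loads("""
[
  {"name": "key0", "path": "/tmp/tmp.BI4p0Bvdvg", "refcnt": 1},
  {"name": "key1", "path": "/tmp/tmp.ksU97cfYvV", "refcnt": 1}
]
""")

def get_key(keys, name):
    # jq equivalent: .[] | select(.name == $name)
    return next((k for k in keys if k["name"] == name), None)

def get_refcnt(keys, name):
    # jq equivalent: ... | .refcnt -- the value the (( 1 == 1 )) style
    # assertions in file.sh compare against.
    key = get_key(keys, name)
    return key["refcnt"] if key else 0

print(get_key(sample, "key0")["path"])
print(get_refcnt(sample, "key1"))
```

Note the refcnt climbing to 2 for key0 later in the log, once `bdev_nvme_attach_controller --psk key0` takes its own reference on top of the keyring's.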
00:39:50.604 12:23:24 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:39:50.604 12:23:24 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:39:50.604 12:23:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:50.604 12:23:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:50.604 12:23:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:50.604 12:23:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:50.604 12:23:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:50.863 12:23:24 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:39:50.863 12:23:24 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:50.863 12:23:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:51.120 [2024-12-05 12:23:25.070241] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:51.120 nvme0n1 00:39:51.120 12:23:25 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:39:51.120 12:23:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:51.120 12:23:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:51.120 12:23:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:51.120 12:23:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:51.120 12:23:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:39:51.378 12:23:25 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:39:51.378 12:23:25 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:39:51.378 12:23:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:51.378 12:23:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:51.378 12:23:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:51.378 12:23:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:51.378 12:23:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:51.378 12:23:25 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:39:51.378 12:23:25 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:51.637 Running I/O for 1 seconds... 00:39:52.570 19511.00 IOPS, 76.21 MiB/s 00:39:52.570 Latency(us) 00:39:52.571 [2024-12-05T11:23:26.767Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:52.571 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:39:52.571 nvme0n1 : 1.00 19554.63 76.39 0.00 0.00 6533.62 2761.87 12795.12 00:39:52.571 [2024-12-05T11:23:26.767Z] =================================================================================================================== 00:39:52.571 [2024-12-05T11:23:26.767Z] Total : 19554.63 76.39 0.00 0.00 6533.62 2761.87 12795.12 00:39:52.571 { 00:39:52.571 "results": [ 00:39:52.571 { 00:39:52.571 "job": "nvme0n1", 00:39:52.571 "core_mask": "0x2", 00:39:52.571 "workload": "randrw", 00:39:52.571 "percentage": 50, 00:39:52.571 "status": "finished", 00:39:52.571 "queue_depth": 128, 00:39:52.571 "io_size": 4096, 00:39:52.571 "runtime": 1.004417, 00:39:52.571 "iops": 19554.627211606334, 00:39:52.571 "mibps": 76.38526254533724, 
00:39:52.571 "io_failed": 0, 00:39:52.571 "io_timeout": 0, 00:39:52.571 "avg_latency_us": 6533.6155883829015, 00:39:52.571 "min_latency_us": 2761.8742857142856, 00:39:52.571 "max_latency_us": 12795.12380952381 00:39:52.571 } 00:39:52.571 ], 00:39:52.571 "core_count": 1 00:39:52.571 } 00:39:52.571 12:23:26 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:52.571 12:23:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:52.828 12:23:26 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:39:52.828 12:23:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:52.828 12:23:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:52.828 12:23:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:52.828 12:23:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:52.828 12:23:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:53.087 12:23:27 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:39:53.087 12:23:27 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:39:53.087 12:23:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:53.087 12:23:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:53.087 12:23:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:53.087 12:23:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:53.087 12:23:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:53.087 12:23:27 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:39:53.087 12:23:27 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:53.087 12:23:27 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:53.087 12:23:27 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:53.087 12:23:27 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:53.087 12:23:27 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:53.087 12:23:27 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:53.087 12:23:27 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:53.087 12:23:27 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:53.087 12:23:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:53.346 [2024-12-05 12:23:27.432442] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:53.346 [2024-12-05 12:23:27.432971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1182210 (107): Transport endpoint is not connected 00:39:53.346 [2024-12-05 12:23:27.433965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1182210 (9): Bad file descriptor 00:39:53.346 [2024-12-05 12:23:27.434966] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:39:53.346 [2024-12-05 12:23:27.434975] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:53.346 [2024-12-05 12:23:27.434982] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:39:53.346 [2024-12-05 12:23:27.434991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:39:53.346 request: 00:39:53.346 { 00:39:53.346 "name": "nvme0", 00:39:53.346 "trtype": "tcp", 00:39:53.346 "traddr": "127.0.0.1", 00:39:53.346 "adrfam": "ipv4", 00:39:53.346 "trsvcid": "4420", 00:39:53.346 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:53.346 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:53.346 "prchk_reftag": false, 00:39:53.346 "prchk_guard": false, 00:39:53.346 "hdgst": false, 00:39:53.346 "ddgst": false, 00:39:53.346 "psk": "key1", 00:39:53.346 "allow_unrecognized_csi": false, 00:39:53.346 "method": "bdev_nvme_attach_controller", 00:39:53.346 "req_id": 1 00:39:53.346 } 00:39:53.346 Got JSON-RPC error response 00:39:53.346 response: 00:39:53.346 { 00:39:53.346 "code": -5, 00:39:53.346 "message": "Input/output error" 00:39:53.346 } 00:39:53.346 12:23:27 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:53.346 12:23:27 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:53.346 12:23:27 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:53.346 12:23:27 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:53.346 12:23:27 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:39:53.346 12:23:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:53.346 12:23:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:53.346 12:23:27 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:39:53.346 12:23:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:53.346 12:23:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:53.605 12:23:27 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:39:53.605 12:23:27 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:39:53.605 12:23:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:53.605 12:23:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:53.605 12:23:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:53.605 12:23:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:53.605 12:23:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:53.865 12:23:27 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:39:53.865 12:23:27 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:39:53.865 12:23:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:53.865 12:23:28 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:39:53.865 12:23:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:39:54.124 12:23:28 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:39:54.124 12:23:28 keyring_file -- keyring/file.sh@78 -- # jq length 00:39:54.124 12:23:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:54.383 12:23:28 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:39:54.383 12:23:28 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.BI4p0Bvdvg 00:39:54.383 12:23:28 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.BI4p0Bvdvg 00:39:54.383 12:23:28 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:54.383 12:23:28 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.BI4p0Bvdvg 00:39:54.383 12:23:28 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:54.383 12:23:28 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:54.383 12:23:28 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:54.383 12:23:28 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:54.383 12:23:28 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BI4p0Bvdvg 00:39:54.383 12:23:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BI4p0Bvdvg 00:39:54.643 [2024-12-05 12:23:28.592020] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.BI4p0Bvdvg': 0100660 00:39:54.643 [2024-12-05 12:23:28.592047] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:39:54.643 request: 00:39:54.643 { 00:39:54.643 "name": "key0", 00:39:54.643 "path": "/tmp/tmp.BI4p0Bvdvg", 00:39:54.643 "method": "keyring_file_add_key", 00:39:54.643 "req_id": 1 00:39:54.643 } 00:39:54.643 Got JSON-RPC error response 00:39:54.643 response: 00:39:54.643 { 00:39:54.643 "code": -1, 00:39:54.643 "message": "Operation not permitted" 00:39:54.643 } 00:39:54.643 12:23:28 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:54.643 12:23:28 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:54.643 12:23:28 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:54.643 12:23:28 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:54.643 12:23:28 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.BI4p0Bvdvg 00:39:54.643 12:23:28 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BI4p0Bvdvg 00:39:54.643 12:23:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BI4p0Bvdvg 00:39:54.643 12:23:28 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.BI4p0Bvdvg 00:39:54.643 12:23:28 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:39:54.643 12:23:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:54.643 12:23:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:54.643 12:23:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:54.643 12:23:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:54.643 12:23:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:54.902 12:23:28 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:39:54.902 12:23:28 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:54.902 12:23:28 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:54.902 12:23:28 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:54.902 12:23:28 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:54.902 12:23:28 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:54.902 12:23:28 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:54.902 12:23:28 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:54.902 12:23:28 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:54.902 12:23:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:55.161 [2024-12-05 12:23:29.157524] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.BI4p0Bvdvg': No such file or directory 00:39:55.161 [2024-12-05 12:23:29.157550] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:39:55.161 [2024-12-05 12:23:29.157566] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:39:55.161 [2024-12-05 12:23:29.157573] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:39:55.161 [2024-12-05 12:23:29.157580] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:39:55.161 [2024-12-05 12:23:29.157586] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:39:55.161 request: 00:39:55.161 { 00:39:55.161 "name": "nvme0", 00:39:55.161 "trtype": "tcp", 00:39:55.161 "traddr": "127.0.0.1", 00:39:55.161 "adrfam": "ipv4", 00:39:55.161 "trsvcid": "4420", 00:39:55.161 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:55.161 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:39:55.161 "prchk_reftag": false, 00:39:55.161 "prchk_guard": false, 00:39:55.161 "hdgst": false, 00:39:55.161 "ddgst": false, 00:39:55.161 "psk": "key0", 00:39:55.161 "allow_unrecognized_csi": false, 00:39:55.161 "method": "bdev_nvme_attach_controller", 00:39:55.161 "req_id": 1 00:39:55.161 } 00:39:55.161 Got JSON-RPC error response 00:39:55.161 response: 00:39:55.161 { 00:39:55.161 "code": -19, 00:39:55.161 "message": "No such device" 00:39:55.161 } 00:39:55.161 12:23:29 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:55.161 12:23:29 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:55.161 12:23:29 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:55.161 12:23:29 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:55.161 12:23:29 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:39:55.161 12:23:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:55.421 12:23:29 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:55.421 12:23:29 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:55.421 12:23:29 keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:55.421 12:23:29 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:55.421 12:23:29 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:55.421 12:23:29 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:55.421 12:23:29 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.z3lJSr9QBD 00:39:55.421 12:23:29 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:55.421 12:23:29 keyring_file -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:55.421 12:23:29 keyring_file -- 
nvmf/common.sh@504 -- # local prefix key digest 00:39:55.421 12:23:29 keyring_file -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:39:55.421 12:23:29 keyring_file -- nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff 00:39:55.421 12:23:29 keyring_file -- nvmf/common.sh@506 -- # digest=0 00:39:55.421 12:23:29 keyring_file -- nvmf/common.sh@507 -- # python - 00:39:55.421 12:23:29 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.z3lJSr9QBD 00:39:55.421 12:23:29 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.z3lJSr9QBD 00:39:55.421 12:23:29 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.z3lJSr9QBD 00:39:55.421 12:23:29 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.z3lJSr9QBD 00:39:55.421 12:23:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.z3lJSr9QBD 00:39:55.421 12:23:29 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:55.421 12:23:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:55.680 nvme0n1 00:39:55.680 12:23:29 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:39:55.680 12:23:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:55.680 12:23:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:55.680 12:23:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:55.680 12:23:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:55.680 12:23:29 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:55.955 12:23:30 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:39:55.955 12:23:30 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:39:55.955 12:23:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:56.241 12:23:30 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:39:56.241 12:23:30 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:39:56.241 12:23:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:56.241 12:23:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:56.241 12:23:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:56.511 12:23:30 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:39:56.512 12:23:30 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:39:56.512 12:23:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:56.512 12:23:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:56.512 12:23:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:56.512 12:23:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:56.512 12:23:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:56.512 12:23:30 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:39:56.512 12:23:30 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:56.512 12:23:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:39:56.770 12:23:30 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:39:56.770 12:23:30 keyring_file -- keyring/file.sh@105 -- # jq length 00:39:56.770 12:23:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:57.028 12:23:31 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:39:57.028 12:23:31 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.z3lJSr9QBD 00:39:57.029 12:23:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.z3lJSr9QBD 00:39:57.287 12:23:31 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.ksU97cfYvV 00:39:57.287 12:23:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ksU97cfYvV 00:39:57.287 12:23:31 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:57.287 12:23:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:57.547 nvme0n1 00:39:57.547 12:23:31 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:39:57.547 12:23:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:39:57.806 12:23:31 keyring_file -- keyring/file.sh@113 -- # config='{ 00:39:57.806 "subsystems": [ 00:39:57.806 { 00:39:57.806 "subsystem": 
"keyring", 00:39:57.806 "config": [ 00:39:57.806 { 00:39:57.806 "method": "keyring_file_add_key", 00:39:57.806 "params": { 00:39:57.806 "name": "key0", 00:39:57.806 "path": "/tmp/tmp.z3lJSr9QBD" 00:39:57.806 } 00:39:57.806 }, 00:39:57.806 { 00:39:57.806 "method": "keyring_file_add_key", 00:39:57.806 "params": { 00:39:57.806 "name": "key1", 00:39:57.806 "path": "/tmp/tmp.ksU97cfYvV" 00:39:57.806 } 00:39:57.806 } 00:39:57.806 ] 00:39:57.806 }, 00:39:57.806 { 00:39:57.807 "subsystem": "iobuf", 00:39:57.807 "config": [ 00:39:57.807 { 00:39:57.807 "method": "iobuf_set_options", 00:39:57.807 "params": { 00:39:57.807 "small_pool_count": 8192, 00:39:57.807 "large_pool_count": 1024, 00:39:57.807 "small_bufsize": 8192, 00:39:57.807 "large_bufsize": 135168, 00:39:57.807 "enable_numa": false 00:39:57.807 } 00:39:57.807 } 00:39:57.807 ] 00:39:57.807 }, 00:39:57.807 { 00:39:57.807 "subsystem": "sock", 00:39:57.807 "config": [ 00:39:57.807 { 00:39:57.807 "method": "sock_set_default_impl", 00:39:57.807 "params": { 00:39:57.807 "impl_name": "posix" 00:39:57.807 } 00:39:57.807 }, 00:39:57.807 { 00:39:57.807 "method": "sock_impl_set_options", 00:39:57.807 "params": { 00:39:57.807 "impl_name": "ssl", 00:39:57.807 "recv_buf_size": 4096, 00:39:57.807 "send_buf_size": 4096, 00:39:57.807 "enable_recv_pipe": true, 00:39:57.807 "enable_quickack": false, 00:39:57.807 "enable_placement_id": 0, 00:39:57.807 "enable_zerocopy_send_server": true, 00:39:57.807 "enable_zerocopy_send_client": false, 00:39:57.807 "zerocopy_threshold": 0, 00:39:57.807 "tls_version": 0, 00:39:57.807 "enable_ktls": false 00:39:57.807 } 00:39:57.807 }, 00:39:57.807 { 00:39:57.807 "method": "sock_impl_set_options", 00:39:57.807 "params": { 00:39:57.807 "impl_name": "posix", 00:39:57.807 "recv_buf_size": 2097152, 00:39:57.807 "send_buf_size": 2097152, 00:39:57.807 "enable_recv_pipe": true, 00:39:57.807 "enable_quickack": false, 00:39:57.807 "enable_placement_id": 0, 00:39:57.807 "enable_zerocopy_send_server": true, 
00:39:57.807 "enable_zerocopy_send_client": false, 00:39:57.807 "zerocopy_threshold": 0, 00:39:57.807 "tls_version": 0, 00:39:57.807 "enable_ktls": false 00:39:57.807 } 00:39:57.807 } 00:39:57.807 ] 00:39:57.807 }, 00:39:57.807 { 00:39:57.807 "subsystem": "vmd", 00:39:57.807 "config": [] 00:39:57.807 }, 00:39:57.807 { 00:39:57.807 "subsystem": "accel", 00:39:57.807 "config": [ 00:39:57.807 { 00:39:57.807 "method": "accel_set_options", 00:39:57.807 "params": { 00:39:57.807 "small_cache_size": 128, 00:39:57.807 "large_cache_size": 16, 00:39:57.807 "task_count": 2048, 00:39:57.807 "sequence_count": 2048, 00:39:57.807 "buf_count": 2048 00:39:57.807 } 00:39:57.807 } 00:39:57.807 ] 00:39:57.807 }, 00:39:57.807 { 00:39:57.807 "subsystem": "bdev", 00:39:57.807 "config": [ 00:39:57.807 { 00:39:57.807 "method": "bdev_set_options", 00:39:57.807 "params": { 00:39:57.807 "bdev_io_pool_size": 65535, 00:39:57.807 "bdev_io_cache_size": 256, 00:39:57.807 "bdev_auto_examine": true, 00:39:57.807 "iobuf_small_cache_size": 128, 00:39:57.807 "iobuf_large_cache_size": 16 00:39:57.807 } 00:39:57.807 }, 00:39:57.807 { 00:39:57.807 "method": "bdev_raid_set_options", 00:39:57.807 "params": { 00:39:57.807 "process_window_size_kb": 1024, 00:39:57.807 "process_max_bandwidth_mb_sec": 0 00:39:57.807 } 00:39:57.807 }, 00:39:57.807 { 00:39:57.807 "method": "bdev_iscsi_set_options", 00:39:57.807 "params": { 00:39:57.807 "timeout_sec": 30 00:39:57.807 } 00:39:57.807 }, 00:39:57.807 { 00:39:57.807 "method": "bdev_nvme_set_options", 00:39:57.807 "params": { 00:39:57.807 "action_on_timeout": "none", 00:39:57.807 "timeout_us": 0, 00:39:57.807 "timeout_admin_us": 0, 00:39:57.807 "keep_alive_timeout_ms": 10000, 00:39:57.807 "arbitration_burst": 0, 00:39:57.807 "low_priority_weight": 0, 00:39:57.807 "medium_priority_weight": 0, 00:39:57.807 "high_priority_weight": 0, 00:39:57.807 "nvme_adminq_poll_period_us": 10000, 00:39:57.807 "nvme_ioq_poll_period_us": 0, 00:39:57.807 "io_queue_requests": 512, 
00:39:57.807 "delay_cmd_submit": true, 00:39:57.807 "transport_retry_count": 4, 00:39:57.807 "bdev_retry_count": 3, 00:39:57.807 "transport_ack_timeout": 0, 00:39:57.807 "ctrlr_loss_timeout_sec": 0, 00:39:57.807 "reconnect_delay_sec": 0, 00:39:57.807 "fast_io_fail_timeout_sec": 0, 00:39:57.807 "disable_auto_failback": false, 00:39:57.807 "generate_uuids": false, 00:39:57.807 "transport_tos": 0, 00:39:57.807 "nvme_error_stat": false, 00:39:57.807 "rdma_srq_size": 0, 00:39:57.807 "io_path_stat": false, 00:39:57.807 "allow_accel_sequence": false, 00:39:57.807 "rdma_max_cq_size": 0, 00:39:57.807 "rdma_cm_event_timeout_ms": 0, 00:39:57.807 "dhchap_digests": [ 00:39:57.807 "sha256", 00:39:57.807 "sha384", 00:39:57.807 "sha512" 00:39:57.807 ], 00:39:57.807 "dhchap_dhgroups": [ 00:39:57.807 "null", 00:39:57.807 "ffdhe2048", 00:39:57.807 "ffdhe3072", 00:39:57.807 "ffdhe4096", 00:39:57.807 "ffdhe6144", 00:39:57.807 "ffdhe8192" 00:39:57.807 ] 00:39:57.807 } 00:39:57.807 }, 00:39:57.807 { 00:39:57.807 "method": "bdev_nvme_attach_controller", 00:39:57.807 "params": { 00:39:57.807 "name": "nvme0", 00:39:57.807 "trtype": "TCP", 00:39:57.807 "adrfam": "IPv4", 00:39:57.807 "traddr": "127.0.0.1", 00:39:57.807 "trsvcid": "4420", 00:39:57.807 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:57.807 "prchk_reftag": false, 00:39:57.807 "prchk_guard": false, 00:39:57.807 "ctrlr_loss_timeout_sec": 0, 00:39:57.807 "reconnect_delay_sec": 0, 00:39:57.807 "fast_io_fail_timeout_sec": 0, 00:39:57.807 "psk": "key0", 00:39:57.807 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:57.807 "hdgst": false, 00:39:57.807 "ddgst": false, 00:39:57.807 "multipath": "multipath" 00:39:57.807 } 00:39:57.807 }, 00:39:57.807 { 00:39:57.807 "method": "bdev_nvme_set_hotplug", 00:39:57.807 "params": { 00:39:57.807 "period_us": 100000, 00:39:57.808 "enable": false 00:39:57.808 } 00:39:57.808 }, 00:39:57.808 { 00:39:57.808 "method": "bdev_wait_for_examine" 00:39:57.808 } 00:39:57.808 ] 00:39:57.808 }, 00:39:57.808 { 
00:39:57.808 "subsystem": "nbd", 00:39:57.808 "config": [] 00:39:57.808 } 00:39:57.808 ] 00:39:57.808 }' 00:39:57.808 12:23:31 keyring_file -- keyring/file.sh@115 -- # killprocess 363866 00:39:57.808 12:23:31 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 363866 ']' 00:39:57.808 12:23:31 keyring_file -- common/autotest_common.sh@958 -- # kill -0 363866 00:39:57.808 12:23:31 keyring_file -- common/autotest_common.sh@959 -- # uname 00:39:57.808 12:23:31 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:57.808 12:23:31 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 363866 00:39:57.808 12:23:31 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:57.808 12:23:31 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:57.808 12:23:31 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 363866' 00:39:57.808 killing process with pid 363866 00:39:57.808 12:23:31 keyring_file -- common/autotest_common.sh@973 -- # kill 363866 00:39:57.808 Received shutdown signal, test time was about 1.000000 seconds 00:39:57.808 00:39:57.808 Latency(us) 00:39:57.808 [2024-12-05T11:23:32.004Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:57.808 [2024-12-05T11:23:32.004Z] =================================================================================================================== 00:39:57.808 [2024-12-05T11:23:32.004Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:57.808 12:23:31 keyring_file -- common/autotest_common.sh@978 -- # wait 363866 00:39:58.067 12:23:32 keyring_file -- keyring/file.sh@118 -- # bperfpid=365390 00:39:58.067 12:23:32 keyring_file -- keyring/file.sh@120 -- # waitforlisten 365390 /var/tmp/bperf.sock 00:39:58.067 12:23:32 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 365390 ']' 00:39:58.067 12:23:32 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:39:58.067 12:23:32 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:58.067 12:23:32 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:58.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:58.067 12:23:32 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:58.067 12:23:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:58.067 12:23:32 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:39:58.067 12:23:32 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:39:58.067 "subsystems": [ 00:39:58.067 { 00:39:58.067 "subsystem": "keyring", 00:39:58.067 "config": [ 00:39:58.067 { 00:39:58.067 "method": "keyring_file_add_key", 00:39:58.067 "params": { 00:39:58.067 "name": "key0", 00:39:58.067 "path": "/tmp/tmp.z3lJSr9QBD" 00:39:58.067 } 00:39:58.067 }, 00:39:58.067 { 00:39:58.067 "method": "keyring_file_add_key", 00:39:58.067 "params": { 00:39:58.067 "name": "key1", 00:39:58.067 "path": "/tmp/tmp.ksU97cfYvV" 00:39:58.068 } 00:39:58.068 } 00:39:58.068 ] 00:39:58.068 }, 00:39:58.068 { 00:39:58.068 "subsystem": "iobuf", 00:39:58.068 "config": [ 00:39:58.068 { 00:39:58.068 "method": "iobuf_set_options", 00:39:58.068 "params": { 00:39:58.068 "small_pool_count": 8192, 00:39:58.068 "large_pool_count": 1024, 00:39:58.068 "small_bufsize": 8192, 00:39:58.068 "large_bufsize": 135168, 00:39:58.068 "enable_numa": false 00:39:58.068 } 00:39:58.068 } 00:39:58.068 ] 00:39:58.068 }, 00:39:58.068 { 00:39:58.068 "subsystem": "sock", 00:39:58.068 "config": [ 00:39:58.068 { 00:39:58.068 "method": "sock_set_default_impl", 00:39:58.068 "params": { 00:39:58.068 "impl_name": "posix" 00:39:58.068 } 00:39:58.068 }, 
00:39:58.068 { 00:39:58.068 "method": "sock_impl_set_options", 00:39:58.068 "params": { 00:39:58.068 "impl_name": "ssl", 00:39:58.068 "recv_buf_size": 4096, 00:39:58.068 "send_buf_size": 4096, 00:39:58.068 "enable_recv_pipe": true, 00:39:58.068 "enable_quickack": false, 00:39:58.068 "enable_placement_id": 0, 00:39:58.068 "enable_zerocopy_send_server": true, 00:39:58.068 "enable_zerocopy_send_client": false, 00:39:58.068 "zerocopy_threshold": 0, 00:39:58.068 "tls_version": 0, 00:39:58.068 "enable_ktls": false 00:39:58.068 } 00:39:58.068 }, 00:39:58.068 { 00:39:58.068 "method": "sock_impl_set_options", 00:39:58.068 "params": { 00:39:58.068 "impl_name": "posix", 00:39:58.068 "recv_buf_size": 2097152, 00:39:58.068 "send_buf_size": 2097152, 00:39:58.068 "enable_recv_pipe": true, 00:39:58.068 "enable_quickack": false, 00:39:58.068 "enable_placement_id": 0, 00:39:58.068 "enable_zerocopy_send_server": true, 00:39:58.068 "enable_zerocopy_send_client": false, 00:39:58.068 "zerocopy_threshold": 0, 00:39:58.068 "tls_version": 0, 00:39:58.068 "enable_ktls": false 00:39:58.068 } 00:39:58.068 } 00:39:58.068 ] 00:39:58.068 }, 00:39:58.068 { 00:39:58.068 "subsystem": "vmd", 00:39:58.068 "config": [] 00:39:58.068 }, 00:39:58.068 { 00:39:58.068 "subsystem": "accel", 00:39:58.068 "config": [ 00:39:58.068 { 00:39:58.068 "method": "accel_set_options", 00:39:58.068 "params": { 00:39:58.068 "small_cache_size": 128, 00:39:58.068 "large_cache_size": 16, 00:39:58.068 "task_count": 2048, 00:39:58.068 "sequence_count": 2048, 00:39:58.068 "buf_count": 2048 00:39:58.068 } 00:39:58.068 } 00:39:58.068 ] 00:39:58.068 }, 00:39:58.068 { 00:39:58.068 "subsystem": "bdev", 00:39:58.068 "config": [ 00:39:58.068 { 00:39:58.068 "method": "bdev_set_options", 00:39:58.068 "params": { 00:39:58.068 "bdev_io_pool_size": 65535, 00:39:58.068 "bdev_io_cache_size": 256, 00:39:58.068 "bdev_auto_examine": true, 00:39:58.068 "iobuf_small_cache_size": 128, 00:39:58.068 "iobuf_large_cache_size": 16 00:39:58.068 } 
00:39:58.068 }, 00:39:58.068 { 00:39:58.068 "method": "bdev_raid_set_options", 00:39:58.068 "params": { 00:39:58.068 "process_window_size_kb": 1024, 00:39:58.068 "process_max_bandwidth_mb_sec": 0 00:39:58.068 } 00:39:58.068 }, 00:39:58.068 { 00:39:58.068 "method": "bdev_iscsi_set_options", 00:39:58.068 "params": { 00:39:58.068 "timeout_sec": 30 00:39:58.068 } 00:39:58.068 }, 00:39:58.068 { 00:39:58.068 "method": "bdev_nvme_set_options", 00:39:58.068 "params": { 00:39:58.068 "action_on_timeout": "none", 00:39:58.068 "timeout_us": 0, 00:39:58.068 "timeout_admin_us": 0, 00:39:58.068 "keep_alive_timeout_ms": 10000, 00:39:58.068 "arbitration_burst": 0, 00:39:58.068 "low_priority_weight": 0, 00:39:58.068 "medium_priority_weight": 0, 00:39:58.068 "high_priority_weight": 0, 00:39:58.068 "nvme_adminq_poll_period_us": 10000, 00:39:58.068 "nvme_ioq_poll_period_us": 0, 00:39:58.068 "io_queue_requests": 512, 00:39:58.068 "delay_cmd_submit": true, 00:39:58.068 "transport_retry_count": 4, 00:39:58.068 "bdev_retry_count": 3, 00:39:58.068 "transport_ack_timeout": 0, 00:39:58.068 "ctrlr_loss_timeout_sec": 0, 00:39:58.068 "reconnect_delay_sec": 0, 00:39:58.068 "fast_io_fail_timeout_sec": 0, 00:39:58.068 "disable_auto_failback": false, 00:39:58.068 "generate_uuids": false, 00:39:58.068 "transport_tos": 0, 00:39:58.068 "nvme_error_stat": false, 00:39:58.068 "rdma_srq_size": 0, 00:39:58.068 "io_path_stat": false, 00:39:58.068 "allow_accel_sequence": false, 00:39:58.068 "rdma_max_cq_size": 0, 00:39:58.068 "rdma_cm_event_timeout_ms": 0, 00:39:58.068 "dhchap_digests": [ 00:39:58.068 "sha256", 00:39:58.068 "sha384", 00:39:58.068 "sha512" 00:39:58.068 ], 00:39:58.068 "dhchap_dhgroups": [ 00:39:58.068 "null", 00:39:58.068 "ffdhe2048", 00:39:58.068 "ffdhe3072", 00:39:58.068 "ffdhe4096", 00:39:58.068 "ffdhe6144", 00:39:58.068 "ffdhe8192" 00:39:58.068 ] 00:39:58.068 } 00:39:58.068 }, 00:39:58.068 { 00:39:58.068 "method": "bdev_nvme_attach_controller", 00:39:58.068 "params": { 00:39:58.068 
"name": "nvme0", 00:39:58.068 "trtype": "TCP", 00:39:58.068 "adrfam": "IPv4", 00:39:58.068 "traddr": "127.0.0.1", 00:39:58.068 "trsvcid": "4420", 00:39:58.068 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:58.068 "prchk_reftag": false, 00:39:58.068 "prchk_guard": false, 00:39:58.068 "ctrlr_loss_timeout_sec": 0, 00:39:58.068 "reconnect_delay_sec": 0, 00:39:58.068 "fast_io_fail_timeout_sec": 0, 00:39:58.068 "psk": "key0", 00:39:58.068 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:58.068 "hdgst": false, 00:39:58.068 "ddgst": false, 00:39:58.068 "multipath": "multipath" 00:39:58.068 } 00:39:58.068 }, 00:39:58.068 { 00:39:58.068 "method": "bdev_nvme_set_hotplug", 00:39:58.068 "params": { 00:39:58.068 "period_us": 100000, 00:39:58.068 "enable": false 00:39:58.068 } 00:39:58.068 }, 00:39:58.068 { 00:39:58.068 "method": "bdev_wait_for_examine" 00:39:58.068 } 00:39:58.068 ] 00:39:58.068 }, 00:39:58.068 { 00:39:58.068 "subsystem": "nbd", 00:39:58.068 "config": [] 00:39:58.068 } 00:39:58.068 ] 00:39:58.068 }' 00:39:58.068 [2024-12-05 12:23:32.193739] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:39:58.068 [2024-12-05 12:23:32.193787] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid365390 ] 00:39:58.328 [2024-12-05 12:23:32.267991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:58.328 [2024-12-05 12:23:32.307947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:58.328 [2024-12-05 12:23:32.469741] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:58.895 12:23:33 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:58.895 12:23:33 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:39:58.895 12:23:33 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:39:58.895 12:23:33 keyring_file -- keyring/file.sh@121 -- # jq length 00:39:58.895 12:23:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:59.153 12:23:33 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:39:59.153 12:23:33 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:39:59.153 12:23:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:59.153 12:23:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:59.153 12:23:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:59.153 12:23:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:59.153 12:23:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:59.412 12:23:33 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:39:59.412 12:23:33 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:39:59.412 12:23:33 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:59.412 12:23:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:59.412 12:23:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:59.412 12:23:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:59.412 12:23:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:59.412 12:23:33 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:39:59.412 12:23:33 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:39:59.412 12:23:33 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:39:59.412 12:23:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:39:59.671 12:23:33 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:39:59.671 12:23:33 keyring_file -- keyring/file.sh@1 -- # cleanup 00:39:59.671 12:23:33 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.z3lJSr9QBD /tmp/tmp.ksU97cfYvV 00:39:59.671 12:23:33 keyring_file -- keyring/file.sh@20 -- # killprocess 365390 00:39:59.671 12:23:33 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 365390 ']' 00:39:59.671 12:23:33 keyring_file -- common/autotest_common.sh@958 -- # kill -0 365390 00:39:59.671 12:23:33 keyring_file -- common/autotest_common.sh@959 -- # uname 00:39:59.671 12:23:33 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:59.671 12:23:33 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 365390 00:39:59.671 12:23:33 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:59.671 12:23:33 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:59.671 12:23:33 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 365390' 00:39:59.671 killing process with pid 365390 00:39:59.671 12:23:33 keyring_file -- common/autotest_common.sh@973 -- # kill 365390 00:39:59.671 Received shutdown signal, test time was about 1.000000 seconds 00:39:59.671 00:39:59.671 Latency(us) 00:39:59.671 [2024-12-05T11:23:33.867Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:59.671 [2024-12-05T11:23:33.867Z] =================================================================================================================== 00:39:59.671 [2024-12-05T11:23:33.868Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:39:59.672 12:23:33 keyring_file -- common/autotest_common.sh@978 -- # wait 365390 00:39:59.930 12:23:34 keyring_file -- keyring/file.sh@21 -- # killprocess 363861 00:39:59.930 12:23:34 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 363861 ']' 00:39:59.931 12:23:34 keyring_file -- common/autotest_common.sh@958 -- # kill -0 363861 00:39:59.931 12:23:34 keyring_file -- common/autotest_common.sh@959 -- # uname 00:39:59.931 12:23:34 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:59.931 12:23:34 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 363861 00:39:59.931 12:23:34 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:59.931 12:23:34 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:59.931 12:23:34 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 363861' 00:39:59.931 killing process with pid 363861 00:39:59.931 12:23:34 keyring_file -- common/autotest_common.sh@973 -- # kill 363861 00:39:59.931 12:23:34 keyring_file -- common/autotest_common.sh@978 -- # wait 363861 00:40:00.189 00:40:00.189 real 0m11.611s 00:40:00.189 user 0m28.873s 00:40:00.189 sys 0m2.654s 00:40:00.189 12:23:34 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:00.189 12:23:34 
keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:00.189 ************************************ 00:40:00.189 END TEST keyring_file 00:40:00.189 ************************************ 00:40:00.449 12:23:34 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:40:00.449 12:23:34 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:40:00.449 12:23:34 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:00.449 12:23:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:00.449 12:23:34 -- common/autotest_common.sh@10 -- # set +x 00:40:00.449 ************************************ 00:40:00.449 START TEST keyring_linux 00:40:00.449 ************************************ 00:40:00.449 12:23:34 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:40:00.449 Joined session keyring: 208360746 00:40:00.449 * Looking for test storage... 
00:40:00.449 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:40:00.449 12:23:34 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:00.449 12:23:34 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:40:00.449 12:23:34 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:00.449 12:23:34 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:00.449 12:23:34 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:00.449 12:23:34 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:00.449 12:23:34 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:00.449 12:23:34 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:40:00.449 12:23:34 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:40:00.449 12:23:34 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:40:00.449 12:23:34 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:40:00.449 12:23:34 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:40:00.449 12:23:34 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:40:00.449 12:23:34 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:40:00.449 12:23:34 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:00.449 12:23:34 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:40:00.449 12:23:34 keyring_linux -- scripts/common.sh@345 -- # : 1 00:40:00.449 12:23:34 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:00.449 12:23:34 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:00.449 12:23:34 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:40:00.449 12:23:34 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:40:00.449 12:23:34 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:00.449 12:23:34 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:40:00.449 12:23:34 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:40:00.449 12:23:34 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:40:00.449 12:23:34 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:40:00.449 12:23:34 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:00.449 12:23:34 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:40:00.449 12:23:34 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:40:00.449 12:23:34 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:00.449 12:23:34 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:00.449 12:23:34 keyring_linux -- scripts/common.sh@368 -- # return 0 00:40:00.449 12:23:34 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:00.449 12:23:34 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:00.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:00.449 --rc genhtml_branch_coverage=1 00:40:00.449 --rc genhtml_function_coverage=1 00:40:00.449 --rc genhtml_legend=1 00:40:00.449 --rc geninfo_all_blocks=1 00:40:00.449 --rc geninfo_unexecuted_blocks=1 00:40:00.449 00:40:00.449 ' 00:40:00.449 12:23:34 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:00.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:00.449 --rc genhtml_branch_coverage=1 00:40:00.449 --rc genhtml_function_coverage=1 00:40:00.449 --rc genhtml_legend=1 00:40:00.449 --rc geninfo_all_blocks=1 00:40:00.449 --rc geninfo_unexecuted_blocks=1 00:40:00.449 00:40:00.449 ' 
00:40:00.449 12:23:34 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:00.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:00.449 --rc genhtml_branch_coverage=1 00:40:00.449 --rc genhtml_function_coverage=1 00:40:00.449 --rc genhtml_legend=1 00:40:00.449 --rc geninfo_all_blocks=1 00:40:00.449 --rc geninfo_unexecuted_blocks=1 00:40:00.449 00:40:00.449 ' 00:40:00.449 12:23:34 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:00.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:00.449 --rc genhtml_branch_coverage=1 00:40:00.449 --rc genhtml_function_coverage=1 00:40:00.449 --rc genhtml_legend=1 00:40:00.449 --rc geninfo_all_blocks=1 00:40:00.449 --rc geninfo_unexecuted_blocks=1 00:40:00.449 00:40:00.449 ' 00:40:00.449 12:23:34 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:40:00.449 12:23:34 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:00.449 12:23:34 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:40:00.449 12:23:34 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:00.449 12:23:34 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:00.449 12:23:34 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:00.449 12:23:34 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:00.449 12:23:34 keyring_linux -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:00.449 12:23:34 keyring_linux -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:40:00.449 12:23:34 keyring_linux -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:00.449 12:23:34 keyring_linux -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:40:00.449 12:23:34 keyring_linux -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:40:00.450 12:23:34 
keyring_linux -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:40:00.450 12:23:34 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:00.450 12:23:34 keyring_linux -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:40:00.450 12:23:34 keyring_linux -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:40:00.450 12:23:34 keyring_linux -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:00.450 12:23:34 keyring_linux -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:00.450 12:23:34 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:40:00.450 12:23:34 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:00.450 12:23:34 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:00.450 12:23:34 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:00.450 12:23:34 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:00.450 12:23:34 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:00.450 12:23:34 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:00.450 12:23:34 keyring_linux -- paths/export.sh@5 -- # export PATH 00:40:00.450 12:23:34 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:00.450 12:23:34 keyring_linux -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:40:00.450 12:23:34 keyring_linux -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:40:00.450 12:23:34 keyring_linux -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:40:00.450 12:23:34 keyring_linux -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:40:00.450 12:23:34 keyring_linux -- nvmf/common.sh@50 -- # : 0 00:40:00.450 12:23:34 keyring_linux -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:40:00.450 12:23:34 keyring_linux -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:40:00.450 12:23:34 keyring_linux -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:40:00.450 12:23:34 keyring_linux -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:00.450 12:23:34 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:00.450 12:23:34 keyring_linux -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:40:00.450 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:40:00.450 12:23:34 keyring_linux -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:40:00.450 12:23:34 keyring_linux -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:40:00.450 12:23:34 keyring_linux -- nvmf/common.sh@54 -- # have_pci_nics=0 00:40:00.450 12:23:34 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:40:00.450 12:23:34 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:40:00.450 12:23:34 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:40:00.450 12:23:34 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:40:00.450 12:23:34 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:40:00.450 12:23:34 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:40:00.450 12:23:34 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:40:00.450 12:23:34 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:40:00.450 12:23:34 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:40:00.450 12:23:34 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:00.450 12:23:34 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:40:00.450 12:23:34 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:40:00.450 12:23:34 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:00.450 12:23:34 keyring_linux -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:00.450 12:23:34 keyring_linux -- nvmf/common.sh@504 -- # local prefix key digest 00:40:00.450 12:23:34 keyring_linux -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:40:00.450 12:23:34 keyring_linux -- nvmf/common.sh@506 -- # 
key=00112233445566778899aabbccddeeff 00:40:00.450 12:23:34 keyring_linux -- nvmf/common.sh@506 -- # digest=0 00:40:00.450 12:23:34 keyring_linux -- nvmf/common.sh@507 -- # python - 00:40:00.710 12:23:34 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:40:00.710 12:23:34 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:40:00.710 /tmp/:spdk-test:key0 00:40:00.710 12:23:34 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:40:00.710 12:23:34 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:40:00.710 12:23:34 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:40:00.710 12:23:34 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:40:00.710 12:23:34 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:40:00.710 12:23:34 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:40:00.710 12:23:34 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:40:00.710 12:23:34 keyring_linux -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:40:00.710 12:23:34 keyring_linux -- nvmf/common.sh@504 -- # local prefix key digest 00:40:00.710 12:23:34 keyring_linux -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:40:00.710 12:23:34 keyring_linux -- nvmf/common.sh@506 -- # key=112233445566778899aabbccddeeff00 00:40:00.710 12:23:34 keyring_linux -- nvmf/common.sh@506 -- # digest=0 00:40:00.710 12:23:34 keyring_linux -- nvmf/common.sh@507 -- # python - 00:40:00.710 12:23:34 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:40:00.710 12:23:34 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:40:00.710 /tmp/:spdk-test:key1 00:40:00.710 12:23:34 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=365940 00:40:00.710 12:23:34 keyring_linux -- keyring/linux.sh@53 -- # 
waitforlisten 365940 00:40:00.710 12:23:34 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:40:00.710 12:23:34 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 365940 ']' 00:40:00.710 12:23:34 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:00.710 12:23:34 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:00.710 12:23:34 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:00.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:00.710 12:23:34 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:00.710 12:23:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:00.710 [2024-12-05 12:23:34.761086] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:40:00.710 [2024-12-05 12:23:34.761135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid365940 ] 00:40:00.710 [2024-12-05 12:23:34.835441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:00.710 [2024-12-05 12:23:34.877041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:00.969 12:23:35 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:00.969 12:23:35 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:40:00.969 12:23:35 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:40:00.969 12:23:35 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:00.969 12:23:35 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:00.969 [2024-12-05 12:23:35.097384] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:00.969 null0 00:40:00.969 [2024-12-05 12:23:35.129446] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:40:00.969 [2024-12-05 12:23:35.129783] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:40:00.969 12:23:35 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:00.969 12:23:35 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:40:00.969 357712105 00:40:00.969 12:23:35 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:40:00.969 623530179 00:40:00.969 12:23:35 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=365948 00:40:00.969 12:23:35 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 365948 /var/tmp/bperf.sock 00:40:00.969 12:23:35 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:40:00.969 12:23:35 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 365948 ']' 00:40:00.969 12:23:35 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:00.969 12:23:35 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:00.969 12:23:35 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:00.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:00.969 12:23:35 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:00.969 12:23:35 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:01.227 [2024-12-05 12:23:35.201776] Starting SPDK v25.01-pre git sha1 b7fa4c06b / DPDK 24.03.0 initialization... 
00:40:01.227 [2024-12-05 12:23:35.201816] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid365948 ] 00:40:01.227 [2024-12-05 12:23:35.272763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:01.227 [2024-12-05 12:23:35.312612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:01.227 12:23:35 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:01.227 12:23:35 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:40:01.227 12:23:35 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:40:01.227 12:23:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:40:01.485 12:23:35 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:40:01.485 12:23:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:40:01.744 12:23:35 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:40:01.744 12:23:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:40:02.003 [2024-12-05 12:23:35.973796] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:02.003 nvme0n1 00:40:02.003 12:23:36 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:40:02.003 12:23:36 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:40:02.003 12:23:36 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:40:02.003 12:23:36 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:40:02.003 12:23:36 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:40:02.003 12:23:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:02.262 12:23:36 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:40:02.262 12:23:36 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:40:02.262 12:23:36 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:40:02.262 12:23:36 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:40:02.262 12:23:36 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:02.262 12:23:36 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:40:02.262 12:23:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:02.262 12:23:36 keyring_linux -- keyring/linux.sh@25 -- # sn=357712105 00:40:02.262 12:23:36 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:40:02.262 12:23:36 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:40:02.262 12:23:36 keyring_linux -- keyring/linux.sh@26 -- # [[ 357712105 == \3\5\7\7\1\2\1\0\5 ]] 00:40:02.262 12:23:36 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 357712105 00:40:02.262 12:23:36 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:40:02.262 12:23:36 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:02.521 Running I/O for 1 seconds... 00:40:03.458 21770.00 IOPS, 85.04 MiB/s 00:40:03.458 Latency(us) 00:40:03.458 [2024-12-05T11:23:37.654Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:03.458 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:40:03.458 nvme0n1 : 1.01 21771.46 85.04 0.00 0.00 5860.18 4899.60 15104.49 00:40:03.458 [2024-12-05T11:23:37.654Z] =================================================================================================================== 00:40:03.458 [2024-12-05T11:23:37.654Z] Total : 21771.46 85.04 0.00 0.00 5860.18 4899.60 15104.49 00:40:03.458 { 00:40:03.458 "results": [ 00:40:03.458 { 00:40:03.458 "job": "nvme0n1", 00:40:03.458 "core_mask": "0x2", 00:40:03.458 "workload": "randread", 00:40:03.458 "status": "finished", 00:40:03.458 "queue_depth": 128, 00:40:03.458 "io_size": 4096, 00:40:03.458 "runtime": 1.005812, 00:40:03.458 "iops": 21771.464249780278, 00:40:03.458 "mibps": 85.04478222570421, 00:40:03.458 "io_failed": 0, 00:40:03.458 "io_timeout": 0, 00:40:03.458 "avg_latency_us": 5860.176053651344, 00:40:03.458 "min_latency_us": 4899.596190476191, 00:40:03.458 "max_latency_us": 15104.487619047619 00:40:03.458 } 00:40:03.458 ], 00:40:03.458 "core_count": 1 00:40:03.458 } 00:40:03.458 12:23:37 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:03.458 12:23:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:40:03.717 12:23:37 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:40:03.717 12:23:37 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:40:03.717 12:23:37 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:40:03.717 12:23:37 
keyring_linux -- keyring/linux.sh@22 -- # jq length 00:40:03.717 12:23:37 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:40:03.717 12:23:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:03.976 12:23:37 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:40:03.976 12:23:37 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:40:03.976 12:23:37 keyring_linux -- keyring/linux.sh@23 -- # return 00:40:03.976 12:23:37 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:03.976 12:23:37 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:40:03.976 12:23:37 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:03.976 12:23:37 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:40:03.976 12:23:37 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:03.976 12:23:37 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:40:03.976 12:23:37 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:03.976 12:23:37 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:03.976 12:23:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:03.976 [2024-12-05 12:23:38.128537] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:40:03.976 [2024-12-05 12:23:38.129304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f3afa0 (107): Transport endpoint is not connected 00:40:03.977 [2024-12-05 12:23:38.130299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f3afa0 (9): Bad file descriptor 00:40:03.977 [2024-12-05 12:23:38.131301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:40:03.977 [2024-12-05 12:23:38.131310] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:40:03.977 [2024-12-05 12:23:38.131317] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:40:03.977 [2024-12-05 12:23:38.131325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:40:03.977 request: 00:40:03.977 { 00:40:03.977 "name": "nvme0", 00:40:03.977 "trtype": "tcp", 00:40:03.977 "traddr": "127.0.0.1", 00:40:03.977 "adrfam": "ipv4", 00:40:03.977 "trsvcid": "4420", 00:40:03.977 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:03.977 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:03.977 "prchk_reftag": false, 00:40:03.977 "prchk_guard": false, 00:40:03.977 "hdgst": false, 00:40:03.977 "ddgst": false, 00:40:03.977 "psk": ":spdk-test:key1", 00:40:03.977 "allow_unrecognized_csi": false, 00:40:03.977 "method": "bdev_nvme_attach_controller", 00:40:03.977 "req_id": 1 00:40:03.977 } 00:40:03.977 Got JSON-RPC error response 00:40:03.977 response: 00:40:03.977 { 00:40:03.977 "code": -5, 00:40:03.977 "message": "Input/output error" 00:40:03.977 } 00:40:03.977 12:23:38 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:40:03.977 12:23:38 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:03.977 12:23:38 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:03.977 12:23:38 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:03.977 12:23:38 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:40:03.977 12:23:38 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:40:03.977 12:23:38 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:40:03.977 12:23:38 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:40:03.977 12:23:38 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:40:03.977 12:23:38 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:40:03.977 12:23:38 keyring_linux -- keyring/linux.sh@33 -- # sn=357712105 00:40:03.977 12:23:38 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 357712105 00:40:03.977 1 links removed 00:40:03.977 12:23:38 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:40:03.977 12:23:38 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:40:03.977 
12:23:38 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:40:03.977 12:23:38 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:40:03.977 12:23:38 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:40:04.235 12:23:38 keyring_linux -- keyring/linux.sh@33 -- # sn=623530179 00:40:04.235 12:23:38 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 623530179 00:40:04.235 1 links removed 00:40:04.235 12:23:38 keyring_linux -- keyring/linux.sh@41 -- # killprocess 365948 00:40:04.235 12:23:38 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 365948 ']' 00:40:04.235 12:23:38 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 365948 00:40:04.235 12:23:38 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:40:04.235 12:23:38 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:04.235 12:23:38 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 365948 00:40:04.235 12:23:38 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:04.235 12:23:38 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:04.235 12:23:38 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 365948' 00:40:04.235 killing process with pid 365948 00:40:04.235 12:23:38 keyring_linux -- common/autotest_common.sh@973 -- # kill 365948 00:40:04.235 Received shutdown signal, test time was about 1.000000 seconds 00:40:04.236 00:40:04.236 Latency(us) 00:40:04.236 [2024-12-05T11:23:38.432Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:04.236 [2024-12-05T11:23:38.432Z] =================================================================================================================== 00:40:04.236 [2024-12-05T11:23:38.432Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:04.236 12:23:38 keyring_linux -- common/autotest_common.sh@978 -- # wait 365948 
00:40:04.236 12:23:38 keyring_linux -- keyring/linux.sh@42 -- # killprocess 365940 00:40:04.236 12:23:38 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 365940 ']' 00:40:04.236 12:23:38 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 365940 00:40:04.236 12:23:38 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:40:04.236 12:23:38 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:04.236 12:23:38 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 365940 00:40:04.495 12:23:38 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:04.495 12:23:38 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:04.495 12:23:38 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 365940' 00:40:04.495 killing process with pid 365940 00:40:04.495 12:23:38 keyring_linux -- common/autotest_common.sh@973 -- # kill 365940 00:40:04.495 12:23:38 keyring_linux -- common/autotest_common.sh@978 -- # wait 365940 00:40:04.754 00:40:04.754 real 0m4.311s 00:40:04.754 user 0m8.092s 00:40:04.754 sys 0m1.453s 00:40:04.754 12:23:38 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:04.754 12:23:38 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:04.754 ************************************ 00:40:04.754 END TEST keyring_linux 00:40:04.754 ************************************ 00:40:04.754 12:23:38 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:40:04.754 12:23:38 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:40:04.754 12:23:38 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:40:04.754 12:23:38 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:40:04.754 12:23:38 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:40:04.754 12:23:38 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:40:04.754 12:23:38 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:40:04.754 12:23:38 -- spdk/autotest.sh@346 -- # '[' 0 
-eq 1 ']' 00:40:04.754 12:23:38 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:40:04.754 12:23:38 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:40:04.754 12:23:38 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:40:04.754 12:23:38 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:40:04.754 12:23:38 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:40:04.754 12:23:38 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:40:04.754 12:23:38 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:40:04.754 12:23:38 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:40:04.755 12:23:38 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:40:04.755 12:23:38 -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:04.755 12:23:38 -- common/autotest_common.sh@10 -- # set +x 00:40:04.755 12:23:38 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:40:04.755 12:23:38 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:40:04.755 12:23:38 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:40:04.755 12:23:38 -- common/autotest_common.sh@10 -- # set +x 00:40:10.041 INFO: APP EXITING 00:40:10.041 INFO: killing all VMs 00:40:10.041 INFO: killing vhost app 00:40:10.041 INFO: EXIT DONE 00:40:12.579 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:40:12.579 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:40:12.579 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:40:12.579 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:40:12.579 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:40:12.579 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:40:12.579 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:40:12.579 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:40:12.579 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:40:12.579 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:40:12.579 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:40:12.579 
0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:40:12.579 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:40:12.579 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:40:12.579 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:40:12.579 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:40:12.579 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:40:15.871 Cleaning 00:40:15.871 Removing: /var/run/dpdk/spdk0/config 00:40:15.871 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:40:15.871 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:40:15.871 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:40:15.871 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:40:15.871 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:40:15.871 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:40:15.871 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:40:15.871 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:40:15.871 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:40:15.871 Removing: /var/run/dpdk/spdk0/hugepage_info 00:40:15.871 Removing: /var/run/dpdk/spdk1/config 00:40:15.871 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:40:15.871 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:40:15.871 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:40:15.871 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:40:15.871 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:40:15.871 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:40:15.871 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:40:15.871 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:40:15.871 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:40:15.871 Removing: /var/run/dpdk/spdk1/hugepage_info 00:40:15.871 Removing: /var/run/dpdk/spdk2/config 00:40:15.871 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:40:15.871 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:40:15.871 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:40:15.871 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:40:15.871 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:40:15.871 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:40:15.871 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:40:15.871 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:40:15.871 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:40:15.871 Removing: /var/run/dpdk/spdk2/hugepage_info 00:40:15.871 Removing: /var/run/dpdk/spdk3/config 00:40:15.871 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:40:15.871 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:40:15.871 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:40:15.871 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:40:15.871 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:40:15.871 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:40:15.871 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:40:15.871 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:40:15.871 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:40:15.872 Removing: /var/run/dpdk/spdk3/hugepage_info 00:40:15.872 Removing: /var/run/dpdk/spdk4/config 00:40:15.872 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:40:15.872 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:40:15.872 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:40:15.872 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:40:15.872 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:40:15.872 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:40:15.872 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:40:15.872 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:40:15.872 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:40:15.872 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:40:15.872 Removing: /dev/shm/bdev_svc_trace.1 00:40:15.872 Removing: /dev/shm/nvmf_trace.0 00:40:15.872 Removing: /dev/shm/spdk_tgt_trace.pid4084970 00:40:15.872 Removing: /var/run/dpdk/spdk0 00:40:15.872 Removing: /var/run/dpdk/spdk1 00:40:15.872 Removing: /var/run/dpdk/spdk2 00:40:15.872 Removing: /var/run/dpdk/spdk3 00:40:15.872 Removing: /var/run/dpdk/spdk4 00:40:15.872 Removing: /var/run/dpdk/spdk_pid100861 00:40:15.872 Removing: /var/run/dpdk/spdk_pid102704 00:40:15.872 Removing: /var/run/dpdk/spdk_pid102717 00:40:15.872 Removing: /var/run/dpdk/spdk_pid102952 00:40:15.872 Removing: /var/run/dpdk/spdk_pid103073 00:40:15.872 Removing: /var/run/dpdk/spdk_pid103528 00:40:15.872 Removing: /var/run/dpdk/spdk_pid105309 00:40:15.872 Removing: /var/run/dpdk/spdk_pid106133 00:40:15.872 Removing: /var/run/dpdk/spdk_pid106794 00:40:15.872 Removing: /var/run/dpdk/spdk_pid108905 00:40:15.872 Removing: /var/run/dpdk/spdk_pid109388 00:40:15.872 Removing: /var/run/dpdk/spdk_pid109901 00:40:15.872 Removing: /var/run/dpdk/spdk_pid114188 00:40:15.872 Removing: /var/run/dpdk/spdk_pid119820 00:40:15.872 Removing: /var/run/dpdk/spdk_pid119821 00:40:15.872 Removing: /var/run/dpdk/spdk_pid119822 00:40:15.872 Removing: /var/run/dpdk/spdk_pid123630 00:40:15.872 Removing: /var/run/dpdk/spdk_pid131992 00:40:15.872 Removing: /var/run/dpdk/spdk_pid135829 00:40:15.872 Removing: /var/run/dpdk/spdk_pid142191 00:40:15.872 Removing: /var/run/dpdk/spdk_pid143905 00:40:15.872 Removing: /var/run/dpdk/spdk_pid145488 00:40:15.872 Removing: /var/run/dpdk/spdk_pid147062 00:40:15.872 Removing: /var/run/dpdk/spdk_pid151769 00:40:15.872 Removing: /var/run/dpdk/spdk_pid156158 00:40:15.872 Removing: /var/run/dpdk/spdk_pid163535 00:40:15.872 Removing: /var/run/dpdk/spdk_pid163685 00:40:15.872 Removing: /var/run/dpdk/spdk_pid168266 00:40:15.872 Removing: /var/run/dpdk/spdk_pid168499 00:40:15.872 Removing: /var/run/dpdk/spdk_pid168732 00:40:15.872 Removing: /var/run/dpdk/spdk_pid169183 00:40:15.872 
Removing: /var/run/dpdk/spdk_pid169190 00:40:15.872 Removing: /var/run/dpdk/spdk_pid173789 00:40:15.872 Removing: /var/run/dpdk/spdk_pid174287 00:40:15.872 Removing: /var/run/dpdk/spdk_pid178854 00:40:15.872 Removing: /var/run/dpdk/spdk_pid181455 00:40:15.872 Removing: /var/run/dpdk/spdk_pid187040 00:40:15.872 Removing: /var/run/dpdk/spdk_pid197879 00:40:15.872 Removing: /var/run/dpdk/spdk_pid197881 00:40:15.872 Removing: /var/run/dpdk/spdk_pid216485 00:40:15.872 Removing: /var/run/dpdk/spdk_pid216776 00:40:15.872 Removing: /var/run/dpdk/spdk_pid222816 00:40:15.872 Removing: /var/run/dpdk/spdk_pid223019 00:40:15.872 Removing: /var/run/dpdk/spdk_pid228249 00:40:15.872 Removing: /var/run/dpdk/spdk_pid228938 00:40:15.872 Removing: /var/run/dpdk/spdk_pid229412 00:40:15.872 Removing: /var/run/dpdk/spdk_pid229892 00:40:15.872 Removing: /var/run/dpdk/spdk_pid230625 00:40:15.872 Removing: /var/run/dpdk/spdk_pid231211 00:40:15.872 Removing: /var/run/dpdk/spdk_pid231792 00:40:15.872 Removing: /var/run/dpdk/spdk_pid232266 00:40:15.872 Removing: /var/run/dpdk/spdk_pid236675 00:40:15.872 Removing: /var/run/dpdk/spdk_pid242414 00:40:15.872 Removing: /var/run/dpdk/spdk_pid248134 00:40:16.132 Removing: /var/run/dpdk/spdk_pid252231 00:40:16.132 Removing: /var/run/dpdk/spdk_pid256492 00:40:16.132 Removing: /var/run/dpdk/spdk_pid266228 00:40:16.132 Removing: /var/run/dpdk/spdk_pid266716 00:40:16.132 Removing: /var/run/dpdk/spdk_pid270982 00:40:16.132 Removing: /var/run/dpdk/spdk_pid271231 00:40:16.132 Removing: /var/run/dpdk/spdk_pid275517 00:40:16.132 Removing: /var/run/dpdk/spdk_pid281176 00:40:16.132 Removing: /var/run/dpdk/spdk_pid284091 00:40:16.132 Removing: /var/run/dpdk/spdk_pid294477 00:40:16.132 Removing: /var/run/dpdk/spdk_pid30073 00:40:16.132 Removing: /var/run/dpdk/spdk_pid311229 00:40:16.132 Removing: /var/run/dpdk/spdk_pid315010 00:40:16.132 Removing: /var/run/dpdk/spdk_pid316786 00:40:16.132 Removing: /var/run/dpdk/spdk_pid317701 00:40:16.132 Removing: 
/var/run/dpdk/spdk_pid322439 00:40:16.132 Removing: /var/run/dpdk/spdk_pid325125 00:40:16.132 Removing: /var/run/dpdk/spdk_pid333624 00:40:16.132 Removing: /var/run/dpdk/spdk_pid333634 00:40:16.132 Removing: /var/run/dpdk/spdk_pid338821 00:40:16.132 Removing: /var/run/dpdk/spdk_pid340676 00:40:16.132 Removing: /var/run/dpdk/spdk_pid342621 00:40:16.132 Removing: /var/run/dpdk/spdk_pid343868 00:40:16.132 Removing: /var/run/dpdk/spdk_pid345858 00:40:16.132 Removing: /var/run/dpdk/spdk_pid346923 00:40:16.132 Removing: /var/run/dpdk/spdk_pid35408 00:40:16.132 Removing: /var/run/dpdk/spdk_pid355698 00:40:16.132 Removing: /var/run/dpdk/spdk_pid356157 00:40:16.132 Removing: /var/run/dpdk/spdk_pid356619 00:40:16.132 Removing: /var/run/dpdk/spdk_pid359105 00:40:16.132 Removing: /var/run/dpdk/spdk_pid359569 00:40:16.132 Removing: /var/run/dpdk/spdk_pid360031 00:40:16.132 Removing: /var/run/dpdk/spdk_pid363861 00:40:16.132 Removing: /var/run/dpdk/spdk_pid363866 00:40:16.132 Removing: /var/run/dpdk/spdk_pid365390 00:40:16.132 Removing: /var/run/dpdk/spdk_pid365940 00:40:16.132 Removing: /var/run/dpdk/spdk_pid365948 00:40:16.132 Removing: /var/run/dpdk/spdk_pid4082601 00:40:16.132 Removing: /var/run/dpdk/spdk_pid4083666 00:40:16.132 Removing: /var/run/dpdk/spdk_pid4084970 00:40:16.132 Removing: /var/run/dpdk/spdk_pid4085541 00:40:16.132 Removing: /var/run/dpdk/spdk_pid4086450 00:40:16.132 Removing: /var/run/dpdk/spdk_pid4086568 00:40:16.132 Removing: /var/run/dpdk/spdk_pid4087542 00:40:16.132 Removing: /var/run/dpdk/spdk_pid4087689 00:40:16.132 Removing: /var/run/dpdk/spdk_pid4087916 00:40:16.132 Removing: /var/run/dpdk/spdk_pid4089650 00:40:16.132 Removing: /var/run/dpdk/spdk_pid4090922 00:40:16.132 Removing: /var/run/dpdk/spdk_pid4091213 00:40:16.132 Removing: /var/run/dpdk/spdk_pid4091500 00:40:16.132 Removing: /var/run/dpdk/spdk_pid4091802 00:40:16.132 Removing: /var/run/dpdk/spdk_pid4092097 00:40:16.132 Removing: /var/run/dpdk/spdk_pid4092355 00:40:16.132 Removing: 
/var/run/dpdk/spdk_pid4092544 00:40:16.132 Removing: /var/run/dpdk/spdk_pid4092858 00:40:16.132 Removing: /var/run/dpdk/spdk_pid4093634 00:40:16.132 Removing: /var/run/dpdk/spdk_pid4096634 00:40:16.132 Removing: /var/run/dpdk/spdk_pid4096893 00:40:16.132 Removing: /var/run/dpdk/spdk_pid4097155 00:40:16.132 Removing: /var/run/dpdk/spdk_pid4097166 00:40:16.132 Removing: /var/run/dpdk/spdk_pid4097654 00:40:16.132 Removing: /var/run/dpdk/spdk_pid4097663 00:40:16.132 Removing: /var/run/dpdk/spdk_pid4098151 00:40:16.132 Removing: /var/run/dpdk/spdk_pid4098157 00:40:16.132 Removing: /var/run/dpdk/spdk_pid4098539 00:40:16.132 Removing: /var/run/dpdk/spdk_pid4098658 00:40:16.132 Removing: /var/run/dpdk/spdk_pid4098914 00:40:16.132 Removing: /var/run/dpdk/spdk_pid4098921 00:40:16.132 Removing: /var/run/dpdk/spdk_pid4099487 00:40:16.132 Removing: /var/run/dpdk/spdk_pid4099736 00:40:16.391 Removing: /var/run/dpdk/spdk_pid4100029 00:40:16.391 Removing: /var/run/dpdk/spdk_pid4103970 00:40:16.391 Removing: /var/run/dpdk/spdk_pid4108274 00:40:16.391 Removing: /var/run/dpdk/spdk_pid4118864 00:40:16.391 Removing: /var/run/dpdk/spdk_pid4119551 00:40:16.391 Removing: /var/run/dpdk/spdk_pid4123871 00:40:16.391 Removing: /var/run/dpdk/spdk_pid4124327 00:40:16.391 Removing: /var/run/dpdk/spdk_pid4128621 00:40:16.391 Removing: /var/run/dpdk/spdk_pid41288 00:40:16.391 Removing: /var/run/dpdk/spdk_pid4134527 00:40:16.391 Removing: /var/run/dpdk/spdk_pid4137141 00:40:16.391 Removing: /var/run/dpdk/spdk_pid4147609 00:40:16.391 Removing: /var/run/dpdk/spdk_pid4166077 00:40:16.391 Removing: /var/run/dpdk/spdk_pid4170034 00:40:16.391 Removing: /var/run/dpdk/spdk_pid4171648 00:40:16.391 Removing: /var/run/dpdk/spdk_pid4172592 00:40:16.391 Removing: /var/run/dpdk/spdk_pid4177580 00:40:16.391 Removing: /var/run/dpdk/spdk_pid47804 00:40:16.391 Removing: /var/run/dpdk/spdk_pid47824 00:40:16.391 Removing: /var/run/dpdk/spdk_pid48719 00:40:16.391 Removing: /var/run/dpdk/spdk_pid49632 00:40:16.391 
Removing: /var/run/dpdk/spdk_pid50552 00:40:16.391 Removing: /var/run/dpdk/spdk_pid51018 00:40:16.391 Removing: /var/run/dpdk/spdk_pid51027 00:40:16.391 Removing: /var/run/dpdk/spdk_pid51278 00:40:16.391 Removing: /var/run/dpdk/spdk_pid51484 00:40:16.391 Removing: /var/run/dpdk/spdk_pid51492 00:40:16.391 Removing: /var/run/dpdk/spdk_pid52413 00:40:16.391 Removing: /var/run/dpdk/spdk_pid53298 00:40:16.391 Removing: /var/run/dpdk/spdk_pid54057 00:40:16.391 Removing: /var/run/dpdk/spdk_pid54702 00:40:16.391 Removing: /var/run/dpdk/spdk_pid54710 00:40:16.391 Removing: /var/run/dpdk/spdk_pid54940 00:40:16.391 Removing: /var/run/dpdk/spdk_pid55976 00:40:16.391 Removing: /var/run/dpdk/spdk_pid57017 00:40:16.391 Removing: /var/run/dpdk/spdk_pid65883 00:40:16.391 Removing: /var/run/dpdk/spdk_pid94217 00:40:16.391 Removing: /var/run/dpdk/spdk_pid98871 00:40:16.391 Clean 00:40:16.391 12:23:50 -- common/autotest_common.sh@1453 -- # return 0 00:40:16.391 12:23:50 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:40:16.391 12:23:50 -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:16.391 12:23:50 -- common/autotest_common.sh@10 -- # set +x 00:40:16.391 12:23:50 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:40:16.391 12:23:50 -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:16.391 12:23:50 -- common/autotest_common.sh@10 -- # set +x 00:40:16.651 12:23:50 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:40:16.651 12:23:50 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:40:16.651 12:23:50 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:40:16.652 12:23:50 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:40:16.652 12:23:50 -- spdk/autotest.sh@398 -- # hostname 00:40:16.652 12:23:50 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
--rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:40:16.652 geninfo: WARNING: invalid characters removed from testname! 00:40:38.597 12:24:11 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:39.536 12:24:13 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:41.444 12:24:15 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:43.344 12:24:17 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:45.299 12:24:19 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:47.208 12:24:21 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:49.114 12:24:23 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:40:49.114 12:24:23 -- spdk/autorun.sh@1 -- $ timing_finish 00:40:49.114 12:24:23 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:40:49.114 12:24:23 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:40:49.114 12:24:23 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:40:49.114 12:24:23 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:40:49.114 + [[ -n 4005376 ]] 00:40:49.114 + sudo kill 4005376 00:40:49.125 [Pipeline] } 00:40:49.142 [Pipeline] // stage 00:40:49.149 [Pipeline] } 00:40:49.168 
[Pipeline] // timeout 00:40:49.175 [Pipeline] } 00:40:49.192 [Pipeline] // catchError 00:40:49.200 [Pipeline] } 00:40:49.218 [Pipeline] // wrap 00:40:49.226 [Pipeline] } 00:40:49.243 [Pipeline] // catchError 00:40:49.253 [Pipeline] stage 00:40:49.255 [Pipeline] { (Epilogue) 00:40:49.270 [Pipeline] catchError 00:40:49.272 [Pipeline] { 00:40:49.288 [Pipeline] echo 00:40:49.290 Cleanup processes 00:40:49.296 [Pipeline] sh 00:40:49.587 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:40:49.587 377138 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:40:49.602 [Pipeline] sh 00:40:49.889 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:40:49.889 ++ grep -v 'sudo pgrep' 00:40:49.889 ++ awk '{print $1}' 00:40:49.889 + sudo kill -9 00:40:49.889 + true 00:40:49.901 [Pipeline] sh 00:40:50.187 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:41:02.414 [Pipeline] sh 00:41:02.702 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:41:02.702 Artifacts sizes are good 00:41:02.716 [Pipeline] archiveArtifacts 00:41:02.724 Archiving artifacts 00:41:02.847 [Pipeline] sh 00:41:03.132 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:41:03.146 [Pipeline] cleanWs 00:41:03.157 [WS-CLEANUP] Deleting project workspace... 00:41:03.157 [WS-CLEANUP] Deferred wipeout is used... 00:41:03.165 [WS-CLEANUP] done 00:41:03.167 [Pipeline] } 00:41:03.185 [Pipeline] // catchError 00:41:03.197 [Pipeline] sh 00:41:03.482 + logger -p user.info -t JENKINS-CI 00:41:03.491 [Pipeline] } 00:41:03.504 [Pipeline] // stage 00:41:03.510 [Pipeline] } 00:41:03.525 [Pipeline] // node 00:41:03.530 [Pipeline] End of Pipeline 00:41:03.564 Finished: SUCCESS